Initial wiki md to rst auto conversion

This commit is contained in:
George Melikov
2020-05-16 16:39:45 +03:00
commit c47d449832
57 changed files with 18232 additions and 0 deletions


@@ -0,0 +1,6 @@
- `Aaron Toponce's ZFS on Linux User
Guide <https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/>`__
- `OpenZFS System
Administration <http://open-zfs.org/wiki/System_Administration>`__
- `Oracle Solaris ZFS Administration
Guide <http://docs.oracle.com/cd/E19253-01/819-5461/>`__

docs/Async-Write.rst Normal file

@@ -0,0 +1,36 @@
Async Writes
~~~~~~~~~~~~
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
::

              |              o---------| <-- zfs_vdev_async_write_max_active
         ^    |             /^         |
         |    |            / |         |
       active |           /  |         |
        I/O   |          /   |         |
       count  |         /    |         |
              |        /     |         |
              |-------o      |         | <-- zfs_vdev_async_write_min_active
             0|_______^______|_________|
              0%      |      |       100% of zfs_dirty_data_max
                      |      |
                      |      `-- zfs_vdev_async_write_active_max_dirty_percent
                      `--------- zfs_vdev_async_write_active_min_dirty_percent
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum
at the specified maximum percentage of the dirty data allowed in the
pool.
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between
zfs_vdev_async_write_active_min_dirty_percent and
zfs_vdev_async_write_active_max_dirty_percent. If it exceeds the maximum
percentage, this indicates that the rate of incoming data is greater
than the rate that the backend storage can handle. In this case, we must
further throttle incoming writes, as described in the next section.
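The four corner values of this function are exposed as ``zfs`` module
parameters. As an illustrative sketch (the value below is an example
only, not a recommendation), they can be inspected and adjusted at
runtime through sysfs:

::

   # Show the current async write scheduler tunables.
   grep . /sys/module/zfs/parameters/zfs_vdev_async_write_*

   # Example only: raise the maximum number of concurrent async writes.
   echo 15 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active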

docs/Buildbot-Options.rst Normal file

@@ -0,0 +1,245 @@
There are a number of ways to control the ZFS Buildbot at a commit
level. This page provides a summary of various options that the ZFS
Buildbot supports and how it impacts testing. More detailed information
regarding its implementation can be found at the `ZFS Buildbot Github
page <https://github.com/zfsonlinux/zfs-buildbot>`__.
Choosing Builders
-----------------
By default, all commits in your ZFS pull request are compiled by the
BUILD builders. Additionally, the top commit of your ZFS pull request is
tested by TEST builders. However, there is the option to override which
types of builders should be used on a per-commit basis. In this case,
you can add
``Requires-builders: <none|all|style|build|arch|distro|test|perf|coverage|unstable>``
to your commit message. A comma separated list of options can be
provided. Supported options are:
- ``all``: This commit should be built by all available builders
- ``none``: This commit should not be built by any builders
- ``style``: This commit should be built by STYLE builders
- ``build``: This commit should be built by all BUILD builders
- ``arch``: This commit should be built by BUILD builders tagged as
'Architectures'
- ``distro``: This commit should be built by BUILD builders tagged as
'Distributions'
- ``test``: This commit should be built and tested by the TEST builders
(excluding the Coverage TEST builders)
- ``perf``: This commit should be built and tested by the PERF builders
- ``coverage`` : This commit should be built and tested by the Coverage
TEST builders
- ``unstable`` : This commit should be built and tested by the Unstable
TEST builders (currently only the Fedora Rawhide TEST builder)
A couple of examples of how to use ``Requires-builders:`` in commit
messages can be found below.
.. _preventing-a-commit-from-being-built-and-tested:
Preventing a commit from being built and tested.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-builders: none
.. _submitting-a-commit-to-style-and-test-builders-only:
Submitting a commit to STYLE and TEST builders only.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-builders: style test
Requiring SPL Versions
----------------------
Currently, the ZFS Buildbot attempts to choose the correct SPL branch to
build based on a pull request's base branch. In the cases where a
specific SPL version needs to be built, the ZFS buildbot supports
specifying an SPL version for pull request testing. By opening a pull
request against ZFS and adding ``Requires-spl:`` in a commit message,
you can instruct the buildbot to use a specific SPL version. Below are
examples of commit messages that specify the SPL version.
Build SPL from a specific pull request
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-spl: refs/pull/123/head
Build SPL branch ``spl-branch-name`` from ``zfsonlinux/spl`` repository
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-spl: spl-branch-name
Requiring Kernel Version
------------------------
Currently, Kernel.org builders will clone and build the master branch of
Linux. In cases where a specific version of the Linux kernel needs to be
built, the ZFS buildbot supports specifying the Linux kernel to be built
via commit message. By opening a pull request against ZFS and adding
``Requires-kernel:`` in a commit message, you can instruct the buildbot
to use a specific Linux kernel. Below is an example commit message that
specifies a specific Linux kernel tag.
.. _build-linux-kernel-version-414:
Build Linux Kernel Version 4.14
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Requires-kernel: v4.14
Build Steps Overrides
---------------------
Each builder will execute or skip build steps based on its default
preferences. In some scenarios, it might be possible to skip various
build steps. The ZFS buildbot supports overriding the defaults of all
builders in a commit message. The available overrides are:
- ``Build-linux: <Yes|No>``: All builders should build Linux for this
commit
- ``Build-lustre: <Yes|No>``: All builders should build Lustre for this
commit
- ``Build-spl: <Yes|No>``: All builders should build the SPL for this
commit
- ``Build-zfs: <Yes|No>``: All builders should build ZFS for this
commit
- ``Built-in: <Yes|No>``: All Linux builds should build in SPL and ZFS
- ``Check-lint: <Yes|No>``: All builders should perform lint checks for
this commit
- ``Configure-lustre: <options>``: Provide ``<options>`` as configure
flags when building Lustre
- ``Configure-spl: <options>``: Provide ``<options>`` as configure
flags when building the SPL
- ``Configure-zfs: <options>``: Provide ``<options>`` as configure
flags when building ZFS
A couple of examples of how to use overrides in commit messages can be
found below.
Skip building the SPL and build Lustre without ldiskfs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Build-lustre: Yes
Configure-lustre: --disable-ldiskfs
Build-spl: No
Build ZFS Only
~~~~~~~~~~~~~~
::
This is a commit message
This text is part of the commit message body.
Signed-off-by: Contributor <contributor@email.com>
Build-lustre: No
Build-spl: No
Configuring Tests with the TEST File
------------------------------------
At the top level of the ZFS source tree, there is the
`TEST file <https://github.com/zfsonlinux/zfs/blob/master/TEST>`__ which
contains variables that control if and how a specific test should run.
Below is a list of each variable and a brief description of what each
variable controls; an example fragment follows the list.
- ``TEST_PREPARE_WATCHDOG`` - Enables the Linux kernel watchdog
- ``TEST_PREPARE_SHARES`` - Start NFS and Samba servers
- ``TEST_SPLAT_SKIP`` - Determines if ``splat`` testing is skipped
- ``TEST_SPLAT_OPTIONS`` - Command line options to provide to ``splat``
- ``TEST_ZTEST_SKIP`` - Determines if ``ztest`` testing is skipped
- ``TEST_ZTEST_TIMEOUT`` - The length of time ``ztest`` should run
- ``TEST_ZTEST_DIR`` - Directory where ``ztest`` will create vdevs
- ``TEST_ZTEST_OPTIONS`` - Options to pass to ``ztest``
- ``TEST_ZTEST_CORE_DIR`` - Directory for ``ztest`` to store core dumps
- ``TEST_ZIMPORT_SKIP`` - Determines if ``zimport`` testing is skipped
- ``TEST_ZIMPORT_DIR`` - Directory used during ``zimport``
- ``TEST_ZIMPORT_VERSIONS`` - Source versions to test
- ``TEST_ZIMPORT_POOLS`` - Names of the pools for ``zimport`` to use
for testing
- ``TEST_ZIMPORT_OPTIONS`` - Command line options to provide to
``zimport``
- ``TEST_XFSTESTS_SKIP`` - Determines if ``xfstest`` testing is skipped
- ``TEST_XFSTESTS_URL`` - URL to download ``xfstest`` from
- ``TEST_XFSTESTS_VER`` - Name of the tarball to download from
``TEST_XFSTESTS_URL``
- ``TEST_XFSTESTS_POOL`` - Name of pool to create and used by
``xfstest``
- ``TEST_XFSTESTS_FS`` - Name of dataset for use by ``xfstest``
- ``TEST_XFSTESTS_VDEV`` - Name of the vdev used by ``xfstest``
- ``TEST_XFSTESTS_OPTIONS`` - Command line options to provide to
``xfstest``
- ``TEST_ZFSTESTS_SKIP`` - Determines if ``zfs-tests`` testing is
skipped
- ``TEST_ZFSTESTS_DIR`` - Directory to store files and loopback devices
- ``TEST_ZFSTESTS_DISKS`` - Space delimited list of disks that
``zfs-tests`` is allowed to use
- ``TEST_ZFSTESTS_DISKSIZE`` - File size of file based vdevs used by
``zfs-tests``
- ``TEST_ZFSTESTS_ITERS`` - Number of times ``test-runner`` should
execute its set of tests
- ``TEST_ZFSTESTS_OPTIONS`` - Options to provide ``zfs-tests``
- ``TEST_ZFSTESTS_RUNFILE`` - The runfile to use when running
``zfs-tests``
- ``TEST_ZFSTESTS_TAGS`` - List of tags to provide to ``test-runner``
- ``TEST_ZFSSTRESS_SKIP`` - Determines if ``zfsstress`` testing is
skipped
- ``TEST_ZFSSTRESS_URL`` - URL to download ``zfsstress`` from
- ``TEST_ZFSSTRESS_VER`` - Name of the tarball to download from
``TEST_ZFSSTRESS_URL``
- ``TEST_ZFSSTRESS_RUNTIME`` - Duration to run ``runstress.sh``
- ``TEST_ZFSSTRESS_POOL`` - Name of pool to create and use for
``zfsstress`` testing
- ``TEST_ZFSSTRESS_FS`` - Name of dataset for use during ``zfsstress``
tests
- ``TEST_ZFSSTRESS_FSOPT`` - File system options to provide to
``zfsstress``
- ``TEST_ZFSSTRESS_VDEV`` - Directory to store vdevs for use during
``zfsstress`` tests
- ``TEST_ZFSSTRESS_OPTIONS`` - Command line options to provide to
``runstress.sh``
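As a hypothetical illustration (the values below are invented for the
example, not recommendations), a fragment of the ``TEST`` file might
look like:

::

   # Skip ztest, run the ZFS Test Suite once with 4G file-based vdevs.
   TEST_ZTEST_SKIP="yes"
   TEST_ZFSTESTS_SKIP="no"
   TEST_ZFSTESTS_ITERS="1"
   TEST_ZFSTESTS_DISKSIZE="4G"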

docs/Building-ZFS.rst Normal file

@@ -0,0 +1,243 @@
GitHub Repositories
~~~~~~~~~~~~~~~~~~~
The official source for ZFS on Linux is maintained at GitHub by the
`zfsonlinux <https://github.com/zfsonlinux/>`__ organization. The
project consists of two primary git repositories named
`spl <https://github.com/zfsonlinux/spl>`__ and
`zfs <https://github.com/zfsonlinux/zfs>`__, both of which are required
to build ZFS on Linux.
**NOTE:** The SPL was merged into the
`zfs <https://github.com/zfsonlinux/zfs>`__ repository; the last major
release with a separate SPL is ``0.7``.
- **SPL**: The SPL is a thin shim layer which is responsible for
implementing the fundamental interfaces required by OpenZFS. It's
this layer which allows OpenZFS to be used across multiple platforms.
- **ZFS**: The ZFS repository contains a copy of the upstream OpenZFS
code which has been adapted and extended for Linux. The vast majority
of the core OpenZFS code is self-contained and can be used without
modification.
Installing Dependencies
~~~~~~~~~~~~~~~~~~~~~~~
The first thing you'll need to do is prepare your environment by
installing a full development tool chain. In addition, development
headers for both the kernel and the following libraries must be
available. It is important to note that if the development kernel
headers for the currently running kernel aren't installed, the modules
won't compile properly.
The following dependencies should be installed to build the latest ZFS
0.8 release.
- **RHEL/CentOS 7**:
.. code:: sh
sudo yum install epel-release gcc make autoconf automake libtool rpm-build dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel
- **RHEL/CentOS 8, Fedora**:
.. code:: sh
sudo dnf install gcc make autoconf automake libtool rpm-build dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel
- **Debian, Ubuntu**:
.. code:: sh
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-$(uname -r) python3 python3-dev python3-setuptools python3-cffi libffi-dev
Build Options
~~~~~~~~~~~~~
There are two options for building ZFS on Linux, the correct one largely
depends on your requirements.
- **Packages**: Often it can be useful to build custom packages from
git which can be installed on a system. This is the best way to
perform integration testing with systemd, dracut, and udev. The
downside to using packages is that it greatly increases the time required
to build, install, and test a change.
- **In-tree**: Development can be done entirely in the SPL and ZFS
source trees. This speeds up development by allowing developers to
rapidly iterate on a patch. When working in-tree developers can
leverage incremental builds, load/unload kernel modules, execute
utilities, and verify all their changes with the ZFS Test Suite.
The remainder of this page focuses on the **in-tree** option which is
the recommended method of development for the majority of changes. See
the [[custom-packages]] page for additional information on building
custom packages.
Developing In-Tree
~~~~~~~~~~~~~~~~~~
Clone from GitHub
^^^^^^^^^^^^^^^^^
Start by cloning the SPL and ZFS repositories from GitHub. The
repositories have a **master** branch for development and a series of
**\*-release** branches for tagged releases. After checking out the
repository your clone will default to the master branch. Tagged releases
may be built by checking out spl/zfs-x.y.z tags with matching version
numbers or matching release branches. Avoid using mismatched versions;
this can result in build failures due to interface changes.
**NOTE:** The SPL was merged into the
`zfs <https://github.com/zfsonlinux/zfs>`__ repository; the last release
with a separate SPL is ``0.7``.
::
git clone https://github.com/zfsonlinux/zfs
If you need the 0.7 release or older:
::
git clone https://github.com/zfsonlinux/spl
Configure and Build
^^^^^^^^^^^^^^^^^^^
For developers working on a change, always create a new topic branch
based off of master. This will make it easy to open a pull request with
your change later. The master branch is kept stable with extensive
`regression testing <http://build.zfsonlinux.org/>`__ of every pull
request before and after it's merged. Every effort is made to catch
defects as early as possible and to keep them out of the tree.
Developers should be comfortable frequently rebasing their work against
the latest master branch.
If you want to build the 0.7 release or older, you should compile SPL first:
::
cd ./spl
git checkout master
sh autogen.sh
./configure
make -s -j$(nproc)
In this example we'll use the master branch and walk through a stock
**in-tree** build, so we don't need to build SPL separately. Start by
checking out the desired branch, then build the ZFS and SPL source in
the traditional autotools fashion.
::
cd ./zfs
git checkout master
sh autogen.sh
./configure
make -s -j$(nproc)
| **tip:** ``--with-linux=PATH`` and ``--with-linux-obj=PATH`` can be
passed to configure to specify a kernel installed in a non-default
location. This option is also supported when building ZFS.
| **tip:** ``--enable-debug`` can be set to enable all ASSERTs and
additional correctness tests. This option is also supported when
building ZFS.
| **tip:** for version ``<=0.7`` ``--with-spl=PATH`` and
``--with-spl-obj=PATH``, where ``PATH`` is a full path, can be passed
to configure if it is unable to locate the SPL.
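For example (the kernel path below is illustrative, adjust it for your
system), a debug build against a specific kernel might be configured as:

::

   ./configure --enable-debug \
       --with-linux=/usr/src/linux-headers-5.4.0 \
       --with-linux-obj=/usr/src/linux-headers-5.4.0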
**Optional** Build packages
::
make deb #example for Debian/Ubuntu
Install
^^^^^^^
You can run ``zfs-tests.sh`` without installing ZFS, see below. If you
have reason to install ZFS after building it, pay attention to how your
distribution handles kernel modules. On Ubuntu, for example, the modules
from this repository install in the ``extra`` kernel module path, which
is not in the standard ``depmod`` search path. Therefore, for the
duration of your testing, edit ``/etc/depmod.d/ubuntu.conf`` and add
``extra`` to the beginning of the search path.
You may then install using
``sudo make install; sudo ldconfig; sudo depmod``. You'd uninstall with
``sudo make uninstall; sudo ldconfig; sudo depmod``.
.. _running-zloopsh-and-zfs-testssh:
Running zloop.sh and zfs-tests.sh
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you wish to run the ZFS Test Suite (ZTS), then ``ksh`` and a few
additional utilities must be installed.
- **RHEL/CentOS 7:**
.. code:: sh
sudo yum install ksh bc fio acl sysstat mdadm lsscsi parted attr dbench nfs-utils samba rng-tools pax perf
- **RHEL/CentOS 8, Fedora:**
.. code:: sh
sudo dnf install ksh bc fio acl sysstat mdadm lsscsi parted attr dbench nfs-utils samba rng-tools pax perf
- **Debian, Ubuntu:**
.. code:: sh
sudo apt install ksh bc fio acl sysstat mdadm lsscsi parted attr dbench nfs-kernel-server samba rng-tools pax linux-tools-common selinux-utils quota
There are a few helper scripts provided in the top-level scripts
directory designed to aid developers working with in-tree builds.
- **zfs-helpers.sh:** Certain functionality (e.g. /dev/zvol/) depends on
the ZFS provided udev helper scripts being installed on the system.
This script can be used to create symlinks on the system from the
installation location to the in-tree helper. These links must be in
place to successfully run the ZFS Test Suite. The **-i** and **-r**
options can be used to install and remove the symlinks.
::
sudo ./scripts/zfs-helpers.sh -i
- **zfs.sh:** The freshly built kernel modules can be loaded using
``zfs.sh``. This script can later be used to unload the kernel
modules with the **-u** option.
::
sudo ./scripts/zfs.sh
- **zloop.sh:** A wrapper to run ztest repeatedly with randomized
arguments. The ztest command is a user space stress test designed to
detect correctness issues by concurrently running a random set of
test cases. If a crash is encountered, the ztest logs, any associated
vdev files, and core file (if one exists) are collected and moved to
the output directory for analysis.
::
sudo ./scripts/zloop.sh
- **zfs-tests.sh:** A wrapper which can be used to launch the ZFS Test
Suite. Three loopback devices are created on top of sparse files
located in ``/var/tmp/`` and used for the regression test. Detailed
directions for the ZFS Test Suite can be found in the
`README <https://github.com/zfsonlinux/zfs/tree/master/tests>`__
located in the top-level tests directory.
::
./scripts/zfs-tests.sh -vx
**tip:** The **delegate** tests will be skipped unless group read
permission is set on the zfs directory and its parents.
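For example (paths illustrative, assuming the source tree lives in your
home directory), the required permissions could be granted with:

::

   chmod g+rx /home/user /home/user/zfs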

docs/Checksums.rst Normal file

@@ -0,0 +1,124 @@
Checksums and Their Use in ZFS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
End-to-end checksums are a key feature of ZFS and an important
differentiator for ZFS over other RAID implementations and filesystems.
Advantages of end-to-end checksums include:
- detects data corruption upon reading from media
- blocks that are detected as corrupt are automatically repaired if
possible, by using the RAID protection in suitably configured pools,
or redundant copies (see the zfs ``copies`` property)
- periodic scrubs can check data to detect and repair latent media
degradation (bit rot) and corruption from other sources
- checksums on ZFS replication streams, ``zfs send`` and
``zfs receive``, ensure the data received is not corrupted by
intervening storage or transport mechanisms
Checksum Algorithms
^^^^^^^^^^^^^^^^^^^
The checksum algorithms in ZFS can be changed for datasets (filesystems
or volumes). The checksum algorithm used for each block is stored in the
block pointer (metadata). The block checksum is calculated when the
block is written, so changing the algorithm only affects writes
occurring after the change.
The checksum algorithm for a dataset can be changed by setting the
``checksum`` property:
.. code:: bash
zfs set checksum=sha256 pool_name/dataset_name
+-----------+---------------+------------------------+--------------------------------+
| Checksum  | Ok for dedup  | Compatible with other  | Notes                          |
|           | and nopwrite? | ZFS implementations?   |                                |
+===========+===============+========================+================================+
| on        | see notes     | yes                    | ``on`` is shorthand for        |
|           |               |                        | ``fletcher4`` for non-deduped  |
|           |               |                        | datasets and ``sha256`` for    |
|           |               |                        | deduped datasets               |
+-----------+---------------+------------------------+--------------------------------+
| off       | no            | yes                    | Do not use ``off``             |
+-----------+---------------+------------------------+--------------------------------+
| fletcher2 | no            | yes                    | Deprecated implementation of   |
|           |               |                        | the Fletcher checksum; use     |
|           |               |                        | ``fletcher4`` instead          |
+-----------+---------------+------------------------+--------------------------------+
| fletcher4 | no            | yes                    | Fletcher algorithm, also used  |
|           |               |                        | for ``zfs send`` streams       |
+-----------+---------------+------------------------+--------------------------------+
| sha256    | yes           | yes                    | Default for deduped datasets   |
+-----------+---------------+------------------------+--------------------------------+
| noparity  | no            | yes                    | Do not use ``noparity``        |
+-----------+---------------+------------------------+--------------------------------+
| sha512    | yes           | requires pool feature  | salted ``sha512`` is currently |
|           |               | ``org.illumos:sha512`` | not supported for any          |
|           |               |                        | filesystem on boot pools       |
+-----------+---------------+------------------------+--------------------------------+
| skein     | yes           | requires pool feature  | salted ``skein`` is currently  |
|           |               | ``org.illumos:skein``  | not supported for any          |
|           |               |                        | filesystem on boot pools       |
+-----------+---------------+------------------------+--------------------------------+
| edonr     | yes           | requires pool feature  | salted ``edonr`` is currently  |
|           |               | ``org.illumos:edonr``  | not supported for any          |
|           |               |                        | filesystem on boot pools       |
+-----------+---------------+------------------------+--------------------------------+
Checksum Accelerators
^^^^^^^^^^^^^^^^^^^^^
ZFS has the ability to offload checksum operations to the Intel
QuickAssist Technology (QAT) adapters.
Checksum Microbenchmarks
^^^^^^^^^^^^^^^^^^^^^^^^
Some ZFS features use microbenchmarks when the ``zfs.ko`` kernel module
is loaded to determine the optimal algorithm for checksums. The results
of the microbenchmarks are observable in the ``/proc/spl/kstat/zfs``
directory. The winning algorithm is reported as the "fastest" and
becomes the default. The default can be overridden by setting zfs module
parameters.
========= ==================================== ========================
Checksum  Results Filename                     ``zfs`` module parameter
========= ==================================== ========================
Fletcher4 /proc/spl/kstat/zfs/fletcher_4_bench zfs_fletcher_4_impl
========= ==================================== ========================
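For example, the Fletcher4 results can be inspected, and the chosen
implementation overridden, as shown below (``sse2`` is just one possible
implementation name):

.. code:: bash

   cat /proc/spl/kstat/zfs/fletcher_4_bench
   echo sse2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl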
Disabling Checksums
^^^^^^^^^^^^^^^^^^^
While it may be tempting to disable checksums to improve CPU
performance, it is widely considered by the ZFS community to be an
extraordinarily bad idea. Don't disable checksums.

docs/Custom-Packages.rst Normal file

@@ -0,0 +1,204 @@
The following instructions assume you are building from an official
`release tarball <https://github.com/zfsonlinux/zfs/releases/latest>`__
(version 0.8.0 or newer) or directly from the `git
repository <https://github.com/zfsonlinux/zfs>`__. Most users should not
need to do this and should preferentially use the distribution packages.
As a general rule the distribution packages will be more tightly
integrated, widely tested, and better supported. However, if your
distribution of choice doesn't provide packages, or you're a developer
and want to roll your own, here's how to do it.
The first thing to be aware of is that the build system is capable of
generating several different types of packages. Which type of package
you choose depends on what's supported on your platform and exactly what
your needs are.
- **DKMS** packages contain only the source code and scripts for
rebuilding the kernel modules. When the DKMS package is installed
kernel modules will be built for all available kernels. Additionally,
when the kernel is upgraded new kernel modules will be automatically
built for that kernel. This is particularly convenient for desktop
systems which receive frequent kernel updates. The downside is that
because the DKMS packages build the kernel modules from source a full
development environment is required which may not be appropriate for
large deployments.
- **kmods** packages are binary kernel modules which are compiled
against a specific version of the kernel. This means that if you
update the kernel you must compile and install a new kmod package. If
you don't frequently update your kernel, or if you're managing a
large number of systems, then kmod packages are a good choice.
- **kABI-tracking kmod** packages are similar to standard binary kmods
and may be used with Enterprise Linux distributions like Red Hat and
CentOS. These distributions provide a stable kABI (Kernel Application
Binary Interface) which allows the same binary modules to be used
with new versions of the distribution provided kernel.
By default the build system will generate user packages and both DKMS
and kmod style kernel packages if possible. The user packages can be
used with either set of kernel packages and do not need to be rebuilt
when the kernel is updated. You can also streamline the build process by
building only the DKMS or kmod packages as shown below.
Be aware that when building directly from a git repository you must
first run the *autogen.sh* script to create the *configure* script. This
will require installing the GNU autotools packages for your
distribution. To perform any of the builds, you must install all the
necessary development tools and headers for your distribution.
It is important to note that if the development kernel headers for the
currently running kernel aren't installed, the modules won't compile
properly.
- `RHEL, CentOS and Fedora <#rhel-centos-and-fedora>`__
- `Debian and Ubuntu <#debian-and-ubuntu>`__
RHEL, CentOS and Fedora
-----------------------
Make sure that the required packages are installed to build the latest
ZFS 0.8 release:
- **RHEL/CentOS 7**:
.. code:: sh
sudo yum install epel-release gcc make autoconf automake libtool rpm-build dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python python2-devel python-setuptools python-cffi libffi-devel
- **RHEL/CentOS 8, Fedora**:
.. code:: sh
sudo dnf install gcc make autoconf automake libtool rpm-build kernel-rpm-macros dkms libtirpc-devel libblkid-devel libuuid-devel libudev-devel openssl-devel zlib-devel libaio-devel libattr-devel elfutils-libelf-devel kernel-devel-$(uname -r) python3 python3-devel python3-setuptools python3-cffi libffi-devel
`Get the source code <#get-the-source-code>`__.
DKMS
~~~~
Building rpm-based DKMS and user packages can be done as follows:
.. code:: sh
$ cd zfs
$ ./configure
$ make -j1 rpm-utils rpm-dkms
$ sudo yum localinstall *.$(uname -p).rpm *.noarch.rpm
kmod
~~~~
The key thing to know when building a kmod package is that a specific
Linux kernel must be specified. At configure time the build system will
make an educated guess as to which kernel you want to build against.
However, if configure is unable to locate your kernel development
headers, or you want to build against a different kernel, you must
specify the exact path with the *--with-linux* and *--with-linux-obj*
options.
.. code:: sh
$ cd zfs
$ ./configure
$ make -j1 rpm-utils rpm-kmod
$ sudo yum localinstall *.$(uname -p).rpm
kABI-tracking kmod
~~~~~~~~~~~~~~~~~~
The process for building kABI-tracking kmods is almost identical to
building normal kmods. However, it will only produce binaries which can
be used by multiple kernels if the distribution supports a stable kABI.
In order to request a kABI-tracking package the *--with-spec=redhat*
option must be passed to configure.
**NOTE:** This type of package is not available for Fedora.
.. code:: sh
$ cd zfs
$ ./configure --with-spec=redhat
$ make -j1 rpm-utils rpm-kmod
$ sudo yum localinstall *.$(uname -p).rpm
Debian and Ubuntu
-----------------
Make sure that the required packages are installed:
.. code:: sh
sudo apt install build-essential autoconf automake libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev linux-headers-$(uname -r) python3 python3-dev python3-setuptools python3-cffi libffi-dev
`Get the source code <#get-the-source-code>`__.
.. _kmod-1:
kmod
~~~~
The key thing to know when building a kmod package is that a specific
Linux kernel must be specified. At configure time the build system will
make an educated guess as to which kernel you want to build against.
However, if configure is unable to locate your kernel development
headers, or you want to build against a different kernel, you must
specify the exact path with the *--with-linux* and *--with-linux-obj*
options.
.. code:: sh
$ cd zfs
$ ./configure
$ make -j1 deb-utils deb-kmod
$ for file in *.deb; do sudo gdebi -q --non-interactive $file; done
.. _dkms-1:
DKMS
~~~~
Building deb-based DKMS and user packages can be done as follows:
.. code:: sh
$ sudo apt-get install dkms
$ cd zfs
$ ./configure
$ make -j1 deb-utils deb-dkms
$ for file in *.deb; do sudo gdebi -q --non-interactive $file; done
Get the Source Code
-------------------
Released Tarball
~~~~~~~~~~~~~~~~
The released tarball contains the latest fully tested and released
version of ZFS. This is the preferred source code location for use in
production systems. If you want to use the official released tarballs,
then use the following commands to fetch and prepare the source.
.. code:: sh
$ wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-x.y.z.tar.gz
$ tar -xzf zfs-x.y.z.tar.gz
Git Master Branch
~~~~~~~~~~~~~~~~~
The Git *master* branch contains the latest version of the software, and
will probably contain fixes that, for some reason, weren't included in
the released tarball. This is the preferred source code location for
developers who intend to modify ZFS. If you would like to use the git
version, you can clone it from Github and prepare the source like this.
.. code:: sh
$ git clone https://github.com/zfsonlinux/zfs.git
$ cd zfs
$ ./autogen.sh
Once the source has been prepared you'll need to decide what kind of
packages you're building and jump to the appropriate section above. Note
that not all package types are supported for all platforms.


@@ -0,0 +1,47 @@
This experimental guide has been made official at [[Debian Buster Root
on ZFS]].
If you have an existing system installed from the experimental guide,
adjust your sources:
::
vi /etc/apt/sources.list.d/buster-backports.list
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
vi /etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-dkms zfs-initramfs zfs-test zfsutils-linux zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
This will allow you to upgrade from the locally-built packages to the
official buster-backports packages.
You should set a root password before upgrading:
::
passwd
Apply updates:
::
apt update
apt dist-upgrade
Reboot:
::
reboot
If the bpool fails to import, then enter the rescue shell (which
requires a root password) and run:
::
zpool import -f bpool
zpool export bpool
reboot

File diff suppressed because it is too large.


@@ -0,0 +1,122 @@
Supported boot parameters
=========================
- rollback=<on|yes|1> Do a rollback of specified snapshot.
- zfs_debug=<on|yes|1> Debug the initrd script
- zfs_force=<on|yes|1> Force importing the pool. Should not be
necessary.
- zfs=<off|no|0> Don't try to import ANY pool, mount ANY filesystem or
even load the module.
- rpool=<pool> Use this pool for root pool.
- bootfs=<pool>/<dataset> Use this dataset for root filesystem.
- root=<pool>/<dataset> Use this dataset for root filesystem.
- root=ZFS=<pool>/<dataset> Use this dataset for root filesystem.
- root=zfs:<pool>/<dataset> Use this dataset for root filesystem.
- root=zfs:AUTO Try to detect both pool and rootfs
In all these cases, <dataset> could also be <dataset>@<snapshot>.
The reason there are so many supported boot options for finding the
root filesystem is that there are a lot of different ways to boot ZFS
out there, and I wanted to make sure I supported them all.
Pool imports
============
Import using /dev/disk/by-\*
----------------------------
If the variable USE_DISK_BY_ID is set in the file /etc/default/zfs, the
initrd will try to import using the /dev/disk/by-\* links. It will try
to import in this order:
1. /dev/disk/by-vdev
2. /dev/disk/by-\*
3. /dev
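For example, a sketch of the relevant line in /etc/default/zfs:

::

   USE_DISK_BY_ID='yes'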
Import using cache file
-----------------------
If all of these imports fail (or if USE_DISK_BY_ID is unset), it will
then try to import using the cache file.
Last ditch attempt at importing
-------------------------------
If that ALSO fails, it will try one more time, without any -d or -c
options.
Booting
=======
Booting from snapshot:
----------------------
Enter the snapshot for the root= parameter like in this example:
::
linux /ROOT/debian-1@/boot/vmlinuz-3.2.0-4-amd64 root=ZFS=rpool/ROOT/debian-1@some_snapshot ro boot=zfs $bootfs quiet
This will clone the snapshot rpool/ROOT/debian-1@some_snapshot into the
filesystem rpool/ROOT/debian-1_some_snapshot and use that as root
filesystem. The original filesystem and snapshot is left alone in this
case.
**BEWARE** that it will first blindly destroy the
rpool/ROOT/debian-1_some_snapshot filesystem before trying to clone the
snapshot into it again. So if you've booted from the same snapshot
previously and made some changes in that root filesystem, they will be
undone by the destruction of the filesystem.
Snapshot rollback
-----------------
From version 0.6.4-1-3 it is now also possible to specify rollback=1 to
do a rollback of the snapshot instead of cloning it. **BEWARE** that
this will destroy *all* snapshots done after the specified snapshot!
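For example, reusing the kernel command line from the previous section
(paths and names as in that example), a rollback boot entry might look
like:

::

   linux /ROOT/debian-1@/boot/vmlinuz-3.2.0-4-amd64 root=ZFS=rpool/ROOT/debian-1@some_snapshot rollback=1 ro boot=zfs $bootfs quiet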
Select snapshot dynamically
---------------------------
From version 0.6.4-1-3 it is also possible to specify an empty snapshot
name (such as root=rpool/ROOT/debian-1@). If so, the initrd script will
discover all snapshots below that filesystem and output a list of
snapshots for the user to choose from.
Booting from native encrypted filesystem
----------------------------------------
Although there is currently no support for native encryption in ZFS On
Linux, there is a patch floating around 'out there', and the initrd
supports loading a key and unlocking such an encrypted filesystem.
Separated filesystems
---------------------
Descended filesystems
~~~~~~~~~~~~~~~~~~~~~
If there are separate filesystems (for example a separate dataset for
/usr), the snapshot boot code will try to find the snapshot under each
filesystem and clone (or roll back) them.
Example:
::
rpool/ROOT/debian-1@some_snapshot
rpool/ROOT/debian-1/usr@some_snapshot
These will create the following filesystems respectively (if not doing a
rollback):
::
rpool/ROOT/debian-1_some_snapshot
rpool/ROOT/debian-1/usr_some_snapshot
The initrd code will use the mountpoint option (if any) in the original
(without the snapshot part) dataset to find *where* it should mount the
dataset. Or it will use the name of the dataset below the root
filesystem (rpool/ROOT/debian-1 in this example) for the mount point.
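To check which mountpoint the initrd code would use for a given dataset
(dataset name from the example above), something like the following can
be run:

::

   zfs get -H -o value mountpoint rpool/ROOT/debian-1/usr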

File diff suppressed because it is too large.

docs/Debian.rst Normal file

@@ -0,0 +1,67 @@
Official ZFS on Linux
`DKMS <https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support>`__
style packages are available from the `Debian GNU/Linux
repository <https://tracker.debian.org/pkg/zfs-linux>`__ for the
following configurations. The packages previously hosted at
archive.zfsonlinux.org will not be updated and are not recommended for
new installations.
**Debian Releases:** Stretch (oldstable), Buster (stable), and newer
(testing, sid)

**Architectures:** amd64
Table of contents
=================
- `Installation <#installation>`__
- `Related Links <#related-links>`__
Installation
------------
For Debian Buster, ZFS packages are included in the `contrib
repository <https://packages.debian.org/source/buster/zfs-linux>`__.
If you want to boot from ZFS, see [[Debian Buster Root on ZFS]] instead.
For troubleshooting existing installations on Stretch, see [[Debian
Stretch Root on ZFS]].
The `backports
repository <https://backports.debian.org/Instructions/>`__ often
provides newer releases of ZFS. You can use it as follows:
Add the backports repository:
::
# vi /etc/apt/sources.list.d/buster-backports.list
deb http://deb.debian.org/debian buster-backports main contrib
deb-src http://deb.debian.org/debian buster-backports main contrib
# vi /etc/apt/preferences.d/90_zfs
Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
Pin: release n=buster-backports
Pin-Priority: 990
Update the list of packages:
::
# apt update
Install the kernel headers and other dependencies:
::
# apt install --yes dpkg-dev linux-headers-$(uname -r) linux-image-amd64
Install the zfs packages:
::
# apt-get install zfs-dkms zfsutils-linux
Related Links
-------------
- [[Debian GNU Linux initrd documentation]]
- [[Debian Buster Root on ZFS]]

docs/Debugging.rst Normal file

@@ -0,0 +1,2 @@
The future home for documenting ZFS on Linux development and debugging
techniques.


@@ -0,0 +1,16 @@
Developer Resources
===================
| [[Custom Packages]]
| [[Building ZFS]]
| `Buildbot
Status <http://build.zfsonlinux.org/tgrid?length=100&branch=master&category=Tests&rev_order=desc>`__
| `Buildbot
Options <https://github.com/zfsonlinux/zfs/wiki/Buildbot-Options>`__
| `OpenZFS
Tracking <http://build.zfsonlinux.org/openzfs-tracking.html>`__
| [[OpenZFS Patches]]
| [[OpenZFS Exceptions]]
| `OpenZFS
Documentation <http://open-zfs.org/wiki/Developer_resources>`__
| [[Git and GitHub for beginners]]

docs/FAQ.rst Normal file

@@ -0,0 +1,741 @@
Table Of Contents
-----------------
- `What is ZFS on Linux <#what-is-zfs-on-linux>`__
- `Hardware Requirements <#hardware-requirements>`__
- `Do I have to use ECC memory for
ZFS? <#do-i-have-to-use-ecc-memory-for-zfs>`__
- `Installation <#installation>`__
- `Supported Architectures <#supported-architectures>`__
- `Supported Kernels <#supported-kernels>`__
- `32-bit vs 64-bit Systems <#32-bit-vs-64-bit-systems>`__
- `Booting from ZFS <#booting-from-zfs>`__
- `Selecting /dev/ names when creating a
pool <#selecting-dev-names-when-creating-a-pool>`__
- `Setting up the /etc/zfs/vdev_id.conf
file <#setting-up-the-etczfsvdev_idconf-file>`__
- `Changing /dev/ names on an existing
pool <#changing-dev-names-on-an-existing-pool>`__
- `The /etc/zfs/zpool.cache file <#the-etczfszpoolcache-file>`__
- `Generating a new /etc/zfs/zpool.cache
file <#generating-a-new-etczfszpoolcache-file>`__
- `Sending and Receiving Streams <#sending-and-receiving-streams>`__
- `hole_birth Bugs <#hole_birth-bugs>`__
- `Sending Large Blocks <#sending-large-blocks>`__
- `CEPH/ZFS <#cephzfs>`__
- `ZFS Configuration <#zfs-configuration>`__
- `CEPH Configuration (ceph.conf) <#ceph-configuration-cephconf>`__
- `Other General Guidelines <#other-general-guidelines>`__
- `Performance Considerations <#performance-considerations>`__
- `Advanced Format Disks <#advanced-format-disks>`__
- `ZVOL used space larger than
expected <#ZVOL-used-space-larger-than-expected>`__
- `Using a zvol for a swap device <#using-a-zvol-for-a-swap-device>`__
- `Using ZFS on Xen Hypervisor or Xen
Dom0 <#using-zfs-on-xen-hypervisor-or-xen-dom0>`__
- `udisks2 creates /dev/mapper/ entries for
zvol <#udisks2-creating-devmapper-entries-for-zvol>`__
- `Licensing <#licensing>`__
- `Reporting a problem <#reporting-a-problem>`__
- `Does ZFS on Linux have a Code of
Conduct? <#does-zfs-on-linux-have-a-code-of-conduct>`__
What is ZFS on Linux
--------------------
The ZFS on Linux project is an implementation of
`OpenZFS <http://open-zfs.org/wiki/Main_Page>`__ designed to work in a
Linux environment. OpenZFS is an outstanding storage platform that
encompasses the functionality of traditional filesystems, volume
managers, and more, with consistent reliability, functionality and
performance across all distributions. Additional information about
OpenZFS can be found in the `OpenZFS wikipedia
article <https://en.wikipedia.org/wiki/OpenZFS>`__.
Hardware Requirements
---------------------
Because ZFS was originally designed for Sun Solaris it was long
considered a filesystem for large servers and for companies that could
afford the best and most powerful hardware available. But since the
porting of ZFS to numerous OpenSource platforms (The BSDs, Illumos and
Linux - under the umbrella organization "OpenZFS"), these requirements
have been lowered.
The suggested hardware requirements are:
- ECC memory. This isn't really a requirement, but it's highly
recommended.
- 8GB+ of memory for the best performance. It's perfectly possible to
run with 2GB or less (and people do), but you'll need more if using
deduplication.
Do I have to use ECC memory for ZFS?
------------------------------------
Using ECC memory for OpenZFS is strongly recommended for enterprise
environments where the strongest data integrity guarantees are required.
Without ECC memory rare random bit flips caused by cosmic rays or by
faulty memory can go undetected. If this were to occur OpenZFS (or any
other filesystem) will write the damaged data to disk and be unable to
automatically detect the corruption.
Unfortunately, ECC memory is not always supported by consumer grade
hardware. And even when it is, ECC memory will be more expensive. For
home users the additional safety brought by ECC memory might not justify
the cost. It's up to you to determine what level of protection your data
requires.
Installation
------------
ZFS on Linux is available for all major Linux distributions. Refer to
the [[getting started]] section of the wiki for links to installation
instructions for many popular distributions. If your distribution isn't
listed you can always build ZFS on Linux from the latest official
`tarball <https://github.com/zfsonlinux/zfs/releases>`__.
Supported Architectures
-----------------------
ZFS on Linux is regularly compiled for the following architectures:
x86_64, x86, aarch64, arm, ppc64, ppc.
Supported Kernels
-----------------
The `notes <https://github.com/zfsonlinux/zfs/releases>`__ for a given
ZFS on Linux release will include a range of supported kernels. Point
releases will be tagged as needed in order to support the *stable*
kernel available from `kernel.org <https://www.kernel.org/>`__. The
oldest supported kernel is 2.6.32 due to its prominence in Enterprise
Linux distributions.
.. _32-bit-vs-64-bit-systems:
32-bit vs 64-bit Systems
------------------------
You are **strongly** encouraged to use a 64-bit kernel. ZFS on Linux
will build for 32-bit kernels but you may encounter stability problems.
ZFS was originally developed for the Solaris kernel which differs from
the Linux kernel in several significant ways. Perhaps most importantly
for ZFS it is common practice in the Solaris kernel to make heavy use of
the virtual address space. However, use of the virtual address space is
strongly discouraged in the Linux kernel. This is particularly true on
32-bit architectures where the virtual address space is limited to 100M
by default. Using the virtual address space on 64-bit Linux kernels is
also discouraged but the address space is so much larger than physical
memory it is less of an issue.
If you are bumping up against the virtual memory limit on a 32-bit
system you will see the following message in your system logs. You can
increase the virtual address size with the boot option ``vmalloc=512M``.
::
vmap allocation for size 4198400 failed: use vmalloc=<size> to increase size.
However, even after making this change your system will likely not be
entirely stable. Proper support for 32-bit systems is contingent upon
the OpenZFS code being weaned off its dependence on virtual memory. This
will take some time to do correctly but it is planned for OpenZFS. This
change is also expected to improve how efficiently OpenZFS manages the
ARC cache and allow for tighter integration with the standard Linux page
cache.
Booting from ZFS
----------------
Booting from ZFS on Linux is possible and many people do it. There are
excellent walkthroughs available for [[Debian]], [[Ubuntu]] and
`Gentoo <https://github.com/pendor/gentoo-zfs-install/tree/master/install>`__.
Selecting /dev/ names when creating a pool
------------------------------------------
There are different /dev/ names that can be used when creating a ZFS
pool. Each option has advantages and drawbacks, the right choice for
your ZFS pool really depends on your requirements. For development and
testing using /dev/sdX naming is quick and easy. A typical home server
might prefer /dev/disk/by-id/ naming for simplicity and readability.
While very large configurations with multiple controllers, enclosures,
and switches will likely prefer /dev/disk/by-vdev naming for maximum
control. But in the end, how you choose to identify your disks is up to
you.
- **/dev/sdX, /dev/hdX:** Best for development/test pools
- Summary: The top level /dev/ names are the default for consistency
with other ZFS implementations. They are available under all Linux
distributions and are commonly used. However, because they are not
persistent they should only be used with ZFS for development/test
pools.
- Benefits: This method is easy for a quick test, the names are
short, and they will be available on all Linux distributions.
- Drawbacks: The names are not persistent and will change depending
on the order in which the disks are detected. Adding or removing
hardware for your system can easily cause the names to change. You
would then need to remove the zpool.cache file and re-import the
pool using the new names.
- Example: ``zpool create tank sda sdb``
- **/dev/disk/by-id/:** Best for small pools (less than 10 disks)
- Summary: This directory contains disk identifiers with more human
readable names. The disk identifier usually consists of the
interface type, vendor name, model number, device serial number,
and partition number. This approach is more user friendly because
it simplifies identifying a specific disk.
- Benefits: Nice for small systems with a single disk controller.
Because the names are persistent and guaranteed not to change, it
doesn't matter how the disks are attached to the system. You can
take them all out, randomly mix them up on the desk, put them
back anywhere in the system and your pool will still be
automatically imported correctly.
- Drawbacks: Configuring redundancy groups based on physical
location becomes difficult and error prone.
- Example:
``zpool create tank scsi-SATA_Hitachi_HTS7220071201DP1D10DGG6HMRP``
- **/dev/disk/by-path/:** Good for large pools (greater than 10 disks)
- Summary: This approach is to use device names which include the
physical cable layout in the system, which means that a particular
disk is tied to a specific location. The name describes the PCI
bus number, as well as enclosure names and port numbers. This
allows the most control when configuring a large pool.
- Benefits: Encoding the storage topology in the name is not only
helpful for locating a disk in large installations. But it also
allows you to explicitly layout your redundancy groups over
multiple adapters or enclosures.
- Drawbacks: These names are long, cumbersome, and difficult for a
human to manage.
- Example:
``zpool create tank pci-0000:00:1f.2-scsi-0:0:0:0 pci-0000:00:1f.2-scsi-1:0:0:0``
- **/dev/disk/by-vdev/:** Best for large pools (greater than 10 disks)
- Summary: This approach provides administrative control over device
naming using the configuration file /etc/zfs/vdev_id.conf. Names
for disks in JBODs can be generated automatically to reflect their
physical location by enclosure IDs and slot numbers. The names can
also be manually assigned based on existing udev device links,
including those in /dev/disk/by-path or /dev/disk/by-id. This
allows you to pick your own unique meaningful names for the disks.
These names will be displayed by all the zfs utilities so they can
be used to clarify the administration of a large complex pool. See
the vdev_id and vdev_id.conf man pages for further details.
- Benefits: The main benefit of this approach is that it allows you
to choose meaningful human-readable names. Beyond that, the
benefits depend on the naming method employed. If the names are
derived from the physical path the benefits of /dev/disk/by-path
are realized. On the other hand, aliasing the names based on drive
identifiers or WWNs has the same benefits as using
/dev/disk/by-id.
- Drawbacks: This method relies on having a /etc/zfs/vdev_id.conf
file properly configured for your system. To configure this file
please refer to section `Setting up the /etc/zfs/vdev_id.conf
file <#setting-up-the-etczfsvdev_idconf-file>`__. As with
benefits, the drawbacks of /dev/disk/by-id or /dev/disk/by-path
may apply depending on the naming method employed.
- Example: ``zpool create tank mirror A1 B1 mirror A2 B2``
.. _setting-up-the-etczfsvdev_idconf-file:
Setting up the /etc/zfs/vdev_id.conf file
-----------------------------------------
In order to use /dev/disk/by-vdev/ naming, the ``/etc/zfs/vdev_id.conf``
file must be configured. The format of this file is described in the
vdev_id.conf man page. Several examples follow.
A non-multipath configuration with direct-attached SAS enclosures and an
arbitrary slot re-mapping.
::
multipath no
topology sas_direct
phys_per_port 4
# PCI_SLOT HBA PORT CHANNEL NAME
channel 85:00.0 1 A
channel 85:00.0 0 B
# Linux Mapped
# Slot Slot
slot 0 2
slot 1 6
slot 2 0
slot 3 3
slot 4 5
slot 5 7
slot 6 4
slot 7 1
A SAS-switch topology. Note that the channel keyword takes only two
arguments in this example.
::
topology sas_switch
# SWITCH PORT CHANNEL NAME
channel 1 A
channel 2 B
channel 3 C
channel 4 D
A multipath configuration. Note that channel names have multiple
definitions - one per physical path.
::
multipath yes
# PCI_SLOT HBA PORT CHANNEL NAME
channel 85:00.0 1 A
channel 85:00.0 0 B
channel 86:00.0 1 A
channel 86:00.0 0 B
A configuration using device link aliases.
::
# by-vdev
# name fully qualified or base name of device link
alias d1 /dev/disk/by-id/wwn-0x5000c5002de3b9ca
alias d2 wwn-0x5000c5002def789e
After defining the new disk names run ``udevadm trigger`` to prompt udev
to parse the configuration file. This will result in a new
/dev/disk/by-vdev directory which is populated with symlinks to /dev/sdX
names. Following the first example above, you could then create the new
pool of mirrors with the following command:
::
$ zpool create tank \
mirror A0 B0 mirror A1 B1 mirror A2 B2 mirror A3 B3 \
mirror A4 B4 mirror A5 B5 mirror A6 B6 mirror A7 B7
$ zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            A0      ONLINE       0     0     0
            B0      ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            A1      ONLINE       0     0     0
            B1      ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            A2      ONLINE       0     0     0
            B2      ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            A3      ONLINE       0     0     0
            B3      ONLINE       0     0     0
          mirror-4  ONLINE       0     0     0
            A4      ONLINE       0     0     0
            B4      ONLINE       0     0     0
          mirror-5  ONLINE       0     0     0
            A5      ONLINE       0     0     0
            B5      ONLINE       0     0     0
          mirror-6  ONLINE       0     0     0
            A6      ONLINE       0     0     0
            B6      ONLINE       0     0     0
          mirror-7  ONLINE       0     0     0
            A7      ONLINE       0     0     0
            B7      ONLINE       0     0     0

errors: No known data errors
Changing /dev/ names on an existing pool
----------------------------------------
Changing the /dev/ names on an existing pool can be done by simply
exporting the pool and re-importing it with the -d option to specify
which new names should be used. For example, to use the custom names in
/dev/disk/by-vdev:
::
$ zpool export tank
$ zpool import -d /dev/disk/by-vdev tank
.. _the-etczfszpoolcache-file:
The /etc/zfs/zpool.cache file
-----------------------------
Whenever a pool is imported on the system it will be added to the
``/etc/zfs/zpool.cache`` file. This file stores pool configuration
information, such as the device names and pool state. If this file
exists when running the ``zpool import`` command then it will be used to
determine the list of pools available for import. When a pool is not
listed in the cache file it will need to be detected and imported using
the ``zpool import -d /dev/disk/by-id`` command.
.. _generating-a-new-etczfszpoolcache-file:
Generating a new /etc/zfs/zpool.cache file
------------------------------------------
The ``/etc/zfs/zpool.cache`` file will be automatically updated when
your pool configuration is changed. However, if for some reason it
becomes stale you can force the generation of a new
``/etc/zfs/zpool.cache`` file by setting the cachefile property on the
pool.
::
$ zpool set cachefile=/etc/zfs/zpool.cache tank
Conversely the cache file can be disabled by setting ``cachefile=none``.
This is useful for failover configurations where the pool should always
be explicitly imported by the failover software.
::
$ zpool set cachefile=none tank
Sending and Receiving Streams
-----------------------------
hole_birth Bugs
~~~~~~~~~~~~~~~
The hole_birth feature has/had bugs, the result of which is that, if you
do a ``zfs send -i`` (or ``-R``, since it uses ``-i``) from an affected
dataset, the receiver *will not see any checksum or other errors, but
will not match the source*.
ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the
faulty metadata which causes this issue *on the sender side*.
For more details, see the [[hole_birth FAQ]].
Sending Large Blocks
~~~~~~~~~~~~~~~~~~~~
When sending incremental streams which contain large blocks (>128K) the
``--large-block`` flag must be specified. Inconsistent use of the flag
between incremental sends can result in files being incorrectly zeroed
when they are received. Raw encrypted send/recvs automatically imply the
``--large-block`` flag and are therefore unaffected.
For more details, see `issue
6224 <https://github.com/zfsonlinux/zfs/issues/6224>`__.
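As an illustration (the pool and dataset names here are hypothetical),
an incremental send of a large-block dataset would look like:

::

   zfs send --large-block -i tank/fs@snap1 tank/fs@snap2 | zfs receive backup/fs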
CEPH/ZFS
--------
There is a lot of tuning that can be done that's dependent on the
workload that is being put on CEPH/ZFS, as well as some general
guidelines. Some are as follows:
ZFS Configuration
~~~~~~~~~~~~~~~~~
The CEPH filestore back-end relies heavily on xattrs; for optimal
performance, all CEPH workloads will benefit from the following ZFS
dataset parameters:
- ``xattr=sa``
- ``dnodesize=auto``
Beyond that, typically rbd/cephfs focused workloads benefit from a
small recordsize (16K-128K), while objectstore/s3/rados focused
workloads benefit from a large recordsize (128K-1M).
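For example (the pool/dataset name is hypothetical), both properties
can be set with:

::

   zfs set xattr=sa dnodesize=auto tank/ceph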
.. _ceph-configuration-cephconf:
CEPH Configuration (ceph.conf)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additionally CEPH sets various values internally for handling xattrs
based on the underlying filesystem. As CEPH only officially
supports/detects XFS and BTRFS, for all other filesystems it falls back
to rather `limited "safe"
values <https://github.com/ceph/ceph/blob/4fe7e2a458a1521839bc390c2e3233dd809ec3ac/src/common/config_opts.h#L1125-L1148>`__.
On newer releases, the need for larger xattrs will prevent OSDs from
even starting.
The officially recommended workaround (`see
here <http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/#not-recommended>`__)
has some severe downsides, and more specifically is geared toward
filesystems with "limited" xattr support such as ext4.
ZFS does not internally limit xattr length, so we can treat it
similarly to how CEPH treats XFS. We can override 3 internal values to
match those used with XFS (`see
here <https://github.com/ceph/ceph/blob/9b317f7322848802b3aab9fec3def81dddd4a49b/src/os/filestore/FileStore.cc#L5714-L5737>`__
and
`here <https://github.com/ceph/ceph/blob/4fe7e2a458a1521839bc390c2e3233dd809ec3ac/src/common/config_opts.h#L1125-L1148>`__)
and allow it to be used without the severe limitations of the
"official" workaround.
::
[osd]
filestore_max_inline_xattrs = 10
filestore_max_inline_xattr_size = 65536
filestore_max_xattr_value_size = 65536
Other General Guidelines
~~~~~~~~~~~~~~~~~~~~~~~~
- Use a separate journal device. Do not collocate the CEPH journal on
  a ZFS dataset if at all possible; this will quickly lead to terrible
  fragmentation, not to mention terrible performance upfront even
  before fragmentation (the CEPH journal does a dsync for every write).
- Use a SLOG device, even with a separate CEPH journal device. For some
workloads, skipping SLOG and setting ``logbias=throughput`` may be
acceptable.
- Use a high-quality SLOG/CEPH journal device; a consumer-grade SSD or
  even NVMe device (Samsung 830, 840, 850, etc.) WILL NOT DO, for a
  variety of reasons. CEPH will kill them quickly, on top of their
  performance being quite low in this use. Generally recommended are
  [Intel DC S3610, S3700, S3710, P3600, P3700], or [Samsung SM853,
  SM863], or better.
- If using a high-quality SSD or NVMe device (as mentioned above), you
  CAN share SLOG and CEPH journal to good results on a single device. A
  ratio of 4 HDDs to 1 SSD (Intel DC S3710 200GB), with each SSD
  partitioned (remember to align!) into 4x10GB (for ZIL/SLOG) + 4x20GB
  (for CEPH journal) has been reported to work well.
Again - CEPH + ZFS will KILL a consumer-grade SSD VERY quickly. Even
ignoring the lack of power-loss protection and endurance ratings, you
will be very disappointed with the performance of a consumer-grade SSD
under such a workload.
Performance Considerations
--------------------------
To achieve good performance with your pool there are some easy best
practices you should follow. Additionally, it should be made clear that
the ZFS on Linux implementation has not yet been optimized for
performance. As the project matures we can expect performance to
improve.
- **Evenly balance your disk across controllers:** Often the limiting
factor for performance is not the disk but the controller. By
balancing your disks evenly across controllers you can often improve
throughput.
- **Create your pool using whole disks:** When running zpool create use
whole disk names. This will allow ZFS to automatically partition the
disk to ensure correct alignment. It will also improve
interoperability with other OpenZFS implementations which honor the
wholedisk property.
- **Have enough memory:** A minimum of 2GB of memory is recommended for
ZFS. Additional memory is strongly recommended when the compression
and deduplication features are enabled.
- **Improve performance by setting ashift=12:** You may be able to
improve performance for some workloads by setting ``ashift=12``. This
tuning can only be set when block devices are first added to a pool,
such as when the pool is first created or when a new vdev is added to
the pool. This tuning parameter can result in a decrease of capacity
for RAIDZ configurations.
Advanced Format Disks
---------------------
Advanced Format (AF) is a disk format which natively uses a 4,096 byte
sector size instead of the traditional 512 bytes. To maintain
compatibility with legacy systems many AF disks emulate a sector size
of 512 bytes. By default, ZFS will automatically detect the sector size
of the drive, but because AF disks report the emulated 512-byte sector
size this combination can result in poorly aligned disk accesses which
will greatly degrade the pool performance.
Therefore, the ability to set the ashift property has been added to the
zpool command. This allows users to explicitly assign the sector size
when devices are first added to a pool (typically at pool creation time
or adding a vdev to the pool). The ashift values range from 9 to 16 with
the default value 0 meaning that ZFS should auto-detect the sector size.
This value is actually a bit shift value, so an ashift value for 512
bytes is 9 (2^9 = 512) while the ashift value for 4,096 bytes is 12
(2^12 = 4,096).
To force the pool to use 4,096 byte sectors at pool creation time, you
may run:
::
$ zpool create -o ashift=12 tank mirror sda sdb
To force the pool to use 4,096 byte sectors when adding a vdev to a
pool, you may run:
::
$ zpool add -o ashift=12 tank mirror sdc sdd
ZVOL used space larger than expected
------------------------------------
| Depending on the filesystem used on the zvol (e.g. ext4) and the usage
(e.g. deletion and creation of many files) the ``used`` and
``referenced`` properties reported by the zvol may be larger than the
"actual" space that is being used as reported by the consumer.
| This can happen due to the way some filesystems work, in which they
prefer to allocate files in new untouched blocks rather than the
fragmented used blocks marked as free. This forces ZFS to reference
all blocks that the underlying filesystem has ever touched.
| This is in itself not much of a problem, as when the ``used`` property
reaches the configured ``volsize`` the underlying filesystem will
start reusing blocks. But the problem arises if it is desired to
snapshot the zvol, as the space referenced by the snapshots will
contain the unused blocks.
| This issue can be prevented by using the ``fstrim`` command to allow
  the kernel to inform ZFS which blocks are unused.
| Executing an ``fstrim`` command before a snapshot is taken will ensure
  a minimum snapshot size.
| Adding the ``discard`` option for the mounted zvol in ``/etc/fstab``
  effectively enables the Linux kernel to issue the trim commands
  continuously, without the need to execute fstrim on-demand.
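
A minimal sketch, assuming the zvol is formatted with ext4 and mounted
at /mnt/vol (all names are hypothetical):

::

   # Trim unused blocks, then take a snapshot with a minimal footprint.
   $ fstrim -v /mnt/vol
   $ zfs snapshot tank/vol@backup

   # Or mount with the discard option in /etc/fstab for continuous trimming.
   /dev/zvol/tank/vol  /mnt/vol  ext4  defaults,discard  0  0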
Using a zvol for a swap device
------------------------------
You may use a zvol as a swap device but you'll need to configure it
appropriately.
**CAUTION:** for now, swap on zvol may lead to deadlock; in this case,
please send your logs
`here <https://github.com/zfsonlinux/zfs/issues/7734>`__.
- Set the volume block size to match your systems page size. This
tuning prevents ZFS from having to perform read-modify-write operations
on a larger block while the system is already low on memory.
- Set the ``logbias=throughput`` and ``sync=always`` properties. Data
written to the volume will be flushed immediately to disk freeing up
memory as quickly as possible.
- Set ``primarycache=metadata`` to avoid keeping swap data in RAM via
the ARC.
- Disable automatic snapshots of the swap device.
::
$ zfs create -V 4G -b $(getconf PAGESIZE) \
-o logbias=throughput -o sync=always \
-o primarycache=metadata \
-o com.sun:auto-snapshot=false rpool/swap
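
The volume can then be formatted and enabled as swap; a short usage
sketch using the volume created above:

::

   $ mkswap -f /dev/zvol/rpool/swap
   $ swapon /dev/zvol/rpool/swap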
Using ZFS on Xen Hypervisor or Xen Dom0
---------------------------------------
It is usually recommended to keep virtual machine storage and hypervisor
pools quite separate, although a few people have managed to successfully
deploy and run ZFS on Linux using the same machine configured as Dom0.
There are a few caveats:
- Set a fair amount of memory in grub.conf, dedicated to Dom0.
- dom0_mem=16384M,max:16384M
- Allocate no more of 30-40% of Dom0's memory to ZFS in
``/etc/modprobe.d/zfs.conf``.
- options zfs zfs_arc_max=6442450944
- Disable Xen's auto-ballooning in ``/etc/xen/xl.conf`` (see the
  sketch below)
- Watch out for any Xen bugs, such as `this
one <https://github.com/zfsonlinux/zfs/issues/1067>`__ related to
ballooning
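
A minimal sketch of these settings, assuming a 16 GiB Dom0 and a roughly
6 GiB ARC cap (the exact values, and the bootloader configuration file
location, will vary by distribution):

::

   # grub.conf: pin Dom0 memory on the Xen command line
   dom0_mem=16384M,max:16384M

   # /etc/modprobe.d/zfs.conf: cap the ARC at 30-40% of Dom0's memory
   options zfs zfs_arc_max=6442450944

   # /etc/xen/xl.conf: disable auto-ballooning
   autoballoon="off"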
udisks2 creating /dev/mapper/ entries for zvol
----------------------------------------------
To prevent udisks2 from creating /dev/mapper entries that must be
manually removed or maintained during zvol remove / rename, create a
udev rule such as ``/etc/udev/rules.d/80-udisks2-ignore-zfs.rules`` with
the following contents:
::
ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_FS_TYPE}=="zfs_member", ENV{ID_PART_ENTRY_TYPE}=="6a898cc3-1dd2-11b2-99a6-080020736631", ENV{UDISKS_IGNORE}="1"
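
For the rule to take effect without a reboot you may need to reload the
udev rules; a short sketch (exact steps can vary by distribution):

::

   $ sudo udevadm control --reload-rules
   $ sudo udevadm trigger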
Licensing
---------
ZFS is licensed under the Common Development and Distribution License
(`CDDL <http://hub.opensolaris.org/bin/view/Main/opensolaris_license>`__),
and the Linux kernel is licensed under the GNU General Public License
Version 2 (`GPLv2 <http://www.gnu.org/licenses/gpl2.html>`__). While
both are free open source licenses, they are restrictive licenses. The
combination of them causes problems because it prevents using pieces of
code exclusively available under one license with pieces of code
exclusively available under the other in the same binary. In the case of
the kernel, this prevents us from distributing ZFS on Linux as part of
the kernel binary. However, there is nothing in either license that
prevents distributing it in the form of a binary module or in the form
of source code.
Additional reading and opinions:
- `Software Freedom Law
Center <https://www.softwarefreedom.org/resources/2016/linux-kernel-cddl.html>`__
- `Software Freedom
Conservancy <https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/>`__
- `Free Software
Foundation <https://www.fsf.org/licensing/zfs-and-linux>`__
- `Encouraging closed source
modules <http://www.networkworld.com/article/2301697/smb/encouraging-closed-source-modules-part-1--copyright-and-software.html>`__
Reporting a problem
-------------------
You can open a new issue and search existing issues using the public
`issue tracker <https://github.com/zfsonlinux/zfs/issues>`__. The issue
tracker is used to organize outstanding bug reports, feature requests,
and other development tasks. Anyone may post comments after signing up
for a github account.
Please make sure that what you're actually seeing is a bug and not a
support issue. If in doubt, please ask on the mailing list first, and if
you're then asked to file an issue, do so.
When opening a new issue include this information at the top of the
issue:
- What distribution you're using and the version.
- What spl/zfs packages you're using and the version.
- Describe the problem you're observing.
- Describe how to reproduce the problem.
- Include any warnings/errors/backtraces from the system logs.
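
For example, the top of an issue might look like this (all values are
illustrative):

::

   Distribution:  Fedora 31 (x86_64)
   Kernel:        5.4.17
   SPL/ZFS:       0.8.3 (zfs-dkms from the zfsonlinux.org repository)

   Problem:       ...
   To reproduce:  ...
   Logs:          ...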
When a new issue is opened it's not uncommon for a developer to request
additional information about the problem. In general, the more detail
you share about a problem the quicker a developer can resolve it. For
example, providing a simple test case is always exceptionally helpful.
Be prepared to work with the developer looking into your bug in order
to get it resolved. They may ask for information like:
- Your pool configuration as reported by ``zdb`` or ``zpool status``.
- Your hardware configuration, such as
- Number of CPUs.
- Amount of memory.
- Whether your system has ECC memory.
- Whether it is running under a VMM/Hypervisor.
- Kernel version.
- Values of the spl/zfs module parameters.
- Stack traces which may be logged to ``dmesg``.
Does ZFS on Linux have a Code of Conduct?
-----------------------------------------
Yes, the ZFS on Linux community has a code of conduct. See the `Code of
Conduct <http://open-zfs.org/wiki/Code_of_Conduct>`__ for details.

69
docs/Fedora.rst Normal file
View File

@@ -0,0 +1,69 @@
Only
`DKMS <https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support>`__
style packages can be provided for Fedora from the official
zfsonlinux.org repository. This is because Fedora is a fast moving
distribution which does not provide a stable kABI. These packages track
the official ZFS on Linux tags and are updated as new versions are
released. Packages are available for the following configurations:
| **Fedora Releases:** 30, 31, 32
| **Architectures:** x86_64
To simplify installation a zfs-release package is provided which
includes a zfs.repo configuration file and the ZFS on Linux public
signing key. All official ZFS on Linux packages are signed using this
key, and by default both yum and dnf will verify a package's signature
before allowing it to be installed. Users are strongly encouraged to
verify the authenticity of the ZFS on Linux public key using the
fingerprint listed here.
| **Location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
| **Fedora 30 Package:**
`http://download.zfsonlinux.org/fedora/zfs-release.fc30.noarch.rpm <http://download.zfsonlinux.org/fedora/zfs-release.fc30.noarch.rpm>`__
| **Fedora 31 Package:**
`http://download.zfsonlinux.org/fedora/zfs-release.fc31.noarch.rpm <http://download.zfsonlinux.org/fedora/zfs-release.fc31.noarch.rpm>`__
| **Fedora 32 Package:**
`http://download.zfsonlinux.org/fedora/zfs-release.fc32.noarch.rpm <http://download.zfsonlinux.org/fedora/zfs-release.fc32.noarch.rpm>`__
| **Download from:**
`pgp.mit.edu <http://pgp.mit.edu/pks/lookup?search=0xF14AB620&op=index&fingerprint=on>`__
| **Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
.. code:: sh
$ sudo dnf install http://download.zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm
$ gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
pub 2048R/F14AB620 2013-03-21 ZFS on Linux <zfs@zfsonlinux.org>
Key fingerprint = C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
sub 2048R/99685629 2013-03-21
The ZFS on Linux packages should be installed with ``dnf`` on Fedora.
Note that it is important to make sure that the matching *kernel-devel*
package is installed for the running kernel since DKMS requires it to
build ZFS.
.. code:: sh
$ sudo dnf install kernel-devel zfs
If the Fedora provided *zfs-fuse* package is already installed on the
system, then the ``dnf swap`` command should be used to replace the
existing fuse packages with the ZFS on Linux packages.
.. code:: sh
$ sudo dnf swap zfs-fuse zfs
Testing Repositories
--------------------
In addition to the primary *zfs* repository a *zfs-testing* repository
is available. This repository, which is disabled by default, contains
the latest version of ZFS on Linux which is under active development.
These packages are made available in order to get feedback from users
regarding the functionality and stability of upcoming releases. These
packages **should not** be used on production systems. Packages from the
testing repository can be installed as follows.
::
$ sudo dnf --enablerepo=zfs-testing install kernel-devel zfs

14
docs/Getting-Started.rst Normal file
View File

@@ -0,0 +1,14 @@
To get started with OpenZFS refer to the provided documentation for your
distribution. It will cover the recommended installation method and any
distribution specific information. First time OpenZFS users are
encouraged to check out Aaron Toponce's `excellent
documentation <https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/>`__.
| `ArchLinux <https://wiki.archlinux.org/index.php/ZFS>`__
| [[Debian]]
| [[Fedora]]
| `FreeBSD <https://zfsonfreebsd.github.io/ZoF/>`__
| `Gentoo <https://wiki.gentoo.org/wiki/ZFS>`__
| `openSUSE <https://software.opensuse.org/package/zfs>`__
| [[RHEL and CentOS]]
| [[Ubuntu]]

View File

@@ -0,0 +1,210 @@
Git and GitHub for beginners (ZoL edition)
==========================================
This is a very basic rundown of how to use Git and GitHub to make
changes.
Recommended reading: `ZFS on Linux
CONTRIBUTING.md <https://github.com/zfsonlinux/zfs/blob/master/.github/CONTRIBUTING.md>`__
First time setup
================
If you've never used Git before, you'll need a little setup to start
things off.
::
git config --global user.name "My Name"
git config --global user.email myemail@noreply.non
Cloning the initial repository
==============================
The easiest way to get started is to click the fork icon at the top of
the main repository page. From there you need to download a copy of the
forked repository to your computer:
::
git clone https://github.com/<your-account-name>/zfs.git
This sets the "origin" repository to your fork. This will come in handy
when creating pull requests. To make pulling from the "upstream"
repository as changes are made, it is very useful to establish the
upstream repository as another remote (man git-remote):
::
cd zfs
git remote add upstream https://github.com/zfsonlinux/zfs.git
Preparing and making changes
============================
In order to make changes it is recommended to make a branch; this lets
you work on several unrelated changes at once. It is also not
recommended to make changes to the master branch unless you own the
repository.
::
git checkout -b my-new-branch
From here you can make your changes and move on to the next step.
Recommended reading: `C Style and Coding Standards for
SunOS <https://www.cis.upenn.edu/~lee/06cse480/data/cstyle.ms.pdf>`__,
`ZFS on Linux Developer
Resources <https://github.com/zfsonlinux/zfs/wiki/Developer-Resources>`__,
`OpenZFS Developer
Resources <http://open-zfs.org/wiki/Developer_resources>`__
Testing your patches before pushing
===================================
Before committing and pushing, you may want to test your patches. There
are several tests you can run against your branch such as style
checking, and functional tests. All pull requests go through these tests
before being pushed to the main repository, however testing locally
takes the load off the build/test servers. This step is optional but
highly recommended; however, the test suite should be run on a virtual
machine or a host that currently does not use ZFS. You may need to
install ``shellcheck`` and ``flake8`` to run the ``checkstyle`` target
correctly.
::
sh autogen.sh
./configure
make checkstyle
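
To also run the functional test suite locally (again, preferably in a
virtual machine), the in-tree helper script can be used; a minimal
sketch, assuming a completed in-tree build:

::

   sudo ./scripts/zfs-tests.sh -v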
Recommended reading: `Building
ZFS <https://github.com/zfsonlinux/zfs/wiki/Building-ZFS>`__, `ZFS Test
Suite
README <https://github.com/zfsonlinux/zfs/blob/master/tests/README.md>`__
Committing your changes to be pushed
====================================
When you are done making changes to your branch there are a few more
steps before you can make a pull request.
::
git commit --all --signoff
This command opens an editor and commits all modified files from your
branch. Here you need to describe your change and add a few things:
::
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch my-new-branch
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: hello.c
#
The first thing we need to add is the commit message. This is what is
displayed on the git log, and should be a short description of the
change. By style guidelines, this has to be less than 72 characters in
length.
Underneath the commit message you can add a more descriptive text to
your commit. The lines in this section have to be less than 72
characters.
When you are done, the commit should look like this:
::
Add hello command
This is a test commit with a descriptive commit message.
This message can be more than one line as shown here.
Signed-off-by: My Name <myemail@noreply.non>
Closes #9998
Issue #9999
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch my-new-branch
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: hello.c
#
You can also reference issues and pull requests if you are filing a pull
request for an existing issue as shown above. Save and exit the editor
when you are done.
Pushing and creating the pull request
=====================================
Home stretch. You've made your change and made the commit. Now it's time
to push it.
::
git push --set-upstream origin my-new-branch
This should ask you for your github credentials and upload your changes
to your repository.
The last step is to go to either your repository or the upstream
repository on GitHub; you should see a button for making a new pull
request for your recently committed branch.
Correcting issues with your pull request
========================================
Sometimes things don't go as planned and you may need to update
your pull request with a correction to either your commit message, or
your changes. This can be accomplished by re-pushing your branch. If you
need to make code changes or ``git add`` a file, you can do those now,
along with the following:
::
git commit --amend
git push --force
This will return you to the commit editor screen, and push your changes
over top of the old ones. Note that this will restart any build/test
runs currently in progress, and excessive pushing can cause delays in
the processing of all pull requests.
Maintaining your repository
===========================
When you wish to make changes in the future you will want to have an
up-to-date copy of the upstream repository to make your changes on. Here
is how you keep updated:
::
git checkout master
git pull upstream master
git push origin master
This will make sure you are on the master branch of the repository, grab
the changes from upstream, then push them back to your repository.
Final words
===========
This is a very basic introduction to Git and GitHub, but should get you
on your way to contributing to many open source projects. Not all
projects have style requirements and some may have different processes
for getting changes committed, so please refer to their documentation to
see if you need to do anything different. One topic we have not touched
on is the ``git rebase`` command which is a little more advanced for
this wiki article.
Additional resources: `Github Help <https://help.github.com/>`__,
`Atlassian Git Tutorials <https://www.atlassian.com/git/tutorials>`__

View File

@@ -0,0 +1 @@
This page has moved to [[Debian Jessie Root on ZFS]].

19
docs/Home.rst Normal file
View File

@@ -0,0 +1,19 @@
.. raw:: html
<p align="center">[[/img/480px-Open-ZFS-Secondary-Logo-Colour-halfsize.png|alt=openzfs]]</p>
Welcome to the OpenZFS GitHub wiki. This wiki provides documentation for
users and developers working with (or contributing to) the OpenZFS
project. New users or system administrators should refer to the
documentation for their favorite platform to get started.
+----------------------+----------------------+----------------------+
| [[Getting Started]] | [[Project and | [[Developer |
| | Community]] | Resources]] |
+======================+======================+======================+
| How to get started | About the project | Technical |
| with OpenZFS on your | and how to | documentation |
| favorite platform | contribute | discussing the |
| | | OpenZFS |
| | | implementation |
+----------------------+----------------------+----------------------+

9
docs/License.rst Normal file
View File

@@ -0,0 +1,9 @@
|Creative Commons License|
Wiki content is licensed under a `Creative Commons
Attribution-ShareAlike
license <http://creativecommons.org/licenses/by-sa/3.0/>`__ unless
otherwise noted.
.. |Creative Commons License| image:: https://i.creativecommons.org/l/by-sa/3.0/88x31.png
:target: http://creativecommons.org/licenses/by-sa/3.0/

31
docs/Mailing-Lists.rst Normal file
View File

@@ -0,0 +1,31 @@
+----------------------------------------------------------+---------------------------+----------------------------------------------------------+
| List                                                     | Description               | List Archive                                             |
+==========================================================+===========================+==========================================================+
| `zfs-announce@list.zfsonlinux.org                        | A low-traffic list for    | `archive                                                 |
| <https://zfsonlinux.topicbox.com/groups/zfs-announce>`__ | announcements such as new | <https://zfsonlinux.topicbox.com/groups/zfs-announce>`__ |
|                                                          | releases                  |                                                          |
+----------------------------------------------------------+---------------------------+----------------------------------------------------------+
| `zfs-discuss@list.zfsonlinux.org                         | A user discussion list    | `archive                                                 |
| <https://zfsonlinux.topicbox.com/groups/zfs-discuss>`__  | for issues related to     | <https://zfsonlinux.topicbox.com/groups/zfs-discuss>`__  |
|                                                          | functionality and         |                                                          |
|                                                          | usability                 |                                                          |
+----------------------------------------------------------+---------------------------+----------------------------------------------------------+
| `zfs-devel@list.zfsonlinux.org                           | A development list for    | `archive                                                 |
| <https://zfsonlinux.topicbox.com/groups/zfs-devel>`__    | developers to discuss     | <https://zfsonlinux.topicbox.com/groups/zfs-devel>`__    |
|                                                          | technical issues          |                                                          |
+----------------------------------------------------------+---------------------------+----------------------------------------------------------+
| `developer@open-zfs.org                                  | A platform-independent    | `archive                                                 |
| <http://open-zfs.org/wiki/Mailing_list>`__               | mailing list for ZFS      | <https://openzfs.topicbox.com/groups/developer>`__       |
|                                                          | developers to review ZFS  |                                                          |
|                                                          | code and architecture     |                                                          |
|                                                          | changes from all          |                                                          |
|                                                          | platforms                 |                                                          |
+----------------------------------------------------------+---------------------------+----------------------------------------------------------+

315
docs/OpenZFS-Patches.rst Normal file
View File

@@ -0,0 +1,315 @@
The ZFS on Linux project is an adaptation of the upstream `OpenZFS
repository <https://github.com/openzfs/openzfs/>`__ designed to work in
a Linux environment. This upstream repository acts as a location where
new features, bug fixes, and performance improvements from all the
OpenZFS platforms can be integrated. Each platform is responsible for
tracking the OpenZFS repository and merging the relevant improvements
back into their release.
For the ZFS on Linux project this tracking is managed through an
`OpenZFS tracking <http://build.zfsonlinux.org/openzfs-tracking.html>`__
page. The page is updated regularly and shows a list of OpenZFS commits
and their status in regard to the ZFS on Linux master branch.
This page describes the process of applying outstanding OpenZFS commits
to ZFS on Linux and submitting those changes for inclusion. As a
developer this is a great way to familiarize yourself with ZFS on Linux
and to begin quickly making a valuable contribution to the project. The
following guide assumes you have a `github
account <https://help.github.com/articles/signing-up-for-a-new-github-account/>`__,
are familiar with git, and are used to developing in a Linux
environment.
Porting OpenZFS changes to ZFS on Linux
---------------------------------------
Setup the Environment
~~~~~~~~~~~~~~~~~~~~~
**Clone the source.** Start by making a local clone of the
`spl <https://github.com/zfsonlinux/spl>`__ and
`zfs <https://github.com/zfsonlinux/zfs>`__ repositories.
::
$ git clone -o zfsonlinux https://github.com/zfsonlinux/spl.git
$ git clone -o zfsonlinux https://github.com/zfsonlinux/zfs.git
**Add remote repositories.** Using the GitHub web interface
`fork <https://help.github.com/articles/fork-a-repo/>`__ the
`zfs <https://github.com/zfsonlinux/zfs>`__ repository in to your
personal GitHub account. Add your new zfs fork and the
`openzfs <https://github.com/openzfs/openzfs/>`__ repository as remotes
and then fetch both repositories. The OpenZFS repository is large and
the initial fetch may take some time over a slow connection.
::
$ cd zfs
$ git remote add <your-github-account> git@github.com:<your-github-account>/zfs.git
$ git remote add openzfs https://github.com/openzfs/openzfs.git
$ git fetch --all
**Build the source.** Compile the spl and zfs master branches. These
branches are always kept stable and this is a useful verification that
you have a full build environment installed and all the required
dependencies are available. This may also speed up the compile time
later for small patches where incremental builds are an option.
::
$ cd ../spl
$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
$
$ cd ../zfs
$ sh autogen.sh && ./configure --enable-debug && make -s -j$(nproc)
Pick a patch
~~~~~~~~~~~~
Consult the `OpenZFS
tracking <http://build.zfsonlinux.org/openzfs-tracking.html>`__ page and
select a patch which has not yet been applied. For your first patch you
will want to select a small patch to familiarize yourself with the
process.
Porting a Patch
~~~~~~~~~~~~~~~
There are 2 methods:
- `cherry-pick (easier) <#cherry-pick>`__
- `manual merge <#manual-merge>`__
Please read about `manual merge <#manual-merge>`__ first to learn the
whole process.
Cherry-pick
^^^^^^^^^^^
You can
`cherry-pick <https://git-scm.com/docs/git-cherry-pick>`__ on your own,
but we have made a special
`script <https://github.com/zfsonlinux/zfs-buildbot/blob/master/scripts/openzfs-merge.sh>`__
which tries to
`cherry-pick <https://git-scm.com/docs/git-cherry-pick>`__ the patch
automatically and generates the commit description.
0) Prepare environment:
Mandatory git settings (add to ``~/.gitconfig``):
::
[merge]
renameLimit = 999999
[user]
email = mail@yourmail.com
name = Your Name
Download the script:
::
wget https://raw.githubusercontent.com/zfsonlinux/zfs-buildbot/master/scripts/openzfs-merge.sh
1) Run:
::
./openzfs-merge.sh -d path_to_zfs_folder -c openzfs_commit_hash
This command will fetch all repositories, create a new branch
``autoport-ozXXXX`` (XXXX - OpenZFS issue number), try to cherry-pick,
compile and check cstyle on success.
If it succeeds without any merge conflicts, check out the
``autoport-ozXXXX`` branch; it will contain a ready-to-pull commit.
Congratulations, you can go to step 7!
Otherwise you should go to step 2.
2) Resolve all merge conflicts manually. Easy method - install
`Meld <http://meldmerge.org/>`__ or any other diff tool and run
``git mergetool``.
3) Check all compile and cstyle errors (See `Testing a
patch <#testing-a-patch>`__).
4) Commit your changes with any description.
5) Update commit description (last commit will be changed):
::
./openzfs-merge.sh -d path_to_zfs_folder -g openzfs_commit_hash
6) Add any porting notes (if you have modified something):
``git commit --amend``
7) Push your commit to github:
``git push <your-github-account> autoport-ozXXXX``
8) Create a pull request to ZoL master branch.
9) Go to `Testing a patch <#testing-a-patch>`__ section.
Manual merge
^^^^^^^^^^^^
**Create a new branch.** It is important to create a new branch for
every commit you port to ZFS on Linux. This will allow you to easily
submit your work as a GitHub pull request and it makes it possible to
work on multiple OpenZFS changes concurrently. All development branches
need to be based off of the ZFS master branch and it's helpful to name
the branches after the issue number you're working on.
::
$ git checkout -b openzfs-<issue-nr> master
**Generate a patch.** One of the first things you'll notice about the
ZFS on Linux repository is that it is laid out differently than the
OpenZFS repository. Organizationally it is much flatter; this is
possible because it only contains the code for OpenZFS, not an entire OS.
That means that in order to apply a patch from OpenZFS the path names in
the patch must be changed. A script called zfs2zol-patch.sed has been
provided to perform this translation. Use the ``git format-patch``
command and this script to generate a patch.
::
$ git format-patch --stdout <commit-hash>^..<commit-hash> | \
./scripts/zfs2zol-patch.sed >openzfs-<issue-nr>.diff
**Apply the patch.** In many cases the generated patch will apply
cleanly to the repository. However, it's important to keep in mind the
zfs2zol-patch.sed script only translates the paths. There are often
additional reasons why a patch might not apply. In some cases hunks of
the patch may not be applicable to Linux and should be dropped. In other
cases a patch may depend on other changes which must be applied first.
The changes may also conflict with Linux specific modifications. In all
of these cases the patch will need to be manually modified to apply
cleanly while preserving its original intent.
::
   $ git am ./openzfs-<issue-nr>.diff
**Update the commit message.** By using ``git format-patch`` to generate
the patch and then ``git am`` to apply it, the original comment and
authorship will be preserved. However, due to the formatting of the
OpenZFS commit you will likely find that the entire commit comment has
been squashed into the subject line. Use ``git commit --amend`` to
cleanup the comment and be careful to follow `these standard
guidelines <http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html>`__.
The summary line of an OpenZFS commit is often very long and you should
truncate it to 50 characters. This is useful because it preserves the
correct formatting of the ``git log --pretty=oneline`` command. Make sure to
leave a blank line between the summary and body of the commit. Then
include the full OpenZFS commit message wrapping any lines which exceed
72 characters. Finally, add a ``Ported-by`` tag with your contact
information and both a ``OpenZFS-issue`` and ``OpenZFS-commit`` tag with
appropriate links. You'll want to verify your commit contains all of the
following information:
- The subject line from the original OpenZFS patch in the form:
"OpenZFS <issue-nr> - short description".
- The original patch authorship should be preserved.
- The OpenZFS commit message.
- The following tags:
- **Authored by:** Original patch author
- **Reviewed by:** All OpenZFS reviewers from the original patch.
- **Approved by:** All OpenZFS reviewers from the original patch.
- **Ported-by:** Your name and email address.
- **OpenZFS-issue:** https://www.illumos.org/issues/issue
- **OpenZFS-commit:** https://github.com/openzfs/openzfs/commit/hash
- **Porting Notes:** An optional section describing any changes
required when porting.
For example, OpenZFS issue 6873 was `applied to
Linux <https://github.com/zfsonlinux/zfs/commit/b3744ae>`__ from this
upstream `OpenZFS
commit <https://github.com/openzfs/openzfs/commit/ee06391>`__.
::
OpenZFS 6873 - zfs_destroy_snaps_nvl leaks errlist
Authored by: Chris Williamson <chris.williamson@delphix.com>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Reviewed by: Paul Dagnelie <pcd@delphix.com>
Ported-by: Denys Rtveliashvili <denys@rtveliashvili.name>
lzc_destroy_snaps() returns an nvlist in errlist.
zfs_destroy_snaps_nvl() should nvlist_free() it before returning.
OpenZFS-issue: https://www.illumos.org/issues/6873
OpenZFS-commit: https://github.com/openzfs/openzfs/commit/ee06391
Testing a Patch
~~~~~~~~~~~~~~~
**Build the source.** Verify the patched source compiles without errors
and all warnings are resolved.
::
$ make -s -j$(nproc)
**Run the style checker.** Verify the patched source passes the style
checker, the command should return without printing any output.
::
$ make cstyle
**Open a Pull Request.** When your patch builds cleanly and passes the
style checks `open a new pull
request <https://help.github.com/articles/creating-a-pull-request/>`__.
The pull request will be queued for `automated
testing <https://github.com/zfsonlinux/zfs-buildbot/>`__. As part of the
testing the change is built for a wide range of Linux distributions and
a battery of functional and stress tests are run to detect regressions.
::
$ git push <your-github-account> openzfs-<issue-nr>
**Fix any issues.** Testing takes approximately 2 hours to fully
complete and the results are posted in the GitHub `pull
request <https://github.com/zfsonlinux/zfs/pull/4594>`__. All the tests
are expected to pass and you should investigate and resolve any test
failures. The `test
scripts <https://github.com/zfsonlinux/zfs-buildbot/tree/master/scripts>`__
are all available and designed to run locally in order to reproduce an
issue. Once you've resolved the issue, force update the pull request to
trigger a new round of testing. Iterate until all the tests are passing.
::
# Fix issue, amend commit, force update branch.
$ git commit --amend
$ git push --force <your-github-account> openzfs-<issue-nr>
Merging the Patch
~~~~~~~~~~~~~~~~~
**Review.** Lastly one of the ZFS on Linux maintainers will make a final
review of the patch and may request additional changes. Once the
maintainer is happy with the final version of the patch they will add
their signed-off-by, merge it to the master branch, mark it complete on
the tracking page, and thank you for your contribution to the project!
Porting ZFS on Linux changes to OpenZFS
---------------------------------------
Often an issue will first be fixed in ZFS on Linux, or a new feature
developed there. Changes which are not Linux specific should be submitted
upstream to the OpenZFS GitHub repository for review. The process for
this is described in the `OpenZFS
README <https://github.com/openzfs/openzfs/>`__.

View File

@@ -0,0 +1,2 @@
This page is obsolete, use
`http://build.zfsonlinux.org/openzfs-tracking.html <http://build.zfsonlinux.org/openzfs-tracking.html>`__

569
docs/OpenZFS-exceptions.rst Normal file
View File

@@ -0,0 +1,569 @@
Commit exceptions used to explicitly reference a given Linux commit.
These exceptions are useful for a variety of reasons.
**This page is used to generate the** `OpenZFS
Tracking <http://build.zfsonlinux.org/openzfs-tracking.html>`__ **page.**
Format:
^^^^^^^
- ``<openzfs issue>|-|<comment>`` - The OpenZFS commit isn't applicable
to Linux, or the OpenZFS -> ZFS on Linux commit matching is unable to
associate the related commits due to lack of information (denoted by
a -).
- ``<openzfs issue>|<commit>|<comment>`` - The fix was merged to Linux
prior to there being an OpenZFS issue.
- ``<openzfs issue>|!|<comment>`` - The commit is applicable but not
applied for the reason described in the comment.
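
Combining this format with rows from the table below, concrete entries
would look like:

::

   10500|03916905|
   10154|-|Not applicable to Linux
   8984|!|WIP to support NFSv4 ACLs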
+------------------+-------------------+-----------------------------+
| OpenZFS issue id | status/ZFS commit | comment |
+==================+===================+=============================+
| 10500 | 03916905 | |
+------------------+-------------------+-----------------------------+
| 10154 | - | Not applicable to Linux |
+------------------+-------------------+-----------------------------+
| 10067 | - | The only ZFS change was to |
| | | zfs remap, which was |
| | | removed on Linux. |
+------------------+-------------------+-----------------------------+
| 9884 | - | Not applicable to Linux |
+------------------+-------------------+-----------------------------+
| 9851 | - | Not applicable to Linux |
+------------------+-------------------+-----------------------------+
| 9683 | - | Not applicable to Linux due |
| | | to devids not being used |
+------------------+-------------------+-----------------------------+
| 9680 | - | Applied and rolled back in |
| | | OpenZFS, additional changes |
| | | needed. |
+------------------+-------------------+-----------------------------+
| 9672 | 29445fe3 | |
+------------------+-------------------+-----------------------------+
| 9626 | 59e6e7ca | |
+------------------+-------------------+-----------------------------+
| 9635 | - | Not applicable to Linux |
+------------------+-------------------+-----------------------------+
| 9623 | 22448f08 | |
+------------------+-------------------+-----------------------------+
| 9621 | 305bc4b3 | |
+------------------+-------------------+-----------------------------+
| 9539 | 5228cf01 | |
+------------------+-------------------+-----------------------------+
| 9512 | b4555c77 | |
+------------------+-------------------+-----------------------------+
| 9487 | 48fbb9dd | |
+------------------+-------------------+-----------------------------+
| 9466 | 272b5d73 | |
+------------------+-------------------+-----------------------------+
| 9433 | 0873bb63 | |
+------------------+-------------------+-----------------------------+
| 9421 | 64c1dcef | |
+------------------+-------------------+-----------------------------+
| 9237 | - | Introduced by 8567 which |
| | | was never applied to Linux |
+------------------+-------------------+-----------------------------+
| 9194             | -                 | Not applicable, the '-o     |
| | | ashift=value' option is |
| | | provided on Linux |
+------------------+-------------------+-----------------------------+
| 9077 | - | Not applicable to Linux |
+------------------+-------------------+-----------------------------+
| 9027 | 4a5d7f82 | |
+------------------+-------------------+-----------------------------+
| 9018 | 3ec34e55 | |
+------------------+-------------------+-----------------------------+
| 8984 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 8969 | - | Not applicable to Linux |
+------------------+-------------------+-----------------------------+
| 8942 | 650258d7 | |
+------------------+-------------------+-----------------------------+
| 8941 | 390d679a | |
+------------------+-------------------+-----------------------------+
| 8858 | - | Not applicable to Linux |
+------------------+-------------------+-----------------------------+
| 8856 | - | Not applicable to Linux due |
| | | to Encryption (b525630) |
+------------------+-------------------+-----------------------------+
| 8809 | ! | Adding libfakekernel needs |
| | | to be done by refactoring |
| | | existing code. |
+------------------+-------------------+-----------------------------+
| 8713 | 871e0732 | |
+------------------+-------------------+-----------------------------+
| 8661 | 1ce23dca | |
+------------------+-------------------+-----------------------------+
| 8648 | f763c3d1 | |
+------------------+-------------------+-----------------------------+
| 8602 | a032ac4 | |
+------------------+-------------------+-----------------------------+
| 8601 | d99a015 | Equivalent fix included in |
| | | initial commit |
+------------------+-------------------+-----------------------------+
| 8590 | 935e2c2 | |
+------------------+-------------------+-----------------------------+
| 8569 | - | This change isn't relevant |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 8567 | - | An alternate fix was |
| | | applied for Linux. |
+------------------+-------------------+-----------------------------+
| 8552 | 935e2c2 | |
+------------------+-------------------+-----------------------------+
| 8521 | ee6370a7 | |
+------------------+-------------------+-----------------------------+
| 8502 | ! | Apply when porting OpenZFS |
| | | 7955 |
+------------------+-------------------+-----------------------------+
| 8477 | 92e43c1 | |
+------------------+-------------------+-----------------------------+
| 8454 | - | An alternate fix was |
| | | applied for Linux. |
+------------------+-------------------+-----------------------------+
| 8408 | 5f1346c | |
+------------------+-------------------+-----------------------------+
| 8379 | - | This change isn't relevant |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 8376 | - | This change isn't relevant |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 8311 | ! | Need to assess |
| | | applicability to Linux. |
+------------------+-------------------+-----------------------------+
| 8304 | - | This change isn't relevant |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 8300 | 44f09cd | |
+------------------+-------------------+-----------------------------+
| 8265 | - | The large_dnode feature has |
| | | been implemented for Linux. |
+------------------+-------------------+-----------------------------+
| 8168 | 78d95ea | |
+------------------+-------------------+-----------------------------+
| 8138 | 44f09cd | The spelling fix to the zfs |
| | | man page came in with the |
| | | mdoc conversion. |
+------------------+-------------------+-----------------------------+
| 8108 | - | An equivalent Linux |
| | | specific fix was made. |
+------------------+-------------------+-----------------------------+
| 8064 | - | This change isn't relevant |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 8021 | 7657def | |
+------------------+-------------------+-----------------------------+
| 8022 | e55ebf6 | |
+------------------+-------------------+-----------------------------+
| 8013 | - | The change is illumos |
| | | specific and not applicable |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 7982 | - | The change is illumos |
| | | specific and not applicable |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 7970 | c30e58c | |
+------------------+-------------------+-----------------------------+
| 7956 | cda0317 | |
+------------------+-------------------+-----------------------------+
| 7955 | ! | Need to assess |
| | | applicability to Linux. If |
| | | porting, apply 8502. |
+------------------+-------------------+-----------------------------+
| 7869 | df7eecc | |
+------------------+-------------------+-----------------------------+
| 7816 | - | The change is illumos |
| | | specific and not applicable |
| | | for Linux. |
+------------------+-------------------+-----------------------------+
| 7803 | - | This functionality is |
| | | provided by |
| | | ``upda |
| | | te_vdev_config_dev_strs()`` |
| | | on Linux. |
+------------------+-------------------+-----------------------------+
| 7801 | 0eef1bd | Commit f25efb3 in |
| | | openzfs/master has a small |
| | | change for linting which is |
| | | being ported. |
+------------------+-------------------+-----------------------------+
| 7779 | - | The change isn't relevant, |
| | | ``zfs_ctldir.c`` was |
| | | rewritten for Linux. |
+------------------+-------------------+-----------------------------+
| 7740 | 32d41fb | |
+------------------+-------------------+-----------------------------+
| 7739 | 582cc014 | |
+------------------+-------------------+-----------------------------+
| 7730 | e24e62a | |
+------------------+-------------------+-----------------------------+
| 7710 | - | None of the illumos build |
| | | system is used under Linux. |
+------------------+-------------------+-----------------------------+
| 7602 | 44f09cd | |
+------------------+-------------------+-----------------------------+
| 7591 | 541a090 | |
+------------------+-------------------+-----------------------------+
| 7586 | c443487 | |
+------------------+-------------------+-----------------------------+
| 7570 | - | Due to differences in the |
| | | block layer all discards |
| | | are handled asynchronously |
| | | under Linux. This |
| | | functionality could be |
| | | ported but it's unclear to |
| | | what purpose. |
+------------------+-------------------+-----------------------------+
| 7542 | - | The Linux libshare code |
| | | differs significantly from |
| | | the upstream OpenZFS code. |
| | | Since this change doesn't |
| | | address a Linux specific |
| | | issue it doesn't need to be |
| | | ported. The eventual plan |
| | | is to retire all of the |
| | | existing libshare code and |
| | | use the ZED to more |
| | | flexibly control filesystem |
| | | sharing. |
+------------------+-------------------+-----------------------------+
| 7512 | - | None of the illumos build |
| | | system is used under Linux. |
+------------------+-------------------+-----------------------------+
| 7497             | -                 | DTrace isn't readily        |
| | | available under Linux. |
+------------------+-------------------+-----------------------------+
| 7446 | ! | Need to assess |
| | | applicability to Linux. |
+------------------+-------------------+-----------------------------+
| 7430 | 68cbd56 | |
+------------------+-------------------+-----------------------------+
| 7402 | 690fe64 | |
+------------------+-------------------+-----------------------------+
| 7345 | 058ac9b | |
+------------------+-------------------+-----------------------------+
| 7278 | - | Dynamic ARC tuning is |
| | | handled slightly |
| | | differently under Linux and |
| | | this case is covered by |
| | | arc_tuning_update() |
+------------------+-------------------+-----------------------------+
| 7238 | - | zvol_swap test already |
| | | disabled in ZoL |
+------------------+-------------------+-----------------------------+
| 7194 | d7958b4 | |
+------------------+-------------------+-----------------------------+
| 7164 | b1b85c87 | |
+------------------+-------------------+-----------------------------+
| 7041 | 33c0819 | |
+------------------+-------------------+-----------------------------+
| 7016 | d3c2ae1 | |
+------------------+-------------------+-----------------------------+
| 6914 | - | Under Linux the |
| | | arc_meta_limit can be tuned |
| | | with the |
| | | zfs_arc_meta_limit_percent |
| | | module option. |
+------------------+-------------------+-----------------------------+
| 6875 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 6843 | f5f087e | |
+------------------+-------------------+-----------------------------+
| 6841 | 4254acb | |
+------------------+-------------------+-----------------------------+
| 6781 | 15313c5 | |
+------------------+-------------------+-----------------------------+
| 6765 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 6764 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 6763 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 6762 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 6648 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6578 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6577 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6575 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6568 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6528 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6494 | - | The ``vdev_disk.c`` and |
| | | ``vdev_file.c`` files have |
| | | been reworked extensively |
| | | for Linux. The proposed |
| | | changes are not needed. |
+------------------+-------------------+-----------------------------+
| 6468 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6465 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6434 | 472e7c6 | |
+------------------+-------------------+-----------------------------+
| 6421 | ca0bf58 | |
+------------------+-------------------+-----------------------------+
| 6418 | 131cc95 | |
+------------------+-------------------+-----------------------------+
| 6391 | ee06391 | |
+------------------+-------------------+-----------------------------+
| 6390 | 85802aa | |
+------------------+-------------------+-----------------------------+
| 6388 | 0de7c55 | |
+------------------+-------------------+-----------------------------+
| 6386 | 485c581 | |
+------------------+-------------------+-----------------------------+
| 6385 | f3ad9cd | |
+------------------+-------------------+-----------------------------+
| 6369 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6368 | 2024041 | |
+------------------+-------------------+-----------------------------+
| 6346 | 058ac9b | |
+------------------+-------------------+-----------------------------+
| 6334 | 1a04bab | |
+------------------+-------------------+-----------------------------+
| 6290 | 017da6 | |
+------------------+-------------------+-----------------------------+
| 6250 | - | Linux handles crash dumps |
| | | in a fundamentally |
| | | different way than Illumos. |
| | | The proposed changes are |
| | | not needed. |
+------------------+-------------------+-----------------------------+
| 6249 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6248 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 6220 | - | The b_thawed debug code was |
| | | unused under Linux and |
| | | removed. |
+------------------+-------------------+-----------------------------+
| 6209 | - | The Linux user space mutex |
| | | implementation is based on |
|                  |                   | pthread primitives.         |
+------------------+-------------------+-----------------------------+
| 6095 | f866a4ea | |
+------------------+-------------------+-----------------------------+
| 6091 | c11f100 | |
+------------------+-------------------+-----------------------------+
| 5984 | 480f626 | |
+------------------+-------------------+-----------------------------+
| 5966 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 5961 | 22872ff | |
+------------------+-------------------+-----------------------------+
| 5882 | 83e9986 | |
+------------------+-------------------+-----------------------------+
| 5815 | - | This patch could be adapted |
|                  |                   | if needed to use equivalent |
| | | Linux functionality. |
+------------------+-------------------+-----------------------------+
| 5770 | c3275b5 | |
+------------------+-------------------+-----------------------------+
| 5769 | dd26aa5 | |
+------------------+-------------------+-----------------------------+
| 5768 | - | The change isn't relevant, |
| | | ``zfs_ctldir.c`` was |
| | | rewritten for Linux. |
+------------------+-------------------+-----------------------------+
| 5766 | 4dd1893 | |
+------------------+-------------------+-----------------------------+
| 5693 | 0f7d2a4 | |
+------------------+-------------------+-----------------------------+
| 5692 | ! | This functionality should |
| | | be ported in such a way |
| | | that it can be integrated |
| | | with ``filefrag(8)``. |
+------------------+-------------------+-----------------------------+
| 5684 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 5410 | 0bf8501 | |
+------------------+-------------------+-----------------------------+
| 5409 | b23d543 | |
+------------------+-------------------+-----------------------------+
| 5379 | - | This particular issue never |
| | | impacted Linux due to the |
| | | need for a modified |
| | | zfs_putpage() |
| | | implementation. |
+------------------+-------------------+-----------------------------+
| 5316 | - | The illumos idmap facility |
| | | isn't available under |
| | | Linux. This patch could |
| | | still be applied to |
| | | minimize code delta or all |
| | | HAVE_IDMAP chunks could be |
| | | removed on Linux for better |
| | | readability. |
+------------------+-------------------+-----------------------------+
| 5313 | ec8501e | |
+------------------+-------------------+-----------------------------+
| 5312 | ! | This change should be made |
| | | but the ideal time to do it |
| | | is when the spl repository |
| | | is folded in to the zfs |
| | | repository (planned for |
| | | 0.8). At this time we'll |
| | | want to cleanup many of the |
| | | includes. |
+------------------+-------------------+-----------------------------+
| 5219 | ef56b07 | |
+------------------+-------------------+-----------------------------+
| 5179 | 3f4058c | |
+------------------+-------------------+-----------------------------+
| 5149 | - | Equivalent Linux |
| | | functionality is provided |
| | | by the |
| | | ``zvol_max_discard_blocks`` |
| | | module option. |
+------------------+-------------------+-----------------------------+
| 5148 | - | Discards are handled |
| | | differently under Linux, |
| | | there is no DKIOCFREE |
| | | ioctl. |
+------------------+-------------------+-----------------------------+
| 5136 | e8b96c6 | |
+------------------+-------------------+-----------------------------+
| 4752 | aa9af22 | |
+------------------+-------------------+-----------------------------+
| 4745 | 411bf20 | |
+------------------+-------------------+-----------------------------+
| 4698 | 4fcc437 | |
+------------------+-------------------+-----------------------------+
| 4620 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 4573 | 10b7549 | |
+------------------+-------------------+-----------------------------+
| 4571 | 6e1b9d0 | |
+------------------+-------------------+-----------------------------+
| 4570 | b1d13a6 | |
+------------------+-------------------+-----------------------------+
| 4391 | 78e2739 | |
+------------------+-------------------+-----------------------------+
| 4465 | cda0317 | |
+------------------+-------------------+-----------------------------+
| 4263 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 4242             | -                 | Neither vnodes nor their    |
| | | associated events exist |
| | | under Linux. |
+------------------+-------------------+-----------------------------+
| 4206 | 2820bc4 | |
+------------------+-------------------+-----------------------------+
| 4188 | 2e7b765 | |
+------------------+-------------------+-----------------------------+
| 4181 | 44f09cd | |
+------------------+-------------------+-----------------------------+
| 4161             | -                 | The Linux user space        |
|                  |                   | reader/writer               |
|                  |                   | implementation is based on  |
|                  |                   | pthread primitives.         |
+------------------+-------------------+-----------------------------+
| 4128 | ! | The |
| | | ldi_ev_register_callbacks() |
| | | interface doesn't exist |
| | | under Linux. It may be |
| | | possible to receive similar |
| | | notifications via the scsi |
| | | error handlers or possibly |
| | | a different interface. |
+------------------+-------------------+-----------------------------+
| 4072 | - | None of the illumos build |
| | | system is used under Linux. |
+------------------+-------------------+-----------------------------+
| 3947 | 7f9d994 | |
+------------------+-------------------+-----------------------------+
| 3928             | -                 | Neither vnodes nor their    |
|                  |                   | associated events exist     |
|                  |                   | under Linux.                |
+------------------+-------------------+-----------------------------+
| 3871 | d1d7e268 | |
+------------------+-------------------+-----------------------------+
| 3747 | 090ff09 | |
+------------------+-------------------+-----------------------------+
| 3705 | - | The Linux implementation |
| | | uses the lz4 workspace kmem |
| | | cache to resolve the stack |
| | | issue. |
+------------------+-------------------+-----------------------------+
| 3606 | c5b247f | |
+------------------+-------------------+-----------------------------+
| 3580             | -                 | Linux provides generic      |
|                  |                   | ioctl handlers to get/set   |
|                  |                   | block device information.   |
+------------------+-------------------+-----------------------------+
| 3543 | 8dca0a9 | |
+------------------+-------------------+-----------------------------+
| 3512 | 67629d0 | |
+------------------+-------------------+-----------------------------+
| 3507 | 43a696e | |
+------------------+-------------------+-----------------------------+
| 3444 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 3371 | 44f09cd | |
+------------------+-------------------+-----------------------------+
| 3311 | 6bb24f4 | |
+------------------+-------------------+-----------------------------+
| 3301 | - | The Linux implementation of |
| | | ``vdev_disk.c`` does not |
| | | include this comment. |
+------------------+-------------------+-----------------------------+
| 3258 | 9d81146 | |
+------------------+-------------------+-----------------------------+
| 3254 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 3246 | cc92e9d | |
+------------------+-------------------+-----------------------------+
| 2933 | - | None of the illumos build |
| | | system is used under Linux. |
+------------------+-------------------+-----------------------------+
| 2897 | fb82700 | |
+------------------+-------------------+-----------------------------+
| 2665 | 32a9872 | |
+------------------+-------------------+-----------------------------+
| 2130 | 460a021 | |
+------------------+-------------------+-----------------------------+
| 1974 | - | This change was entirely |
| | | replaced in the ARC |
| | | restructuring. |
+------------------+-------------------+-----------------------------+
| 1898 | - | The zfs_putpage() function |
| | | was rewritten to properly |
| | | integrate with the Linux |
| | | VM. |
+------------------+-------------------+-----------------------------+
| 1700 | - | Not applicable to Linux, |
| | | the discard implementation |
| | | is entirely different. |
+------------------+-------------------+-----------------------------+
| 1618 | ca67b33 | |
+------------------+-------------------+-----------------------------+
| 1337 | 2402458 | |
+------------------+-------------------+-----------------------------+
| 1126 | e43b290 | |
+------------------+-------------------+-----------------------------+
| 763 | 3cee226 | |
+------------------+-------------------+-----------------------------+
| 742 | ! | WIP to support NFSv4 ACLs |
+------------------+-------------------+-----------------------------+
| 701 | 460a021 | |
+------------------+-------------------+-----------------------------+
| 348              | -                 | The Linux implementation of |
|                  |                   | ``vdev_disk.c`` handles     |
|                  |                   | this differently.           |
+------------------+-------------------+-----------------------------+
| 243 | - | Manual updates have been |
| | | made separately for Linux. |
+------------------+-------------------+-----------------------------+
| 184 | - | The zfs_putpage() function |
| | | was rewritten to properly |
| | | integrate with the Linux |
| | | VM. |
+------------------+-------------------+-----------------------------+

View File

@@ -0,0 +1,24 @@
OpenZFS is storage software which combines the functionality of
traditional filesystems, volume managers, and more. OpenZFS includes
protection against data corruption, support for high storage capacities,
efficient data compression, snapshots and copy-on-write clones,
continuous integrity checking and automatic repair, remote replication
with ZFS send and receive, and RAID-Z.
OpenZFS brings together developers from the illumos, Linux, FreeBSD and
OS X platforms, and a wide range of companies -- both online and at the
annual OpenZFS Developer Summit. High-level goals of the project include
raising awareness of the quality, utility and availability of
open-source implementations of ZFS, encouraging open communication about
ongoing efforts toward improving open-source variants of ZFS, and
ensuring consistent reliability, functionality and performance of all
distributions of ZFS.
| `Admin
Documentation <https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/>`__
| [[FAQ]]
| [[Mailing Lists]]
| `Releases <https://github.com/zfsonlinux/zfs/releases>`__
| `Issue Tracker <https://github.com/zfsonlinux/zfs/issues>`__
| `Roadmap <https://github.com/zfsonlinux/zfs/milestones>`__
| [[Signing Keys]]

166
docs/RHEL-and-CentOS.rst Normal file
View File

@@ -0,0 +1,166 @@
`kABI-tracking
kmod <http://elrepoproject.blogspot.com/2016/02/kabi-tracking-kmod-packages.html>`__
or
`DKMS <https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support>`__
style packages are provided for RHEL / CentOS based distributions from
the official zfsonlinux.org repository. These packages track the
official ZFS on Linux tags and are updated as new versions are released.
Packages are available for the following configurations:
| **EL Releases:** 6.x, 7.x, 8.x
| **Architectures:** x86_64
To simplify installation a zfs-release package is provided which
includes a zfs.repo configuration file and the ZFS on Linux public
signing key. All official ZFS on Linux packages are signed using this
key, and by default yum will verify a package's signature before
allowing it to be installed. Users are strongly encouraged to verify the
authenticity of the ZFS on Linux public key using the fingerprint listed
here.
| **Location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
| **EL6 Package:**
`http://download.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm <http://download.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm>`__
| **EL7.5 Package:**
`http://download.zfsonlinux.org/epel/zfs-release.el7_5.noarch.rpm <http://download.zfsonlinux.org/epel/zfs-release.el7_5.noarch.rpm>`__
| **EL7.6 Package:**
`http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm <http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm>`__
| **EL7.7 Package:**
`http://download.zfsonlinux.org/epel/zfs-release.el7_7.noarch.rpm <http://download.zfsonlinux.org/epel/zfs-release.el7_7.noarch.rpm>`__
| **EL7.8 Package:**
`http://download.zfsonlinux.org/epel/zfs-release.el7_8.noarch.rpm <http://download.zfsonlinux.org/epel/zfs-release.el7_8.noarch.rpm>`__
| **EL8.0 Package:**
`http://download.zfsonlinux.org/epel/zfs-release.el8_0.noarch.rpm <http://download.zfsonlinux.org/epel/zfs-release.el8_0.noarch.rpm>`__
| **EL8.1 Package:**
`http://download.zfsonlinux.org/epel/zfs-release.el8_1.noarch.rpm <http://download.zfsonlinux.org/epel/zfs-release.el8_1.noarch.rpm>`__
| **Note:** Starting with EL7.7 **zfs-0.8** will become the default;
EL7.6 and older will continue to track the **zfs-0.7** point releases.
| **Download from:**
`pgp.mit.edu <http://pgp.mit.edu/pks/lookup?search=0xF14AB620&op=index&fingerprint=on>`__
| **Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
::
$ sudo yum install http://download.zfsonlinux.org/epel/zfs-release.<dist>.noarch.rpm
$ gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
pub 2048R/F14AB620 2013-03-21 ZFS on Linux <zfs@zfsonlinux.org>
Key fingerprint = C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
sub 2048R/99685629 2013-03-21
After installing the zfs-release package and verifying the public key
users can opt to install either the kABI-tracking kmod or DKMS style
packages. For most users the kABI-tracking kmod packages are recommended
in order to avoid needing to rebuild ZFS for every kernel update. DKMS
packages are recommended for users running a non-distribution kernel or
for users who wish to apply local customizations to ZFS on Linux.
kABI-tracking kmod
------------------
By default the zfs-release package is configured to install DKMS style
packages so they will work with a wide range of kernels. In order to
install the kABI-tracking kmods the default repository in the
*/etc/yum.repos.d/zfs.repo* file must be switched from *zfs* to
*zfs-kmod*. Keep in mind that the kABI-tracking kmods are only verified
to work with the distribution provided kernel.
.. code:: diff
# /etc/yum.repos.d/zfs.repo
[zfs]
name=ZFS on Linux for EL 7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7/$basearch/
-enabled=1
+enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
@@ -9,7 +9,7 @@
[zfs-kmod]
name=ZFS on Linux for EL 7 - kmod
baseurl=http://download.zfsonlinux.org/epel/7/kmod/$basearch/
-enabled=0
+enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
The ZFS on Linux packages can now be installed using yum.
::
$ sudo yum install zfs
DKMS
----
To install DKMS style packages issue the following yum commands. First
add the `EPEL repository <https://fedoraproject.org/wiki/EPEL>`__ which
provides DKMS by installing the *epel-release* package, then the
*kernel-devel* and *zfs* packages. Note that it is important to make
sure that the matching *kernel-devel* package is installed for the
running kernel since DKMS requires it to build ZFS.
::
$ sudo yum install epel-release
$ sudo yum install "kernel-devel-uname-r == $(uname -r)" zfs
Important Notices
-----------------
.. _rhelcentos-7x-kmod-package-upgrade:
RHEL/CentOS 7.x kmod package upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When updating to a new RHEL/CentOS 7.x release the existing kmod
packages will not work due to upstream kABI changes in the kernel. After
upgrading to 7.x users must uninstall ZFS and then reinstall it as
described in the `kABI-tracking
kmod <https://github.com/zfsonlinux/zfs/wiki/RHEL-%26-CentOS/#kabi-tracking-kmod>`__
section. Compatible kmod packages will be installed from the matching
CentOS 7.x repository.
::
$ sudo yum remove zfs zfs-kmod spl spl-kmod libzfs2 libnvpair1 libuutil1 libzpool2 zfs-release
$ sudo yum install http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm
$ sudo yum autoremove
$ sudo yum clean metadata
$ sudo yum install zfs
Switching from DKMS to kABI-tracking kmod
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When switching from DKMS to kABI-tracking kmods first uninstall the
existing DKMS packages. This should remove the kernel modules for all
installed kernels but in practice it's not always perfectly reliable.
Therefore, it's recommended that you manually remove any remaining ZFS
kernel modules as shown. At this point the kABI-tracking kmods can be
installed as described in the section above.
::
$ sudo yum remove zfs zfs-kmod spl spl-kmod libzfs2 libnvpair1 libuutil1 libzpool2 zfs-release
$ sudo find /lib/modules/ \( -name "splat.ko" -or -name "zcommon.ko" \
-or -name "zpios.ko" -or -name "spl.ko" -or -name "zavl.ko" -or \
-name "zfs.ko" -or -name "znvpair.ko" -or -name "zunicode.ko" \) \
-exec /bin/rm {} \;
Testing Repositories
--------------------
In addition to the primary *zfs* repository a *zfs-testing* repository
is available. This repository, which is disabled by default, contains
the latest version of ZFS on Linux which is under active development.
These packages are made available in order to get feedback from users
regarding the functionality and stability of upcoming releases. These
packages **should not** be used on production systems. Packages from the
testing repository can be installed as follows.
::
$ sudo yum --enablerepo=zfs-testing install kernel-devel zfs

61
docs/Signing-Keys.rst Normal file
View File

@@ -0,0 +1,61 @@
All tagged ZFS on Linux
`releases <https://github.com/zfsonlinux/zfs/releases>`__ are signed by
the official maintainer for that branch. These signatures are
automatically verified by GitHub and can be checked locally by
downloading the maintainers public key.
Maintainers
-----------
Release branch (spl/zfs-*-release)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| **Maintainer:** `Ned Bass <https://github.com/nedbass>`__
| **Download:**
`pgp.mit.edu <http://pgp.mit.edu/pks/lookup?op=vindex&search=0xB97467AAC77B9667&fingerprint=on>`__
| **Key ID:** C77B9667
| **Fingerprint:** 29D5 610E AE29 41E3 55A2 FE8A B974 67AA C77B 9667
| **Maintainer:** `Tony Hutter <https://github.com/tonyhutter>`__
| **Download:**
`pgp.mit.edu <http://pgp.mit.edu/pks/lookup?op=vindex&search=0x6ad860eed4598027&fingerprint=on>`__
| **Key ID:** D4598027
| **Fingerprint:** 4F3B A9AB 6D1F 8D68 3DC2 DFB5 6AD8 60EE D459 8027
Master branch (master)
~~~~~~~~~~~~~~~~~~~~~~
| **Maintainer:** `Brian Behlendorf <https://github.com/behlendorf>`__
| **Download:**
`pgp.mit.edu <http://pgp.mit.edu/pks/lookup?op=vindex&search=0x0AB9E991C6AF658B&fingerprint=on>`__
| **Key ID:** C6AF658B
| **Fingerprint:** C33D F142 657E D1F7 C328 A296 0AB9 E991 C6AF 658B
Checking the Signature of a Git Tag
-----------------------------------
First import the public key listed above in to your key ring.
::
$ gpg --keyserver pgp.mit.edu --recv C6AF658B
gpg: requesting key C6AF658B from hkp server pgp.mit.edu
gpg: key C6AF658B: "Brian Behlendorf <behlendorf1@llnl.gov>" not changed
gpg: Total number processed: 1
gpg: unchanged: 1
After the public key is imported, the signature of a git tag can be
verified as shown.
::
$ git tag --verify zfs-0.6.5
object 7a27ad00ae142b38d4aef8cc0af7a72b4c0e44fe
type commit
tag zfs-0.6.5
tagger Brian Behlendorf <behlendorf1@llnl.gov> 1441996302 -0700
ZFS Version 0.6.5
gpg: Signature made Fri 11 Sep 2015 11:31:42 AM PDT using DSA key ID C6AF658B
gpg: Good signature from "Brian Behlendorf <behlendorf1@llnl.gov>"
gpg: aka "Brian Behlendorf (LLNL) <behlendorf1@llnl.gov>"

107
docs/Troubleshooting.rst Normal file
View File

@@ -0,0 +1,107 @@
DRAFT
=====
This page contains tips for troubleshooting ZFS on Linux and what info
developers might want for bug triage.
- `About Log Files <#about-log-files>`__
- `Generic Kernel Log <#generic-kernel-log>`__
- `ZFS Kernel Module Debug
Messages <#zfs-kernel-module-debug-messages>`__
- `Unkillable Process <#unkillable-process>`__
- `ZFS Events <#zfs-events>`__
--------------
About Log Files
---------------
Log files can be very useful for troubleshooting. In some cases,
interesting information is stored in multiple log files that are
correlated to system events.
Pro tip: logging infrastructure tools like *elasticsearch*, *fluentd*,
*influxdb*, or *splunk* can simplify log analysis and event correlation.
Generic Kernel Log
~~~~~~~~~~~~~~~~~~
Typically, Linux kernel log messages are available from ``dmesg -T``,
``/var/log/syslog``, or wherever kernel log messages are sent (e.g. by
``rsyslogd``).
ZFS Kernel Module Debug Messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ZFS kernel modules use an internal log buffer for detailed logging
information. This log information is available in the pseudo file
``/proc/spl/kstat/zfs/dbgmsg`` for ZFS builds where the ZFS module
parameter `zfs_dbgmsg_enable =
1 <https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters#zfs_dbgmsg_enable>`__
is set.
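For example, the parameter can be enabled at runtime and the buffer read
directly (a minimal sketch, assuming the ZFS module is loaded and you
have root access):
::
# echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
# cat /proc/spl/kstat/zfs/dbgmsg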
--------------
Unkillable Process
------------------
Symptom: a ``zfs`` or ``zpool`` command appears hung, does not return,
and is not killable
Likely cause: kernel thread hung or panic
Log files of interest: `Generic Kernel Log <#generic-kernel-log>`__,
`ZFS Kernel Module Debug Messages <#zfs-kernel-module-debug-messages>`__
Important information: if a kernel thread is stuck, then a backtrace of
the stuck thread can appear in the logs. In some cases, the stuck thread
is not logged until the deadman timer expires. See also the `debug
tunables <https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters#debug>`__.
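If no backtrace has been logged yet, the generic Linux magic SysRq
facility can force the kernel to dump the stacks of blocked (D-state)
tasks. This is a kernel feature, not ZFS-specific, and assumes SysRq is
enabled via the ``kernel.sysrq`` sysctl:
::
# echo w > /proc/sysrq-trigger
# dmesg -T | tail -n 50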
--------------
ZFS Events
----------
ZFS uses an event-based messaging interface for communication of
important events to other consumers running on the system. The ZFS Event
Daemon (zed) is a userland daemon that listens for these events and
processes them. zed is extensible so you can write shell scripts or
other programs that subscribe to events and take action. For example,
the script usually installed at ``/etc/zfs/zed.d/all-syslog.sh`` writes
a formatted event message to ``syslog.`` See the man page for ``zed(8)``
for more information.
A history of events is also available via the ``zpool events`` command.
This history begins at ZFS kernel module load and includes events from
any pool. These events are stored in RAM and limited in count to a value
determined by the kernel tunable
`zfs_zevent_len_max <https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters#zfs_zevent_len_max>`__.
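For example (both are standard ``zpool`` subcommands; the ``-c`` option
clears the in-memory event history):
::
$ sudo zpool events
$ sudo zpool events -c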
``zed`` has an internal throttling mechanism to prevent overconsumption
of system resources processing ZFS events.
More detailed information about events is observable using
``zpool events -v``. The contents of the verbose events are subject to
change, based on the event and information available at the time of the
event.
Each event has a class identifier used for filtering event types.
Commonly seen events are those related to pool management with class
``sysevent.fs.zfs.*`` including import, export, configuration updates,
and ``zpool history`` updates.
Events related to errors are reported as class ``ereport.*``. These can
be invaluable for troubleshooting. Some faults can cause multiple
ereports as various layers of the software deal with the fault. For
example, on a simple pool without parity protection, a faulty disk could
cause an ``ereport.io`` during a read from the disk that results in an
``ereport.fs.zfs.checksum`` at the pool level. These events are also
reflected by the error counters observed in ``zpool status``. If you see
checksum or read/write errors in ``zpool status``, then there should be
one or more corresponding ereports in the ``zpool events`` output.
.. _draft-1:
DRAFT
=====

View File

@@ -0,0 +1,921 @@
Newer release available
~~~~~~~~~~~~~~~~~~~~~~~
- See [[Ubuntu 18.04 Root on ZFS]] for new installs.
Caution
~~~~~~~
- This HOWTO uses a whole physical disk.
- Do not use these instructions for dual-booting.
- Backup your data. Any existing data will be lost.
System Requirements
~~~~~~~~~~~~~~~~~~~
- `64-bit Ubuntu 16.04.5 ("Xenial") Desktop
CD <http://releases.ubuntu.com/16.04/ubuntu-16.04.5-desktop-amd64.iso>`__
(*not* the server image)
- `A 64-bit kernel is strongly
encouraged. <https://github.com/zfsonlinux/zfs/wiki/FAQ#32-bit-vs-64-bit-systems>`__
- A drive which presents 512B logical sectors. Installing on a drive
which presents 4KiB logical sectors (a “4Kn” drive) should work with
UEFI partitioning, but this has not been tested.
Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of
memory is recommended for normal performance in basic workloads. If you
wish to use deduplication, you will need `massive amounts of
RAM <http://wiki.freebsd.org/ZFSTuningGuide#Deduplication>`__. Enabling
deduplication is a permanent change that cannot be easily reverted.
Support
-------
If you need help, reach out to the community using the `zfs-discuss
mailing list <https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists>`__
or IRC at #zfsonlinux on `freenode <https://freenode.net/>`__. If you
have a bug report or feature request related to this HOWTO, please `file
a new issue <https://github.com/zfsonlinux/zfs/issues/new>`__ and
mention @rlaager.
Encryption
----------
This guide supports the three different Ubuntu encryption options:
unencrypted, LUKS (full-disk encryption), and eCryptfs (home directory
encryption).
Unencrypted does not encrypt anything, of course. All ZFS features are
fully available. With no encryption happening, this option naturally has
the best performance.
LUKS encrypts almost everything: the OS, swap, home directories, and
anything else. The only unencrypted data is the bootloader, kernel, and
initrd. The system cannot boot without the passphrase being entered at
the console. All ZFS features are fully available. Performance is good,
but LUKS sits underneath ZFS, so if multiple disks (mirror or raidz
configurations) are used, the data has to be encrypted once per disk.
eCryptfs protects the contents of the specified home directories. This
guide also recommends encrypted swap when using eCryptfs. Other
operating system directories, which may contain sensitive data, logs,
and/or configuration information, are not encrypted. ZFS compression is
useless on the encrypted home directories. ZFS snapshots are not
automatically and transparently mounted when using eCryptfs, and
manually mounting them requires serious knowledge of eCryptfs
administrative commands. eCryptfs sits above ZFS, so the encryption only
happens once, regardless of the number of disks in the pool. The
performance of eCryptfs may be lower than LUKS in single-disk scenarios.
If you want encryption, LUKS is recommended.
Step 1: Prepare The Install Environment
---------------------------------------
1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to
the Internet as appropriate (e.g. join your WiFi network). Open a
terminal (press Ctrl-Alt-T).
1.2 Setup and update the repositories:
::
$ sudo apt-add-repository universe
$ sudo apt update
1.3 Optional: Start the OpenSSH server in the Live CD environment:
If you have a second system, using SSH to access the target system can
be convenient.
::
$ passwd
There is no current password; hit enter at that prompt.
$ sudo apt --yes install openssh-server
**Hint:** You can find your IP address with
``ip addr show scope global | grep inet``. Then, from your main machine,
connect with ``ssh ubuntu@IP``.
1.4 Become root:
::
$ sudo -i
1.5 Install ZFS in the Live CD environment:
::
# apt install --yes debootstrap gdisk zfs-initramfs
**Note:** You can ignore the two error lines about "AppStream". They are
harmless.
Step 2: Disk Formatting
-----------------------
2.1 If you are re-using a disk, clear it as necessary:
::
If the disk was previously used in an MD array, zero the superblock:
# apt install --yes mdadm
# mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1
Clear the partition table:
# sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1
2.2 Partition your disk:
::
Run this if you need legacy (BIOS) booting:
# sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-SATA_disk1
Run this for UEFI booting (for use now or in the future):
# sgdisk -n3:1M:+512M -t3:EF00 /dev/disk/by-id/scsi-SATA_disk1
Choose one of the following options:
2.2a Unencrypted or eCryptfs:
::
# sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-SATA_disk1
2.2b LUKS:
::
# sgdisk -n4:0:+512M -t4:8300 /dev/disk/by-id/scsi-SATA_disk1
# sgdisk -n1:0:0 -t1:8300 /dev/disk/by-id/scsi-SATA_disk1
Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the
``/dev/sd*`` device nodes directly can cause sporadic import failures,
especially on systems that have more than one storage pool.
**Hints:**
- ``ls -la /dev/disk/by-id`` will list the aliases.
- Are you doing this in a virtual machine? If your virtual disk is
missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using
KVM with virtio; otherwise, read the
`troubleshooting <https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS#troubleshooting>`__
section.
2.3 Create the root pool:
Choose one of the following options:
2.3a Unencrypted or eCryptfs:
::
# zpool create -o ashift=12 \
-O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
-O mountpoint=/ -R /mnt \
rpool /dev/disk/by-id/scsi-SATA_disk1-part1
2.3b LUKS:
::
# cryptsetup luksFormat -c aes-xts-plain64 -s 256 -h sha256 \
/dev/disk/by-id/scsi-SATA_disk1-part1
# cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part1 luks1
# zpool create -o ashift=12 \
-O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
-O mountpoint=/ -R /mnt \
rpool /dev/mapper/luks1
**Notes:**
- The use of ``ashift=12`` is recommended here because many drives
today have 4KiB (or larger) physical sectors, even though they
present 512B logical sectors. Also, a future replacement drive may
have 4KiB physical sectors (in which case ``ashift=12`` is desirable)
or 4KiB logical sectors (in which case ``ashift=12`` is required).
- Setting ``normalization=formD`` eliminates some corner cases relating
to UTF-8 filename normalization. It also implies ``utf8only=on``,
which means that only UTF-8 filenames are allowed. If you care to
support non-UTF-8 filenames, do not use this option. For a discussion
of why requiring UTF-8 filenames may be a bad idea, see `The problems
with enforced UTF-8 only
filenames <http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__.
- Make sure to include the ``-part1`` portion of the drive path. If you
forget that, you are specifying the whole disk, which ZFS will then
re-partition, and you will lose the bootloader partition(s).
- For LUKS, the key size chosen is 256 bits. However, XTS mode requires
two keys, so the LUKS key is split in half. Thus, ``-s 256`` means
AES-128, which is the LUKS and Ubuntu default.
- Your passphrase will likely be the weakest link. Choose wisely. See
`section 5 of the cryptsetup
FAQ <https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects>`__
for guidance.
**Hints:**
- The root pool does not have to be a single disk; it can have a mirror
or raidz topology. In that case, repeat the partitioning commands for
all the disks which will be part of the pool. Then, create the pool
using
``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part1 /dev/disk/by-id/scsi-SATA_disk2-part1``
(or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and
list the partitions from additional disks).
- The pool name is arbitrary. On systems that can automatically install
to ZFS, the root pool is named ``rpool`` by default. If you work with
multiple systems, it might be wise to use ``hostname``,
``hostname0``, or ``hostname-1`` instead.
Step 3: System Installation
---------------------------
3.1 Create a filesystem dataset to act as a container:
::
# zfs create -o canmount=off -o mountpoint=none rpool/ROOT
On Solaris systems, the root filesystem is cloned and the suffix is
incremented for major system changes through ``pkg image-update`` or
``beadm``. Similar functionality for APT is possible but currently
unimplemented. Even without such a tool, it can still be used for
manually created clones.
3.2 Create a filesystem dataset for the root filesystem of the Ubuntu
system:
::
# zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu
# zfs mount rpool/ROOT/ubuntu
With ZFS, it is not normally necessary to use a mount command (either
``mount`` or ``zfs mount``). This situation is an exception because of
``canmount=noauto``.
3.3 Create datasets:
::
# zfs create -o setuid=off rpool/home
# zfs create -o mountpoint=/root rpool/home/root
# zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
# zfs create -o com.sun:auto-snapshot=false rpool/var/cache
# zfs create rpool/var/log
# zfs create rpool/var/spool
# zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
If you use /srv on this system:
# zfs create rpool/srv
If this system will have games installed:
# zfs create rpool/var/games
If this system will store local email in /var/mail:
# zfs create rpool/var/mail
If this system will use NFS (locking):
# zfs create -o com.sun:auto-snapshot=false \
-o mountpoint=/var/lib/nfs rpool/var/nfs
The primary goal of this dataset layout is to separate the OS from user
data. This allows the root filesystem to be rolled back without rolling
back user data such as logs (in ``/var/log``). This will be especially
important if/when a ``beadm`` or similar utility is integrated. Since we
are creating multiple datasets anyway, it is trivial to add some
restrictions (for extra security) at the same time. The
``com.sun:auto-snapshot`` setting is used by some ZFS snapshot utilities
to exclude transient data.
3.4 For LUKS installs only:
::
# mke2fs -t ext2 /dev/disk/by-id/scsi-SATA_disk1-part4
# mkdir /mnt/boot
# mount /dev/disk/by-id/scsi-SATA_disk1-part4 /mnt/boot
3.5 Install the minimal system:
::
# chmod 1777 /mnt/var/tmp
# debootstrap xenial /mnt
# zfs set devices=off rpool
The ``debootstrap`` command leaves the new system in an unconfigured
state. An alternative to using ``debootstrap`` is to copy the entirety
of a working system into the new ZFS root.
Step 4: System Configuration
----------------------------
4.1 Configure the hostname (change ``HOSTNAME`` to the desired
hostname).
::
# echo HOSTNAME > /mnt/etc/hostname
# vi /mnt/etc/hosts
Add a line:
127.0.1.1 HOSTNAME
or if the system has a real name in DNS:
127.0.1.1 FQDN HOSTNAME
**Hint:** Use ``nano`` if you find ``vi`` confusing.
4.2 Configure the network interface:
::
Find the interface name:
# ip addr show
# vi /mnt/etc/network/interfaces.d/NAME
auto NAME
iface NAME inet dhcp
Customize this file if the system is not a DHCP client.
4.3 Configure the package sources:
::
# vi /mnt/etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu xenial main universe
deb-src http://archive.ubuntu.com/ubuntu xenial main universe
deb http://security.ubuntu.com/ubuntu xenial-security main universe
deb-src http://security.ubuntu.com/ubuntu xenial-security main universe
deb http://archive.ubuntu.com/ubuntu xenial-updates main universe
deb-src http://archive.ubuntu.com/ubuntu xenial-updates main universe
4.4 Bind the virtual filesystems from the LiveCD environment to the new
system and ``chroot`` into it:
::
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login
**Note:** This is using ``--rbind``, not ``--bind``.
4.5 Configure a basic system environment:
::
# locale-gen en_US.UTF-8
Even if you prefer a non-English system language, always ensure that
``en_US.UTF-8`` is available.
::
# echo LANG=en_US.UTF-8 > /etc/default/locale
# dpkg-reconfigure tzdata
# ln -s /proc/self/mounts /etc/mtab
# apt update
# apt install --yes ubuntu-minimal
If you prefer nano over vi, install it:
# apt install --yes nano
4.6 Install ZFS in the chroot environment for the new system:
::
# apt install --yes --no-install-recommends linux-image-generic
# apt install --yes zfs-initramfs
4.7 For LUKS installs only:
::
# echo UUID=$(blkid -s UUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part4) \
/boot ext2 defaults 0 2 >> /etc/fstab
# apt install --yes cryptsetup
# echo luks1 UUID=$(blkid -s UUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part1) none \
luks,discard,initramfs > /etc/crypttab
# vi /etc/udev/rules.d/99-local-crypt.rules
ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}"
ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}"
# ln -s /dev/mapper/luks1 /dev/luks1
**Notes:**
- The use of ``initramfs`` is a work-around for the fact that
`cryptsetup does not support
ZFS <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
- The 99-local-crypt.rules file and symlink in /dev are a work-around
for `grub-probe assuming all devices are in
/dev <https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1527727>`__.
4.8 Install GRUB
Choose one of the following options:
4.8a Install GRUB for legacy (MBR) booting
::
# apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s).
4.8b Install GRUB for UEFI booting
::
# apt install dosfstools
# mkdosfs -F 32 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part3
# mkdir /boot/efi
# echo PARTUUID=$(blkid -s PARTUUID -o value \
/dev/disk/by-id/scsi-SATA_disk1-part3) \
/boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
# mount /boot/efi
# apt install --yes grub-efi-amd64
4.9 Setup system groups:
::
# addgroup --system lpadmin
# addgroup --system sambashare
4.10 Set a root password
::
# passwd
4.11 Fix filesystem mount ordering
`Until ZFS gains a systemd mount
generator <https://github.com/zfsonlinux/zfs/issues/4898>`__, there are
races between mounting filesystems and starting certain daemons. In
practice, the issues (e.g.
`#5754 <https://github.com/zfsonlinux/zfs/issues/5754>`__) seem to be
with certain filesystems in ``/var``, specifically ``/var/log`` and
``/var/tmp``. Setting these to use ``legacy`` mounting, and listing them
in ``/etc/fstab`` makes systemd aware that these are separate
mountpoints. In turn, ``rsyslog.service`` depends on ``var-log.mount``
by way of ``local-fs.target`` and services using the ``PrivateTmp``
feature of systemd automatically use ``After=var-tmp.mount``.
::
# zfs set mountpoint=legacy rpool/var/log
# zfs set mountpoint=legacy rpool/var/tmp
# cat >> /etc/fstab << EOF
rpool/var/log /var/log zfs defaults 0 0
rpool/var/tmp /var/tmp zfs defaults 0 0
EOF
Step 5: GRUB Installation
-------------------------
5.1 Verify that the ZFS root filesystem is recognized:
::
# grub-probe /
zfs
**Note:** GRUB uses ``zpool status`` in order to determine the location
of devices. `grub-probe assumes all devices are in
/dev <https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1527727>`__.
The ``zfs-initramfs`` package `ships udev rules that create
symlinks <https://packages.ubuntu.com/xenial-updates/all/zfs-initramfs/filelist>`__
to `work around the
problem <https://bugs.launchpad.net/ubuntu/+source/zfs-initramfs/+bug/1530953>`__,
but `there have still been reports of
problems <https://github.com/zfsonlinux/grub/issues/5#issuecomment-249427634>`__.
If this happens, you will get an error saying
``grub-probe: error: failed to get canonical path`` and should run the
following:
::
# export ZPOOL_VDEV_NAME_PATH=YES
5.2 Refresh the initrd files:
::
# update-initramfs -c -k all
update-initramfs: Generating /boot/initrd.img-4.4.0-21-generic
**Note:** When using LUKS, this will print "WARNING could not determine
root device from /etc/fstab". This is because `cryptsetup does not
support
ZFS <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
5.3 Optional (but highly recommended): Make debugging GRUB easier:
::
# vi /etc/default/grub
Comment out: GRUB_HIDDEN_TIMEOUT=0
Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
Uncomment: GRUB_TERMINAL=console
Save and quit.
Later, once the system has rebooted twice and you are sure everything is
working, you can undo these changes, if desired.
5.4 Update the boot configuration:
::
# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-21-generic
Found initrd image: /boot/initrd.img-4.4.0-21-generic
done
5.5 Install the boot loader
5.5a For legacy (MBR) booting, install GRUB to the MBR:
::
# grub-install /dev/disk/by-id/scsi-SATA_disk1
Installing for i386-pc platform.
Installation finished. No error reported.
Do not reboot the computer until you get exactly that result message.
Note that you are installing GRUB to the whole disk, not a partition.
If you are creating a mirror, repeat the grub-install command for each
disk in the pool.
5.5b For UEFI booting, install GRUB:
::
# grub-install --target=x86_64-efi --efi-directory=/boot/efi \
--bootloader-id=ubuntu --recheck --no-floppy
5.6 Verify that the ZFS module is installed:
::
# ls /boot/grub/*/zfs.mod
Step 6: First Boot
------------------
6.1 Snapshot the initial installation:
::
# zfs snapshot rpool/ROOT/ubuntu@install
In the future, you will likely want to take snapshots before each
upgrade, and remove old snapshots (including this one) at some point to
save space.
6.2 Exit from the ``chroot`` environment back to the LiveCD environment:
::
# exit
6.3 Run these commands in the LiveCD environment to unmount all
filesystems:
::
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
# zpool export rpool
6.4 Reboot:
::
# reboot
6.5 Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:
Choose one of the following options:
6.6a Unencrypted or LUKS:
::
# zfs create rpool/home/YOURUSERNAME
# adduser YOURUSERNAME
# cp -a /etc/skel/.[!.]* /home/YOURUSERNAME
# chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.6b eCryptfs:
::
# apt install ecryptfs-utils
# zfs create -o compression=off -o mountpoint=/home/.ecryptfs/YOURUSERNAME \
rpool/home/temp-YOURUSERNAME
# adduser --encrypt-home YOURUSERNAME
# zfs rename rpool/home/temp-YOURUSERNAME rpool/home/YOURUSERNAME
The temporary name for the dataset is required to work around `a bug in
ecryptfs-setup-private <https://bugs.launchpad.net/ubuntu/+source/ecryptfs-utils/+bug/1574174>`__.
Otherwise, it will fail with an error saying the home directory is
already mounted; that check is not specific enough in the pattern it
uses.
**Note:** Automatically mounted snapshots (i.e. the ``.zfs/snapshot``
directory) will not work through eCryptfs. You can do another eCryptfs
mount manually if you need to access files in a snapshot. A script to
automate the mounting should be possible, but has not yet been
implemented.
6.7 Add your user account to the default set of groups for an
administrator:
::
# usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME
6.8 Mirror GRUB
If you installed to multiple disks, install GRUB on the additional
disks:
6.8a For legacy (MBR) booting:
::
# dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b UEFI
::
# umount /boot/efi
For the second and subsequent disks (increment ubuntu-2 to -3, etc.):
# dd if=/dev/disk/by-id/scsi-SATA_disk1-part3 \
of=/dev/disk/by-id/scsi-SATA_disk2-part3
# efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
-p 3 -L "ubuntu-2" -l '\EFI\Ubuntu\grubx64.efi'
# mount /boot/efi
Step 7: Configure Swap
----------------------
7.1 Create a volume dataset (zvol) for use as a swap device:
::
# zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false rpool/swap
You can adjust the size (the ``4G`` part) to your needs.
The compression algorithm is set to ``zle`` because it is the cheapest
available algorithm. As this guide recommends ``ashift=12`` (4 kiB
blocks on disk), the common case of a 4 kiB page size means that no
compression algorithm can reduce I/O. The exception is all-zero pages,
which are dropped by ZFS; but some form of compression has to be enabled
to get this behavior.
7.2 Configure the swap device:
Choose one of the following options:
7.2a Unencrypted or LUKS:
**Caution**: Always use long ``/dev/zvol`` aliases in configuration
files. Never use a short ``/dev/zdX`` device name.
::
# mkswap -f /dev/zvol/rpool/swap
# echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab
7.2b eCryptfs:
::
# apt install cryptsetup
# echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom \
swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab
# systemctl daemon-reload
# systemctl start systemd-cryptsetup@cryptswap1.service
# echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab
7.3 Enable the swap device:
::
# swapon -av
Step 8: Full Software Installation
----------------------------------
8.1 Upgrade the minimal system:
::
# apt dist-upgrade --yes
8.2 Install a regular set of software:
Choose one of the following options:
8.2a Install a command-line environment only:
::
# apt install --yes ubuntu-standard
8.2b Install a full GUI environment:
::
# apt install --yes ubuntu-desktop
**Hint**: If you are installing a full GUI environment, you will likely
want to manage your network with NetworkManager. In that case,
``rm /etc/network/interfaces.d/eth0``.
8.3 Optional: Disable log compression:
As ``/var/log`` is already compressed by ZFS, logrotate's compression is
going to burn CPU and disk I/O for (in most cases) very little gain.
Also, if you are making snapshots of ``/var/log``, logrotate's
compression will actually waste space, as the uncompressed data will
live on in the snapshot. You can edit the files in ``/etc/logrotate.d``
by hand to comment out ``compress``, or use this loop (copy-and-paste
highly recommended):
::
# for file in /etc/logrotate.d/* ; do
if grep -Eq "(^|[^#y])compress" "$file" ; then
sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
fi
done
8.4 Reboot:
::
# reboot
Step 9: Final Cleanup
~~~~~~~~~~~~~~~~~~~~~
9.1 Wait for the system to boot normally. Login using the account you
created. Ensure the system (including networking) works normally.
9.2 Optional: Delete the snapshot of the initial installation:
::
$ sudo zfs destroy rpool/ROOT/ubuntu@install
9.3 Optional: Disable the root password
::
$ sudo usermod -p '*' root
9.4 Optional:
If you prefer the graphical boot process, you can re-enable it now. If
you are using LUKS, it makes the prompt look nicer.
::
$ sudo vi /etc/default/grub
Uncomment GRUB_HIDDEN_TIMEOUT=0
Add quiet and splash to GRUB_CMDLINE_LINUX_DEFAULT
Comment out GRUB_TERMINAL=console
Save and quit.
$ sudo update-grub
Troubleshooting
---------------
Rescuing using a Live CD
~~~~~~~~~~~~~~~~~~~~~~~~
Boot the Live CD and open a terminal.
Become root and install the ZFS utilities:
::
$ sudo -i
# apt update
# apt install --yes zfsutils-linux
This will automatically import your pool. Export it and re-import it to
get the mounts right:
::
# zpool export -a
# zpool import -N -R /mnt rpool
# zfs mount rpool/ROOT/ubuntu
# zfs mount -a
If needed, you can chroot into your installed environment:
::
# mount --rbind /dev /mnt/dev
# mount --rbind /proc /mnt/proc
# mount --rbind /sys /mnt/sys
# chroot /mnt /bin/bash --login
Do whatever you need to do to fix your system.
When done, cleanup:
::
# mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
# zpool export rpool
# reboot
MPT2SAS
~~~~~~~
Most problem reports for this tutorial involve ``mpt2sas`` hardware that
does slow asynchronous drive initialization, like some IBM M1015 or
OEM-branded cards that have been flashed to the reference LSI firmware.
The basic problem is that disks on these controllers are not visible to
the Linux kernel until after the regular system is started, and ZoL does
not hotplug pool members. See
`https://github.com/zfsonlinux/zfs/issues/330 <https://github.com/zfsonlinux/zfs/issues/330>`__.
Most LSI cards are perfectly compatible with ZoL. If your card has this
glitch, try setting rootdelay=X in GRUB_CMDLINE_LINUX. The system will
wait up to X seconds for all drives to appear before importing the pool.
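For example (the 60-second value shown here is arbitrary; choose one
appropriate for your hardware):
::
# vi /etc/default/grub
Add rootdelay=60 to GRUB_CMDLINE_LINUX:
GRUB_CMDLINE_LINUX="rootdelay=60"
Save and quit.
# update-grub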
Areca
~~~~~
Systems that require the ``arcsas`` blob driver should add it to the
``/etc/initramfs-tools/modules`` file and run
``update-initramfs -c -k all``.
Upgrade or downgrade the Areca driver if something like
``RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20``
appears anywhere in the kernel log. ZoL is unstable on systems that emit
this error message.
VMware
~~~~~~
- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere
configuration. Doing this ensures that ``/dev/disk`` aliases are
created in the guest.
QEMU/KVM/XEN
~~~~~~~~~~~~
Set a unique serial number on each virtual disk using libvirt or qemu
(e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``).
To be able to use UEFI in guests (instead of only BIOS booting), run
this on the host:
::
$ sudo apt install ovmf
$ sudo vi /etc/libvirt/qemu.conf
Uncomment these lines:
nvram = [
"/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd",
"/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd"
]
$ sudo service libvirt-bin restart

File diff suppressed because it is too large

10
docs/Ubuntu.rst Normal file
View File

@@ -0,0 +1,10 @@
ZFS packages are `provided by the
distribution <https://wiki.ubuntu.com/Kernel/Reference/ZFS>`__.
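For a conventional (non-root) setup, installation is typically a single
command; ``zfsutils-linux`` is the package name used by Ubuntu 16.04 and
later:
::
$ sudo apt install zfsutils-linux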
If you want to use ZFS as your root filesystem, see these instructions:
- [[Ubuntu 18.04 Root on ZFS]]
For troubleshooting existing installations, see:
- 16.04: [[Ubuntu 16.04 Root on ZFS]]

View File

@@ -0,0 +1,11 @@
Accept a PR
===========
After a PR is generated, it is available to be commented upon by project
members. They may request additional changes; please work with them.
In addition, project members may accept PRs; this is not an automatic
process. By convention, PRs aren't accepted for at least a day, to allow
all members a chance to comment.
After a PR has been accepted, it is available to be merged.

View File

@@ -0,0 +1,2 @@
Close a PR
==========

View File

@@ -0,0 +1,24 @@
Commit Often
============
When writing complex code, it is strongly suggested that developers save
their changes, and commit those changes to their local repository, on a
frequent basis. In general, this means every hour or two, or when a
specific milestone is hit in the development. This allows you to easily
*checkpoint* your work.
Details of this process can be found in the `Commit the
changes <https://github.com/zfsonlinux/zfs/wiki/Workflow-Commit>`__
page.
In addition, it is suggested that the changes be pushed to your forked
Github repository with the ``git push`` command at least every day, as a
backup. Changes should also be pushed prior to running a test, in case
your system crashes. This project works with kernel software. A crash
while testing development software could easily cause loss of data.
For developers who want to keep their development branches clean, it
might be useful to
`squash <https://github.com/zfsonlinux/zfs/wiki/Workflow-Squash>`__
commits from time to time, even before you're ready to `create a
PR <https://github.com/zfsonlinux/zfs/wiki/Workflow-Create-PR>`__.

76
docs/Workflow-Commit.rst Normal file
View File

@@ -0,0 +1,76 @@
Commit the Changes
==================
In order for your changes to be merged into the ZFS on Linux project,
you must first send the changes made in your *topic* branch to your
*local* repository. This can be done with the ``git commit -sa`` command. If
there are any new files, they will be reported as *untracked*, and they
will not be created in the *local* repository. To add newly created
files to the *local* repository, use the ``git add (file-name) ...``
command.
The ``-s`` option adds a *signed off by* line to the commit. This
*signed off by* line is required for the ZFS on Linux project. It
performs the following functions:
- It is an acceptance of the `License
Terms <https://github.com/zfsonlinux/zfs/blob/master/COPYRIGHT>`__ of
the project.
- It is the developer's certification that they have the right to
submit the patch for inclusion into the code base.
- It indicates agreement to the `Developer's Certificate of
Origin <https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin>`__.
The ``-a`` option causes all modified files in the current branch to be
*staged* prior to performing the commit. A list of the modified files in
the *local* branch can be created by the use of the ``git status``
command. If there are files that have been modified that shouldn't be
part of the commit, they can either be rolled back in the current
branch, or the files can be manually staged with the
``git add (file-name) ...`` command, and the ``git commit -s`` command
can be run without the ``-a`` option.
When you run the ``git commit`` command, an editor will appear to allow
you to enter the commit messages. The following requirements apply to a
commit message:
- The first line is a title for the commit, and must be no longer than
50 characters.
- The second line should be blank, separating the title of the commit
message from the body of the commit message.
- There may be one or more lines in the commit message describing the
reason for the changes (the body of the commit message). These lines
must be no longer than 72 characters, and may contain blank lines.
- If the commit closes an Issue, there should be a line in the body
with the string ``Closes``, followed by the issue number. If
multiple issues are closed, multiple lines should be used.
- After the body of the commit message, there should be a blank line.
This separates the body from the *signed off by* line.
- The *signed off by* line should have been created by the
``git commit -s`` command. If not, the line has the following format:
- The string "Signed-off-by:"
- The name of the developer. Please do not use pseudonyms or
make anonymous contributions.
- The email address of the developer, enclosed by angle brackets
("<>").
- An example of this is
``Signed-off-by: Random Developer <random@developer.example.org>``
- If the commit changes only documentation, the line
``Requires-builders: style`` may be included in the body. This will
cause only the *style* testing to be run. This can save a significant
amount of time when Github runs the automated testing. For
information on other testing options, please see the `Buildbot
options <https://github.com/zfsonlinux/zfs/wiki/Buildbot-Options>`__
page.
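Putting these requirements together, a complete commit message might
look like the following (a hypothetical example; the title, body text,
and issue number are placeholders):
::
Fix mount ordering race for legacy datasets

Ensure that legacy mountpoints listed in /etc/fstab are mounted
before dependent services are started.

Closes #9999
Signed-off-by: Random Developer <random@developer.example.org>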
For more information about writing commit messages, please visit `How to
Write a Git Commit
Message <https://chris.beams.io/posts/git-commit/>`__.
After the changes have been committed to your *local* repository, they
should be pushed to your *forked* repository. This is done with the
``git push`` command.

View File

@@ -0,0 +1,2 @@
Fix Conflicts
=============

View File

@@ -0,0 +1,32 @@
Create a Branch
===============
With small projects, it's possible to develop code as commits directly
on the *master* branch. In the ZFS-on-Linux project, that sort of
development would create havoc and make it difficult to open a PR or
rebase the code. For this reason, development in the ZFS-on-Linux
project is done on *topic* branches.
The following commands will perform the required functions:
::
$ cd zfs
$ git fetch upstream master
$ git checkout master
$ git merge upstream/master
$ git branch (topic-branch-name)
$ git checkout (topic-branch-name)
1. Navigate to your *local* repository.
2. Fetch the updates from the *upstream* repository.
3. Set the current branch to *master*.
4. Merge the fetched updates into the *local* repository.
5. Create a new *topic* branch on the updated *master* branch. The name
of the branch should be either the name of the feature (preferred for
development of features) or an indication of the issue being worked
on (preferred for bug fixes).
6. Set the current branch to the newly created *topic* branch.
**Pro Tip**: The ``git checkout -b (topic-branch-name)`` command can be
used to create and checkout a new branch with one command.

View File

@@ -0,0 +1,18 @@
Create a Github Account
=======================
This page goes over how to create a Github account. There are no special
settings needed to use your Github account on the `ZFS on Linux
Project <https://github.com/zfsonlinux>`__.
Github did an excellent job of documenting how to create an account. The
following link provides everything you need to know to get your Github
account up and running.
`https://help.github.com/articles/signing-up-for-a-new-github-account/ <https://help.github.com/articles/signing-up-for-a-new-github-account/>`__
In addition, the following articles might be useful:
- `https://help.github.com/articles/keeping-your-account-and-data-secure/ <https://help.github.com/articles/keeping-your-account-and-data-secure/>`__
- `https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/ <https://help.github.com/articles/securing-your-account-with-two-factor-authentication-2fa/>`__
- `https://help.github.com/articles/adding-a-fallback-authentication-method-with-recover-accounts-elsewhere/ <https://help.github.com/articles/adding-a-fallback-authentication-method-with-recover-accounts-elsewhere/>`__

View File

@@ -0,0 +1,2 @@
Create a New Test
=================

View File

@@ -0,0 +1,11 @@
Delete a Branch
===============
When a commit has been accepted and merged into the main ZFS repository,
the developer's topic branch should be deleted. This is also appropriate
if the developer abandons the change, and could be appropriate if they
change the direction of the change.
To delete a topic branch, navigate to the base directory of your local
Git repository and use the ``git branch -d (branch-name)`` command. The
name of the branch should be the same as the branch that was created.
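For example, using the same placeholder convention as the other workflow
pages (note that the branch being deleted cannot be the currently
checked-out branch, so *master* is checked out first):
::
$ cd zfs
$ git checkout master
$ git branch -d (topic-branch-name)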

View File

@@ -0,0 +1,2 @@
Generate a PR
=============

View File

@@ -0,0 +1,52 @@
.. raw:: html
<!--- When this page is updated, please also check the 'Get-the-Source-Code' page -->
Get the Source Code
===================
This document goes over how a developer can get the ZFS source code for
the purpose of making changes to it. For other purposes, please see the
`Get the Source
Code <https://github.com/zfsonlinux/zfs/wiki/Get-the-Source-Code>`__
page.
The Git *master* branch contains the latest version of the software,
including changes that weren't included in the released tarball. This is
the preferred source code location and procedure for ZFS development. If
you would like to do development work for the `ZFS on Linux
Project <https://github.com/zfsonlinux>`__, you can fork the Github
repository and prepare the source by using the following process.
1. Go to the `ZFS on Linux Project <https://github.com/zfsonlinux>`__
and fork both the ZFS and SPL repositories. This will create two new
repositories (your *forked* repositories) under your account.
Detailed instructions can be found at
`https://help.github.com/articles/fork-a-repo/ <https://help.github.com/articles/fork-a-repo/>`__.
2. Clone both of these repositories onto your development system. This
will create your *local* repositories. As an example, if your Github
account is *newzfsdeveloper*, the commands to clone the repositories
would be:
::
$ mkdir zfs-on-linux
$ cd zfs-on-linux
$ git clone https://github.com/newzfsdeveloper/spl.git
$ git clone https://github.com/newzfsdeveloper/zfs.git
3. Enter the following commands to make the necessary linkage to the
*upstream master* repositories and prepare the source to be compiled:
::
$ cd spl
$ git remote add upstream https://github.com/zfsonlinux/spl.git
$ ./autogen.sh
$ cd ../zfs
$ git remote add upstream https://github.com/zfsonlinux/zfs.git
$ ./autogen.sh
$ cd ..
The ``./autogen.sh`` script generates the build files. If the build
system is updated by any developer, these scripts need to be run again.

View File

@@ -0,0 +1,50 @@
Install Git
===========
To work with the ZFS software on Github, it's necessary to install the
Git software on your computer and set it up. This page covers that
process for some common Linux operating systems. Other Linux operating
systems should be similar.
Install the Software Package
----------------------------
The first step is to actually install the Git software package. This
package can be found in the repositories used by most Linux
distributions. If your distribution isn't listed here, or you'd like to
install from source, please have a look in the `official Git
documentation <https://git-scm.com/download/linux>`__.
Red Hat and CentOS
~~~~~~~~~~~~~~~~~~
::
# yum install git
Fedora
~~~~~~
::
$ sudo dnf install git
Debian and Ubuntu
~~~~~~~~~~~~~~~~~
::
$ sudo apt install git
Configuring Git
---------------
Your user name and email address must be set within Git before you can
make commits to the ZFS project. In addition, your preferred text editor
should be set to whatever you would like to use.
::
$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com
$ git config --global core.editor emacs
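To confirm the settings took effect, the global configuration can be
listed:
::
$ git config --global --list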

View File

@@ -0,0 +1,2 @@
Adding Large Features
=====================

View File

@@ -0,0 +1,9 @@
Merge a PR
==========
Once all the feedback has been addressed, the PR will be merged into the
*master* branch by a member with write permission (most members don't
have this permission).
After the PR has been merged, it is eligible to be added to the
*release* branch.

28
docs/Workflow-Rebase.rst Normal file
View File

@@ -0,0 +1,28 @@
Rebase the Update
=================
Updates to the ZFS on Linux project should always be based on the
current *master* branch. This makes them easier to merge into the
repository.
There are two steps in the rebase process. The first step is to update
the *local master* branch from the *upstream master* repository. This
can be done by entering the following commands:
::
$ git fetch upstream master
$ git checkout master
$ git merge upstream/master
The second step is to perform the actual rebase of the updates. This is
done by entering the command ``git rebase upstream/master``. If there
are any conflicts between the updates in your *local* branch and the
updates in the *upstream master* branch, you will be informed of them,
and allowed to correct them (see the
`Conflicts <https://github.com/zfsonlinux/zfs/wiki/Workflow-Conflicts>`__
page).
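For example, assuming your updates are on a hypothetical branch named
*my-feature* (substitute your actual branch name), the rebase would look
like:
::
$ git checkout my-feature
$ git rebase upstream/master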
This would also be a good time to
`squash <https://github.com/zfsonlinux/zfs/wiki/Workflow-Squash>`__ your
commits.

Squash the Commits
==================
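One common way to squash commits is an interactive rebase. As a minimal
sketch, assuming the branch contains three commits that should become
one (adjust the count to match your branch):
::
$ git rebase -i HEAD~3
In the editor that opens, leave ``pick`` on the first commit, change the
following commits to ``squash`` (or ``s``), then save and edit the
combined commit message.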

Testing Changes to ZFS
======================
The code in the ZFS on Linux project is quite complex. A minor error in
a change could easily introduce new bugs into the software, causing
unforeseeable problems. In an attempt to avoid this, the ZTS (ZFS Test
Suite) was developed. This test suite is run automatically against
multiple architectures and distributions when a PR (Pull Request) is
submitted on Github.
A subset of the full test suite can be run by the developer to perform a
preliminary verification of the changes in their *local* repository.
Style Testing
-------------
The first part of the testing is to verify that the software meets the
project's style guidelines. To verify that the code meets those
guidelines, run ``make checkstyle`` from the *local* repository.
Basic Functionality Testing
---------------------------
The second part of the testing is to verify basic functionality. This is
to ensure that the changes made don't break previous functionality.
There are a few helper scripts provided in the top-level scripts
directory designed to aid developers working with in-tree builds.
- **zfs-helpers.sh:** Certain functionality (e.g. /dev/zvol/) depends on
the ZFS provided udev helper scripts being installed on the system.
This script can be used to create symlinks on the system from the
installation location to the in-tree helper. These links must be in
place to successfully run the ZFS Test Suite. The ``-i`` and ``-r``
options can be used to install and remove the symlinks.
::
$ sudo ./scripts/zfs-helpers.sh -i
- **zfs.sh:** The freshly built kernel modules from the *local*
repository can be loaded using ``zfs.sh``. This script will load those
modules **even if other ZFS modules are already loaded** from another
location, which could cause long-term problems if any non-testing
filesystems on the system use ZFS.
The script can later be used to unload the kernel modules with the
``-u`` option.
::
$ sudo ./scripts/zfs.sh
- **zfs-tests.sh:** A wrapper which can be used to launch the ZFS Test
Suite. Three loopback devices are created on top of sparse files
located in ``/var/tmp/`` and used for the regression test. Detailed
directions for running the ZTS can be found in the `ZTS
Readme <https://github.com/zfsonlinux/zfs/tree/master/tests>`__ file.
**WARNING**: This script should **only** be run on a development system.
It makes configuration changes to the system to run the tests, and it
*tries* to remove those changes after completion, but the change removal
could fail, and dynamic changes of this nature are usually undesirable on
a production system. For more information on the changes made, please
see the `ZTS
Readme <https://github.com/zfsonlinux/zfs/tree/master/tests>`__ file.
::
$ sudo ./scripts/zfs-tests.sh -vx
**tip:** The **delegate** tests will be skipped unless group read
permission is set on the zfs directory and its parents.
- **zloop.sh:** A wrapper to run ztest repeatedly with randomized
arguments. The ztest command is a user space stress test designed to
detect correctness issues by concurrently running a random set of
test cases. If a crash is encountered, the ztest logs, any associated
vdev files, and core file (if one exists) are collected and moved to
the output directory for analysis.
If there are any failures in this test, please see the `zloop
debugging <https://github.com/zfsonlinux/zfs/wiki/Workflow-Zloop-Debugging>`__
page.
::
$ sudo ./scripts/zloop.sh
Change Testing
--------------
Finally, it's necessary to verify that the changes made actually do what
they were intended to do. The extent of the testing would depend on the
complexity of the changes.
After the changes are tested, if the testing can be automated for
addition to ZTS, a `new
test <https://github.com/zfsonlinux/zfs/wiki/Workflow-Create-Test>`__
should be created. This test should be part of the PR that resolves the
issue or adds the feature. If the feature is split into multiple PRs,
some testing should be included in the first, with additions to the test
as required.
Note that if the change adds many lines of code that are not exercised
by the ZTS, it will not pass the coverage testing.

Update a PR
===========

Debugging *Zloop* Failures
==========================

ZFS Transaction Delay
~~~~~~~~~~~~~~~~~~~~~
ZFS write operations are delayed when the backend storage isn't able to
accommodate the rate of incoming writes. This delay process is known as
the ZFS write throttle.
If there is already a write transaction waiting, the delay is relative
to when that transaction will finish waiting. Thus the calculated delay
time is independent of the number of threads concurrently executing
transactions.
If there is only one waiter, the delay is relative to when the
transaction started, rather than the current time. This credits the
transaction for "time already served." For example, if a write
transaction requires reading indirect blocks first, then the delay is
counted at the start of the transaction, just prior to the indirect
block reads.
The minimum time for a transaction to take is calculated as:
::
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
min_time is then capped at 100 milliseconds
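As a worked example (assuming the default zfs_delay_scale of 500,000
nanoseconds), when the amount of dirty data sits exactly halfway between
the minimum delay threshold and the maximum, the two terms cancel and
the delay equals zfs_delay_scale:
::
dirty    = (min + max) / 2
min_time = zfs_delay_scale * ((max - min) / 2) / ((max - min) / 2)
         = zfs_delay_scale = 500 microseconds
This matches the 500 microsecond midpoint delay shown in the curves
below.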
The delay has two degrees of freedom that can be adjusted via tunables:
1. The percentage of dirty data at which we start to delay is defined by
zfs_delay_min_dirty_percent. This is typically at or above
zfs_vdev_async_write_active_max_dirty_percent so delays occur after
writing at full speed has failed to keep up with the incoming write
rate.
2. The scale of the curve is defined by zfs_delay_scale. Roughly
speaking, this variable determines the amount of delay at the
midpoint of the curve.
::
delay
10ms +-------------------------------------------------------------*+
| *|
9ms + *+
| *|
8ms + *+
| * |
7ms + * +
| * |
6ms + * +
| * |
5ms + * +
| * |
4ms + * +
| * |
3ms + * +
| * |
2ms + (midpoint) * +
| | ** |
1ms + v *** +
| zfs_delay_scale ----------> ******** |
0 +-------------------------------------*********----------------+
0% <- zfs_dirty_data_max -> 100%
Note that since the delay is added to the outstanding time remaining on
the most recent transaction, the delay is effectively the inverse of
IOPS. Here the midpoint of 500 microseconds translates to 2000 IOPS. The
shape of the curve was chosen such that small changes in the amount of
accumulated dirty data in the first 3/4 of the curve yield relatively
small differences in the amount of delay.
The effects can be easier to understand when the amount of delay is
represented on a log scale:
::
delay
100ms +-------------------------------------------------------------++
+ +
| |
+ *+
10ms + *+
+ ** +
| (midpoint) ** |
+ | ** +
1ms + v **** +
+ zfs_delay_scale ----------> ***** +
| **** |
+ **** +
100us + ** +
+ * +
| * |
+ * +
10us + * +
+ +
| |
+ +
+--------------------------------------------------------------+
0% <- zfs_dirty_data_max -> 100%
Note here that only as the amount of dirty data approaches its limit
does the delay start to increase rapidly. The goal of a properly tuned
system should be to keep the amount of dirty data out of that range by
first ensuring that the appropriate limits are set for the I/O scheduler
to reach optimal throughput on the backend storage, and then by changing
the value of zfs_delay_scale to increase the steepness of the curve.
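As an illustrative sketch (the new value here is hypothetical; 500,000
nanoseconds is the default), the tunable can be read and changed at
runtime through the module parameter interface:
::
# cat /sys/module/zfs/parameters/zfs_delay_scale
500000
# echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale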

ZFS I/O (ZIO) Scheduler
=======================
ZFS issues I/O operations to leaf vdevs (usually devices) to satisfy and
complete I/Os. The ZIO scheduler determines when and in what order those
operations are issued. Operations are divided into five I/O classes, prioritized in
the following order:
+----------+-------------+-------------------------------------------+
| Priority | I/O Class | Description |
+==========+=============+===========================================+
| highest | sync read | most reads |
+----------+-------------+-------------------------------------------+
| | sync write | as defined by application or via 'zfs' |
| | | 'sync' property |
+----------+-------------+-------------------------------------------+
| | async read | prefetch reads |
+----------+-------------+-------------------------------------------+
| | async write | most writes |
+----------+-------------+-------------------------------------------+
| lowest | scrub read | scan read: includes both scrub and |
| | | resilver |
+----------+-------------+-------------------------------------------+
Each queue defines the minimum and maximum number of concurrent
operations issued to the device. In addition, the device has an
aggregate maximum, zfs_vdev_max_active. Note that the sum of the
per-queue minimums must not exceed the aggregate maximum. If the sum of
the per-queue maximums exceeds the aggregate maximum, then the number of
active I/Os may reach zfs_vdev_max_active, in which case no further I/Os
are issued regardless of whether all per-queue minimums have been met.
+-------------+---------------------------------+---------------------------------+
| I/O Class   | Min Active Parameter            | Max Active Parameter            |
+=============+=================================+=================================+
| sync read   | zfs_vdev_sync_read_min_active   | zfs_vdev_sync_read_max_active   |
+-------------+---------------------------------+---------------------------------+
| sync write  | zfs_vdev_sync_write_min_active  | zfs_vdev_sync_write_max_active  |
+-------------+---------------------------------+---------------------------------+
| async read  | zfs_vdev_async_read_min_active  | zfs_vdev_async_read_max_active  |
+-------------+---------------------------------+---------------------------------+
| async write | zfs_vdev_async_write_min_active | zfs_vdev_async_write_max_active |
+-------------+---------------------------------+---------------------------------+
| scrub read  | zfs_vdev_scrub_min_active       | zfs_vdev_scrub_max_active       |
+-------------+---------------------------------+---------------------------------+
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have
no effect on throughput or can actually cause performance to decrease.
The ZIO scheduler selects the next operation to issue by first looking
for an I/O class whose minimum has not been satisfied. Once all are
satisfied and the aggregate maximum has not been hit, the scheduler
looks for classes whose maximum has not been satisfied. Iteration
through the I/O classes is done in the order specified above. No further
operations are issued if the aggregate maximum number of concurrent
operations has been hit or if there are no operations queued for an I/O
class that has not hit its maximum. Every time an I/O is queued or an
operation completes, the I/O scheduler looks for new operations to
issue.
In general, smaller max_active values will lead to lower latency of
synchronous operations. Larger max_active values may lead to higher
overall throughput, depending on underlying storage and the I/O mix.
The ratio of the queues' max_active values determines the balance of
performance between reads, writes, and scrubs. For example, when there
is contention, increasing zfs_vdev_scrub_max_active will cause the scrub
or resilver to complete more quickly, but cause reads and writes to have
higher latency and lower throughput.
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups (txgs). Transaction groups enter the syncing state
periodically so the number of queued async writes quickly bursts up and
then reduces to zero. The zfs_txg_timeout tunable (default=5
seconds) sets the target interval for txg sync. Thus a burst of async
writes every 5 seconds is a normal ZFS I/O pattern.
Rather than servicing I/Os as quickly as possible, the ZIO scheduler
changes the maximum number of active async write I/Os according to the
amount of dirty data in the pool. Since both throughput and latency
typically increase with the number of concurrent operations issued to
physical devices, reducing the burstiness in the number of concurrent
operations also stabilizes the response time of operations from other
queues. This is particularly important for the sync read and write queues,
where the periodic async write bursts of the txg sync can lead to
device-level contention. In broad strokes, the ZIO scheduler issues more
concurrent operations from the async write queue as there's more dirty
data in the pool.

[[Home]] / [[Project and Community]] / [[Developer Resources]] /
[[License]] |Creative Commons License|
.. |Creative Commons License| image:: https://i.creativecommons.org/l/by-sa/3.0/80x15.png
:target: http://creativecommons.org/licenses/by-sa/3.0/

- [[Home]]
- [[Getting Started]]
- `ArchLinux <https://wiki.archlinux.org/index.php/ZFS>`__
- [[Debian]]
- [[Fedora]]
- `FreeBSD <https://zfsonfreebsd.github.io/ZoF/>`__
- `Gentoo <https://wiki.gentoo.org/wiki/ZFS>`__
- `openSUSE <https://software.opensuse.org/package/zfs>`__
- [[RHEL and CentOS]]
- [[Ubuntu]]
- [[Project and Community]]
- [[Admin Documentation]]
- [[FAQ]]
- [[Mailing Lists]]
- `Releases <https://github.com/zfsonlinux/zfs/releases>`__
- [[Signing Keys]]
- `Issue Tracker <https://github.com/zfsonlinux/zfs/issues>`__
- `Roadmap <https://github.com/zfsonlinux/zfs/milestones>`__
- [[Developer Resources]]
- [[Custom Packages]]
- [[Building ZFS]]
- `Buildbot
Status <http://build.zfsonlinux.org/tgrid?length=100&branch=master&category=Platforms&rev_order=desc>`__
- `Buildbot Issue
Tracking <http://build.zfsonlinux.org/known-issues.html>`__
- `Buildbot
Options <https://github.com/zfsonlinux/zfs/wiki/Buildbot-Options>`__
- `OpenZFS
Tracking <http://build.zfsonlinux.org/openzfs-tracking.html>`__
- [[OpenZFS Patches]]
- [[OpenZFS Exceptions]]
- `OpenZFS
Documentation <http://open-zfs.org/wiki/Developer_resources>`__
- [[Git and GitHub for beginners]]
- Performance and Tuning
- [[ZFS on Linux Module Parameters]]
- `ZFS Transaction Delay and Write
Throttle <https://github.com/zfsonlinux/zfs/wiki/ZFS-Transaction-Delay>`__
- [[ZIO Scheduler]]
- [[Checksums]]
- `Asynchronous
Writes <https://github.com/zfsonlinux/zfs/wiki/Async-Write>`__

Introduction
============
raidz vs draid
--------------
ZFS users are most likely very familiar with raidz already, so a
comparison with draid would help. The illustrations below are
simplified, but sufficient for the purpose of a comparison. For example,
31 drives can be configured as a zpool of 6 raidz1 vdevs and a hot
spare: |raidz1|
As shown above, if drive 0 fails and is replaced by the hot spare, only
5 out of the 30 surviving drives will work to resilver: drives 1-4 read,
and drive 30 writes.
The same 31 drives can be configured as 1 draid1 vdev of the same level
of redundancy (i.e. single parity, 1/4 parity ratio) and single spare
capacity: |draid1|
The drives are shuffled in a way that, after drive 0 fails, all 30
surviving drives will work together to restore the lost data/parity:
- All 30 drives read, because unlike the raidz1 configuration shown
above, in the draid1 configuration the neighbor drives of the failed
drive 0 (i.e. drives in the same data+parity group) are not fixed.
- All 30 drives write, because now there is no dedicated spare drive.
Instead, spare blocks come from all drives.
To summarize:
- Normal application IO: draid and raidz are very similar. There's a
slight advantage in draid, since there's no dedicated spare drive
which is idle when not in use.
- Restore lost data/parity: for raidz, not all surviving drives will
work to rebuild, and in addition it's bounded by the write throughput
of a single replacement drive. For draid, the rebuild speed will
scale with the total number of drives because all surviving drives
will work to rebuild.
The dRAID vdev must shuffle its child drives in a way that regardless of
which drive has failed, the rebuild IO (both read and write) will
distribute evenly among all surviving drives, so the rebuild speed will
scale. The exact mechanism used by the dRAID vdev driver is beyond the
scope of this introduction. If interested, please refer to
the recommended readings in the next section.
Recommended Reading
-------------------
Parity declustering (the fancy term for shuffling drives) has been an
active research topic, and many papers have been published in this area.
The `Permutation Development Data
Layout <http://www.cse.scu.edu/~tschwarz/TechReports/hpca.pdf>`__ is a
good paper to begin. The dRAID vdev driver uses a shuffling algorithm
loosely based on the mechanism described in this paper.
Using dRAID
===========
First get the code `here <https://github.com/openzfs/zfs/pull/10102>`__,
build zfs with *configure --enable-debug*, and install. Then load the
zfs kernel module with the following options which help dRAID rebuild
performance.
- zfs_vdev_scrub_max_active=10
- zfs_vdev_async_write_min_active=4
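For example, both options can be supplied when loading the module:
::
# modprobe zfs zfs_vdev_scrub_max_active=10 zfs_vdev_async_write_min_active=4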
Create a dRAID vdev
-------------------
Similar to raidz vdev a dRAID vdev can be created using the
``zpool create`` command:
::
# zpool create <pool> draid[1,2,3] <vdevs...>
Unlike raidz, additional options may be provided as part of the
``draid`` vdev type to specify an exact dRAID layout. When unspecified,
reasonable defaults will be chosen.
::
# zpool create <pool> draid[1,2,3][:<groups>g][:<spares>s][:<data>d][:<iterations>] <vdevs...>
- groups - Number of redundancy groups (default: 1 group per 12 vdevs)
- spares - Number of distributed hot spares (default: 1)
- data - Number of data devices per group (default: determined by
number of groups)
- iterations - Number of iterations to perform when generating a valid
dRAID mapping (default: 3).
*Notes*:
- The default values are not set in stone and may change.
- For the majority of common configurations we intend to provide
pre-computed balanced dRAID mappings.
- When *data* is specified, (draid_children - spares) % (parity +
data) must equal 0; otherwise the pool creation will fail.
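As a sketch for experimentation only (backed by hypothetical sparse
test files rather than real drives), a 53-child draid2 vdev with 4
groups and 2 distributed spares, like the one shown below, could be
created with:
::
# truncate -s 1G /var/tmp/L{0..52}
# zpool create tank draid2:4g:2s /var/tmp/L{0..52}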
Now the dRAID vdev is online and ready for IO:
::
pool: tank
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
draid2:4g:2s-0 ONLINE 0 0 0
L0 ONLINE 0 0 0
L1 ONLINE 0 0 0
L2 ONLINE 0 0 0
L3 ONLINE 0 0 0
...
L50 ONLINE 0 0 0
L51 ONLINE 0 0 0
L52 ONLINE 0 0 0
spares
s0-draid2:4g:2s-0 AVAIL
s1-draid2:4g:2s-0 AVAIL
errors: No known data errors
There are two logical hot spare vdevs shown above at the bottom:
- The names begin with a ``s<id>-`` followed by the name of the parent
dRAID vdev.
- These hot spares are logical, made from reserved blocks on all the 53
child drives of the dRAID vdev.
- Unlike traditional hot spares, the distributed spare can only replace
a drive in its parent dRAID vdev.
The dRAID vdev behaves just like a raidz vdev of the same parity level.
You can do IO to/from it, scrub it, or fail a child drive, and it will
operate in degraded mode.
Rebuild to distributed spare
----------------------------
When there's a failed/offline child drive, the dRAID vdev supports a
completely new mechanism to reconstruct lost data/parity, in addition to
the resilver. First of all, resilver is still supported - if a failed
drive is replaced by another physical drive, the resilver process is
used to reconstruct lost data/parity to the new replacement drive, which
is the same as a resilver in a raidz vdev.
But if a child drive is replaced with a distributed spare, a new process
called rebuild is used instead of resilver:
::
# zpool offline tank sdo
# zpool replace tank sdo '%draid1-0-s0'
# zpool status
pool: tank
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: rebuilt 2.00G in 0h0m5s with 0 errors on Fri Feb 24 20:37:06 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
sdj ONLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
spare-11 DEGRADED 0 0 0
sdo OFFLINE 0 0 0
%draid1-0-s0 ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 INUSE currently in use
%draid1-0-s1 AVAIL
The scan status line of the *zpool status* output now says *"rebuilt"*
instead of *"resilvered"*, because the lost data/parity was rebuilt to
the distributed spare by a brand new process called *"rebuild"*. The
main differences from *resilver* are:
- The rebuild process does not scan the whole block pointer tree.
Instead, it only scans the spacemap objects.
- The IO from rebuild is sequential, because it rebuilds metaslabs one
by one in sequential order.
- The rebuild process is not limited to block boundaries. For example,
if 10 64K blocks are allocated contiguously, then rebuild will fix
640K at one time. So rebuild process will generate larger IOs than
resilver.
- For all the benefits above, there is one price to pay. The rebuild
process cannot verify block checksums, since it doesn't have block
pointers.
- Moreover, the rebuild process requires support from the on-disk format,
and **only** works on draid and mirror vdevs. Resilver, on the other
hand, works with any vdev (including draid).
Although the rebuild process creates larger IOs, the drives will not
necessarily see large IO requests. The block device queue parameter
``/sys/block/*/queue/max_sectors_kb`` must be tuned accordingly. However,
since the rebuild IO is already sequential, the benefits of enabling
larger IO requests might be marginal.
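As an illustrative sketch (the device name and values are
hypothetical), the parameter can be checked and raised per drive:
::
# cat /sys/block/sdd/queue/max_sectors_kb
512
# echo 1024 > /sys/block/sdd/queue/max_sectors_kb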
At this point, redundancy has been fully restored without adding any new
drive to the pool. If another drive is offlined, the pool is still able
to do IO:
::
# zpool offline tank sdj
# zpool status
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: rebuilt 2.00G in 0h0m5s with 0 errors on Fri Feb 24 20:37:06 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
sdj OFFLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
spare-11 DEGRADED 0 0 0
sdo OFFLINE 0 0 0
%draid1-0-s0 ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 INUSE currently in use
%draid1-0-s1 AVAIL
As shown above, the *draid1-0* vdev is still in *DEGRADED* mode although
two child drives have failed and it's only single-parity. Since the
*%draid1-0-s1* is still *AVAIL*, full redundancy can be restored by
replacing *sdj* with it, without adding a new drive to the pool:
::
# zpool replace tank sdj '%draid1-0-s1'
# zpool status
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: rebuilt 2.13G in 0h0m5s with 0 errors on Fri Feb 24 23:20:59 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
spare-6 DEGRADED 0 0 0
sdj OFFLINE 0 0 0
%draid1-0-s1 ONLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
spare-11 DEGRADED 0 0 0
sdo OFFLINE 0 0 0
%draid1-0-s0 ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 INUSE currently in use
%draid1-0-s1 INUSE currently in use
Again, full redundancy has been restored without adding any new drive.
If another drive fails, the pool will still be able to handle IO, but
there'd be no more distributed spare to rebuild (both are in *INUSE*
state now). At this point, there's no urgency to add a new replacement
drive because the pool can survive yet another drive failure.
Rebuild for mirror vdev
~~~~~~~~~~~~~~~~~~~~~~~
The sequential rebuild process also works for the mirror vdev, when a
drive is attached to a mirror or a mirror child vdev is replaced.
By default, rebuild for mirror vdev is turned off. It can be turned on
using the zfs module option *spa_rebuild_mirror=1*.
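For example (a sketch using the module parameter interface):
::
# echo 1 > /sys/module/zfs/parameters/spa_rebuild_mirror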
Rebuild throttling
~~~~~~~~~~~~~~~~~~
The rebuild process may delay *zio* by *spa_vdev_scan_delay* if the
draid vdev has seen any important IO in the recent *spa_vdev_scan_idle*
period. But when a dRAID vdev has lost all redundancy, e.g. a draid2
with 2 faulted child drives, the rebuild process will go full speed by
ignoring *spa_vdev_scan_delay* and *spa_vdev_scan_idle* altogether
because the vdev is now in critical state.
After delaying, the rebuild zio is issued using priority
*ZIO_PRIORITY_SCRUB* for reads and *ZIO_PRIORITY_ASYNC_WRITE* for
writes. Therefore the options that control the queuing of these two IO
priorities will affect rebuild *zio* as well, for example
*zfs_vdev_scrub_min_active*, *zfs_vdev_scrub_max_active*,
*zfs_vdev_async_write_min_active*, and
*zfs_vdev_async_write_max_active*.
Rebalance
---------
Distributed spare space can be made available again by simply replacing
any failed drive with a new drive. This process is called *rebalance*,
which is essentially a *resilver*:
::
# zpool replace -f tank sdo sdw
# zpool status
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: resilvered 2.21G in 0h0m58s with 0 errors on Fri Feb 24 23:31:45 2017
config:
NAME STATE READ WRITE CKSUM
tank DEGRADED 0 0 0
draid1-0 DEGRADED 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
sdh ONLINE 0 0 0
sdu ONLINE 0 0 0
spare-6 DEGRADED 0 0 0
sdj OFFLINE 0 0 0
%draid1-0-s1 ONLINE 0 0 0
sdv ONLINE 0 0 0
sdl ONLINE 0 0 0
sdm ONLINE 0 0 0
sdn ONLINE 0 0 0
sdw ONLINE 0 0 0
sdp ONLINE 0 0 0
sdq ONLINE 0 0 0
sdr ONLINE 0 0 0
sds ONLINE 0 0 0
sdt ONLINE 0 0 0
spares
%draid1-0-s0 AVAIL
%draid1-0-s1 INUSE currently in use
Note that the scan status now says *"resilvered"*. Also, the state of
*%draid1-0-s0* has become *AVAIL* again. Since the resilver process
checks block checksums, it makes up for the lack of checksum
verification during the previous rebuild.
The dRAID1 vdev in this example shuffles three (4 data + 1 parity)
redundancy groups to the 17 drives. For any single drive failure, only
about 1/3 of the blocks are affected (and should be resilvered/rebuilt).
The rebuild process is able to avoid unnecessary work, but the resilver
process by default will not. The rebalance (which is essentially a
resilver) can be sped up significantly by setting the module option
*zfs_no_resilver_skip* to 0. This feature is turned off by default
because of issue
`https://github.com/zfsonlinux/zfs/issues/5806 <https://github.com/zfsonlinux/zfs/issues/5806>`__.
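For example (again a sketch using the module parameter interface):
::
# echo 0 > /sys/module/zfs/parameters/zfs_no_resilver_skip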
Troubleshooting
===============
Please report bugs to `the dRAID
PR <https://github.com/zfsonlinux/zfs/pull/10102>`__, as long as the
code is not merged upstream.
.. |raidz1| image:: https://cloud.githubusercontent.com/assets/6722662/23642396/9790e432-02b7-11e7-8198-ae9f17c61d85.png
.. |draid1| image:: https://cloud.githubusercontent.com/assets/6722662/23642395/9783ef8e-02b7-11e7-8d7e-31d1053ee4ff.png

Short explanation
~~~~~~~~~~~~~~~~~
The hole_birth feature has/had bugs, the result of which is that, if you
do a ``zfs send -i`` (or ``-R``, since it uses ``-i``) from an affected
dataset, the receiver will not see any checksum or other errors, but the
resulting destination snapshot will not match the source.
ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring the
faulty metadata which causes this issue *on the sender side*.
FAQ
~~~
I have a pool with hole_birth enabled, how do I know if I am affected?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is technically possible to calculate whether you have any affected
files, but it requires scraping zdb output for each file in each
snapshot in each dataset, which is a combinatoric nightmare. (If you
really want it, there is a proof of concept
`here <https://github.com/rincebrain/hole_birth_test>`__.)
Is there any less painful way to fix this if we have already received an affected snapshot?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
No, the data you need was simply not present in the send stream,
unfortunately, and cannot feasibly be rewritten in place.
Long explanation
~~~~~~~~~~~~~~~~
hole_birth is a feature to speed up ZFS send -i - in particular, ZFS
used to not store metadata on when "holes" (sparse regions) in files
were created, so every zfs send -i needed to include every hole.
hole_birth, as the name implies, added tracking for the txg (transaction
group) when a hole was created, so that zfs send -i needed to send only
holes with a birth_time between (starting snapshot txg) and (ending
snapshot txg), and life was wonderful.
Unfortunately, hole_birth had a number of edge cases where it could
"forget" to set the birth_time of holes in some cases, causing it to
record the birth_time as 0 (the value used prior to hole_birth, and
essentially equivalent to "since file creation").
This meant that, when you did a zfs send -i, since zfs send does not
have any knowledge of the surrounding snapshots when sending a given
snapshot, it would see the creation txg as 0, conclude "oh, it is 0, I
must have already sent this before", and not include it.
This means that, on the receiving side, it does not know those holes
should exist, and does not create them. This leads to differences
between the source and the destination.
ZoL versions 0.6.5.8 and 0.7.0-rc1 (and above) default to ignoring this
metadata and always sending holes with birth_time 0, configurable using
the tunable known as ``ignore_hole_birth`` or
``send_holes_without_birth_time``. The latter is what OpenZFS
standardized on. ZoL version 0.6.5.8 only has the former, but for any
ZoL version with ``send_holes_without_birth_time``, they point to the
same value, so changing either will work.
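As an illustrative sketch (assuming a ZoL version that provides
``send_holes_without_birth_time``), the tunable can be inspected and set
through the module parameter interface; a value of 1 means holes are
always sent regardless of the recorded birth_time:
::
$ cat /sys/module/zfs/parameters/send_holes_without_birth_time
1
# echo 1 > /sys/module/zfs/parameters/send_holes_without_birth_time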