diff --git a/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst b/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst
index 6a1c90e..b75216d 100644
--- a/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst
+++ b/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst
@@ -19,19 +19,19 @@ Caution
System Requirements
~~~~~~~~~~~~~~~~~~~
-- `64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. gnome
- iso) `__
-- `A 64-bit kernel is strongly
- encouraged. `__
-- Installing on a drive which presents 4 KiB logical sectors (a “4Kn”
- drive) only works with UEFI booting. This not unique to ZFS. `GRUB
- does not and will not work on 4Kn with legacy (BIOS)
- booting. `__
+- `64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. gnome iso)
+ `__
+- `A 64-bit kernel is strongly encouraged.
+ `__
+- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive)
+   only works with UEFI booting. This is not unique to ZFS. `GRUB does not and
+ will not work on 4Kn with legacy (BIOS) booting.
+ `__
-Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of
-memory is recommended for normal performance in basic workloads. If you
-wish to use deduplication, you will need `massive amounts of
-RAM `__. Enabling
+Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory
+is recommended for normal performance in basic workloads. If you wish to use
+deduplication, you will need `massive amounts of RAM
+`__. Enabling
deduplication is a permanent change that cannot be easily reverted.
Support
@@ -99,855 +99,894 @@ encrypted once per disk.
Step 1: Prepare The Install Environment
---------------------------------------
-1.1 Boot the Debian GNU/Linux Live CD. If prompted, login with the
-username ``user`` and password ``live``. Connect your system to the
-Internet as appropriate (e.g. join your WiFi network).
+#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username
+ ``user`` and password ``live``. Connect your system to the Internet as
+ appropriate (e.g. join your WiFi network). Open a terminal.
-1.2 Optional: Install and start the OpenSSH server in the Live CD
-environment:
+#. Setup and update the repositories::
-If you have a second system, using SSH to access the target system can
-be convenient::
+      sudo vi /etc/apt/sources.list
- sudo apt update
- sudo apt install --yes openssh-server
- sudo systemctl restart ssh
+ .. code-block:: sourceslist
-**Hint:** You can find your IP address with
-``ip addr show scope global | grep inet``. Then, from your main machine,
-connect with ``ssh user@IP``.
+      deb http://deb.debian.org/debian buster main contrib
+      deb-src http://deb.debian.org/debian buster main contrib
+      deb http://deb.debian.org/debian buster-backports main contrib
-1.3 Become root::
+ ::
- sudo -i
+ sudo apt update
-1.4 Setup and update the repositories::
+#. Optional: Install and start the OpenSSH server in the Live CD environment:
- echo deb http://deb.debian.org/debian buster contrib >> /etc/apt/sources.list
- echo deb http://deb.debian.org/debian buster-backports main contrib >> /etc/apt/sources.list
- apt update
+ If you have a second system, using SSH to access the target system can be
+ convenient::
-1.5 Install ZFS in the Live CD environment::
+ sudo apt install --yes openssh-server
+ sudo systemctl restart ssh
- apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
- apt install --yes -t buster-backports --no-install-recommends zfs-dkms
- modprobe zfs
- apt install --yes -t buster-backports zfsutils-linux
+ **Hint:** You can find your IP address with
+ ``ip addr show scope global | grep inet``. Then, from your main machine,
+ connect with ``ssh user@IP``.
-- The dkms dependency is installed manually just so it comes from
- buster and not buster-backports. This is not critical.
-- We need to get the module built and loaded before installing
- zfsutils-linux or `zfs-mount.service will fail to
- start `__.
+#. Become root::
+
+ sudo -i
+
+#. Install ZFS in the Live CD environment::
+
+ apt install --yes debootstrap gdisk dkms dpkg-dev \
+ linux-headers-$(uname -r)
+ apt install --yes -t buster-backports --no-install-recommends zfs-dkms
+ modprobe zfs
+ apt install --yes -t buster-backports zfsutils-linux
+
+ - The dkms dependency is installed manually just so it comes from buster
+ and not buster-backports. This is not critical.
+ - We need to get the module built and loaded before installing
+ zfsutils-linux or `zfs-mount.service will fail to start
+ `__.
Step 2: Disk Formatting
-----------------------
-2.1 Set a variable with the disk name::
+#. Set a variable with the disk name::
- DISK=/dev/disk/by-id/scsi-SATA_disk1
+ DISK=/dev/disk/by-id/scsi-SATA_disk1
-Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the
-``/dev/sd*`` device nodes directly can cause sporadic import failures,
-especially on systems that have more than one storage pool.
+ Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the
+ ``/dev/sd*`` device nodes directly can cause sporadic import failures,
+ especially on systems that have more than one storage pool.
-**Hints:**
+ **Hints:**
-- ``ls -la /dev/disk/by-id`` will list the aliases.
-- Are you doing this in a virtual machine? If your virtual disk is
- missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using
- KVM with virtio; otherwise, read the
- `troubleshooting <#troubleshooting>`__ section.
+ - ``ls -la /dev/disk/by-id`` will list the aliases.
+ - Are you doing this in a virtual machine? If your virtual disk is missing
+ from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with
+ virtio; otherwise, read the `troubleshooting <#troubleshooting>`__
+ section.
-2.2 If you are re-using a disk, clear it as necessary:
+#. If you are re-using a disk, clear it as necessary:
-If the disk was previously used in an MD array, zero the superblock::
+ If the disk was previously used in an MD array, zero the superblock::
- apt install --yes mdadm
- mdadm --zero-superblock --force $DISK
+ apt install --yes mdadm
+ mdadm --zero-superblock --force $DISK
-Clear the partition table::
+ Clear the partition table::
- sgdisk --zap-all $DISK
+ sgdisk --zap-all $DISK
-2.3 Partition your disk(s):
+#. Partition your disk(s):
-Run this if you need legacy (BIOS) booting::
+ Run this if you need legacy (BIOS) booting::
- sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
+ sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK
-Run this for UEFI booting (for use now or in the future)::
+ Run this for UEFI booting (for use now or in the future)::
- sgdisk -n2:1M:+512M -t2:EF00 $DISK
+ sgdisk -n2:1M:+512M -t2:EF00 $DISK
-Run this for the boot pool::
+ Run this for the boot pool::
- sgdisk -n3:0:+1G -t3:BF01 $DISK
+ sgdisk -n3:0:+1G -t3:BF01 $DISK
-Choose one of the following options:
+ Choose one of the following options:
-2.3a Unencrypted or ZFS native encryption::
+ - Unencrypted or ZFS native encryption::
- sgdisk -n4:0:0 -t4:BF01 $DISK
+ sgdisk -n4:0:0 -t4:BF01 $DISK
-2.3b LUKS::
+ - LUKS::
- sgdisk -n4:0:0 -t4:8300 $DISK
+ sgdisk -n4:0:0 -t4:8300 $DISK
-If you are creating a mirror or raidz topology, repeat the partitioning
-commands for all the disks which will be part of the pool.
+ If you are creating a mirror or raidz topology, repeat the partitioning
+ commands for all the disks which will be part of the pool.
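+
+   For example, here is a sketch for a hypothetical second disk held in a
+   ``DISK2`` variable, assuming you ran all of the partitioning commands
+   above (drop any you skipped)::
+
+      DISK2=/dev/disk/by-id/scsi-SATA_disk2
+      sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK2
+      sgdisk -n2:1M:+512M -t2:EF00 $DISK2
+      sgdisk -n3:0:+1G -t3:BF01 $DISK2
+      sgdisk -n4:0:0 -t4:BF01 $DISK2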
-2.4 Create the boot pool::
+#. Create the boot pool::
- zpool create -o ashift=12 -d \
- -o feature@async_destroy=enabled \
- -o feature@bookmarks=enabled \
- -o feature@embedded_data=enabled \
- -o feature@empty_bpobj=enabled \
- -o feature@enabled_txg=enabled \
- -o feature@extensible_dataset=enabled \
- -o feature@filesystem_limits=enabled \
- -o feature@hole_birth=enabled \
- -o feature@large_blocks=enabled \
- -o feature@lz4_compress=enabled \
- -o feature@spacemap_histogram=enabled \
- -o feature@zpool_checkpoint=enabled \
- -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
- -O normalization=formD -O relatime=on -O xattr=sa \
- -O mountpoint=/boot -R /mnt bpool ${DISK}-part3
+ zpool create \
+ -o ashift=12 -d \
+ -o feature@async_destroy=enabled \
+ -o feature@bookmarks=enabled \
+ -o feature@embedded_data=enabled \
+ -o feature@empty_bpobj=enabled \
+ -o feature@enabled_txg=enabled \
+ -o feature@extensible_dataset=enabled \
+ -o feature@filesystem_limits=enabled \
+ -o feature@hole_birth=enabled \
+ -o feature@large_blocks=enabled \
+ -o feature@lz4_compress=enabled \
+ -o feature@spacemap_histogram=enabled \
+ -o feature@zpool_checkpoint=enabled \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
+ -O mountpoint=/boot -R /mnt \
+ bpool ${DISK}-part3
-You should not need to customize any of the options for the boot pool.
+ You should not need to customize any of the options for the boot pool.
-GRUB does not support all of the zpool features. See
-``spa_feature_names`` in
-`grub-core/fs/zfs/zfs.c `__.
-This step creates a separate boot pool for ``/boot`` with the features
-limited to only those that GRUB supports, allowing the root pool to use
-any/all features. Note that GRUB opens the pool read-only, so all
-read-only compatible features are “supported” by GRUB.
+ GRUB does not support all of the zpool features. See ``spa_feature_names``
+ in `grub-core/fs/zfs/zfs.c
+ `__.
+ This step creates a separate boot pool for ``/boot`` with the features
+ limited to only those that GRUB supports, allowing the root pool to use
+ any/all features. Note that GRUB opens the pool read-only, so all
+ read-only compatible features are “supported” by GRUB.
-**Hints:**
+ **Hints:**
-- If you are creating a mirror or raidz topology, create the pool using
- ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3``
- (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and
- list the partitions from additional disks).
-- The pool name is arbitrary. If changed, the new name must be used
- consistently. The ``bpool`` convention originated in this HOWTO.
+ - If you are creating a mirror topology, create the pool using::
-**Feature Notes:**
+ zpool create \
+ ... \
+ bpool mirror \
+ /dev/disk/by-id/scsi-SATA_disk1-part3 \
+ /dev/disk/by-id/scsi-SATA_disk2-part3
-- The ``allocation_classes`` feature should be safe to use. However, unless
- one is using it (i.e. a ``special`` vdev), there is no point to enabling it.
- It is extremely unlikely that someone would use this feature for a boot
- pool. If one cares about speeding up the boot pool, it would make more sense
- to put the whole pool on the faster disk rather than using it as a
- ``special`` vdev.
-- The ``project_quota`` feature has been tested and is safe to use. This
- feature is extremely unlikely to matter for the boot pool.
-- The ``resilver_defer`` should be safe but the boot pool is small enough that
- it is unlikely to be necessary.
-- The ``spacemap_v2`` feature has been tested and is safe to use. The boot
- pool is small, so this does not matter in practice.
-- As a read-only compatible feature, the ``userobj_accounting`` feature should
- be compatible in theory, but in practice, GRUB can fail with an “invalid
- dnode type” error. This feature does not matter for ``/boot`` anyway.
+ - For raidz topologies, replace ``mirror`` in the above command with
+ ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from
+ additional disks.
+ - The pool name is arbitrary. If changed, the new name must be used
+ consistently. The ``bpool`` convention originated in this HOWTO.
-2.5 Create the root pool:
+ **Feature Notes:**
-Choose one of the following options:
+ - The ``allocation_classes`` feature should be safe to use. However, unless
+ one is using it (i.e. a ``special`` vdev), there is no point to enabling
+ it. It is extremely unlikely that someone would use this feature for a
+ boot pool. If one cares about speeding up the boot pool, it would make
+ more sense to put the whole pool on the faster disk rather than using it
+ as a ``special`` vdev.
+ - The ``project_quota`` feature has been tested and is safe to use. This
+ feature is extremely unlikely to matter for the boot pool.
+   - The ``resilver_defer`` feature should be safe, but the boot pool is
+     small enough that it is unlikely to be necessary.
+ - The ``spacemap_v2`` feature has been tested and is safe to use. The boot
+ pool is small, so this does not matter in practice.
+ - As a read-only compatible feature, the ``userobj_accounting`` feature
+ should be compatible in theory, but in practice, GRUB can fail with an
+ “invalid dnode type” error. This feature does not matter for ``/boot``
+ anyway.
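+
+   If you want to double-check which features ended up enabled on the new
+   pool, one quick way is::
+
+      zpool get all bpool | grep feature@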
-2.5a Unencrypted::
+#. Create the root pool:
- zpool create -o ashift=12 \
- -O acltype=posixacl -O canmount=off -O compression=lz4 \
- -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
- -O mountpoint=/ -R /mnt rpool ${DISK}-part4
+ Choose one of the following options:
-2.5b ZFS native encryption::
+ - Unencrypted::
- zpool create -o ashift=12 \
- -O acltype=posixacl -O canmount=off -O compression=lz4 \
- -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
- -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
- -O mountpoint=/ -R /mnt rpool ${DISK}-part4
+ zpool create \
+ -o ashift=12 \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O dnodesize=auto -O normalization=formD -O relatime=on \
+ -O xattr=sa -O mountpoint=/ -R /mnt \
+ rpool ${DISK}-part4
-2.5c LUKS::
+ - ZFS native encryption::
- apt install --yes cryptsetup
- cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
- cryptsetup luksOpen ${DISK}-part4 luks1
- zpool create -o ashift=12 \
- -O acltype=posixacl -O canmount=off -O compression=lz4 \
- -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
- -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
+ zpool create \
+ -o ashift=12 \
+ -O encryption=aes-256-gcm \
+ -O keylocation=prompt -O keyformat=passphrase \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O dnodesize=auto -O normalization=formD -O relatime=on \
+ -O xattr=sa -O mountpoint=/ -R /mnt \
+ rpool ${DISK}-part4
-**Notes:**
+ - LUKS::
-- The use of ``ashift=12`` is recommended here because many drives
- today have 4 KiB (or larger) physical sectors, even though they
- present 512 B logical sectors. Also, a future replacement drive may
- have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
- or 4 KiB logical sectors (in which case ``ashift=12`` is required).
-- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
- do not want this, remove that option, but later add
- ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
- for ``/var/log``, as `journald requires
- ACLs `__
-- Setting ``normalization=formD`` eliminates some corner cases relating
- to UTF-8 filename normalization. It also implies ``utf8only=on``,
- which means that only UTF-8 filenames are allowed. If you care to
- support non-UTF-8 filenames, do not use this option. For a discussion
- of why requiring UTF-8 filenames may be a bad idea, see `The problems
- with enforced UTF-8 only
- filenames `__.
-- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you want to
- tune it (e.g. ``-o recordsize=1M``), see `these
- `__ `various
- `__ `blog
- `__
- `posts
- `__.
-- Setting ``relatime=on`` is a middle ground between classic POSIX
- ``atime`` behavior (with its significant performance impact) and
- ``atime=off`` (which provides the best performance by completely
- disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
- the default for other filesystems. See `RedHat’s
- documentation `__
- for further information.
-- Setting ``xattr=sa`` `vastly improves the performance of extended
- attributes `__.
- Inside ZFS, extended attributes are used to implement POSIX ACLs.
- Extended attributes can also be used by user-space applications.
- `They are used by some desktop GUI
- applications. `__
- `They can be used by Samba to store Windows ACLs and DOS attributes;
- they are required for a Samba Active Directory domain
- controller. `__
- Note that ``xattr=sa`` is
- `Linux-specific `__.
- If you move your ``xattr=sa`` pool to another OpenZFS implementation
- besides ZFS-on-Linux, extended attributes will not be readable
- (though your data will be). If portability of extended attributes is
- important to you, omit the ``-O xattr=sa`` above. Even if you do not
- want ``xattr=sa`` for the whole pool, it is probably fine to use it
- for ``/var/log``.
-- Make sure to include the ``-part4`` portion of the drive path. If you
- forget that, you are specifying the whole disk, which ZFS will then
- re-partition, and you will lose the bootloader partition(s).
-- ZFS native encryption defaults to ``aes-256-ccm``, but `the default has
- changed upstream `__
- to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM
- `__,
- `is faster now
- `__, and
- `will be even faster in the future
- `__.
-- For LUKS, the key size chosen is 512 bits. However, XTS mode requires
- two keys, so the LUKS key is split in half. Thus, ``-s 512`` means
- AES-256.
-- Your passphrase will likely be the weakest link. Choose wisely. See
- `section 5 of the cryptsetup
- FAQ `__
- for guidance.
+ apt install --yes cryptsetup
+ cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
+ cryptsetup luksOpen ${DISK}-part4 luks1
+ zpool create \
+ -o ashift=12 \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O dnodesize=auto -O normalization=formD -O relatime=on \
+ -O xattr=sa -O mountpoint=/ -R /mnt \
+ rpool /dev/mapper/luks1
-**Hints:**
+ **Notes:**
-- If you are creating a mirror or raidz topology, create the pool using
- ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4``
- (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and
- list the partitions from additional disks). For LUKS, use
- ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will
- have to create using ``cryptsetup``.
-- The pool name is arbitrary. If changed, the new name must be used
- consistently. On systems that can automatically install to ZFS, the
- root pool is named ``rpool`` by default.
+ - The use of ``ashift=12`` is recommended here because many drives
+ today have 4 KiB (or larger) physical sectors, even though they
+ present 512 B logical sectors. Also, a future replacement drive may
+ have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
+ or 4 KiB logical sectors (in which case ``ashift=12`` is required).
+ - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
+ do not want this, remove that option, but later add
+ ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
+ for ``/var/log``, as `journald requires ACLs
+     `__.
+ - Setting ``normalization=formD`` eliminates some corner cases relating
+ to UTF-8 filename normalization. It also implies ``utf8only=on``,
+ which means that only UTF-8 filenames are allowed. If you care to
+ support non-UTF-8 filenames, do not use this option. For a discussion
+ of why requiring UTF-8 filenames may be a bad idea, see `The problems
+ with enforced UTF-8 only filenames
+ `__.
+ - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you
+ want to tune it (e.g. ``-o recordsize=1M``), see `these
+ `__ `various
+ `__ `blog
+ `__
+ `posts
+ `__.
+ - Setting ``relatime=on`` is a middle ground between classic POSIX
+ ``atime`` behavior (with its significant performance impact) and
+ ``atime=off`` (which provides the best performance by completely
+ disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
+ the default for other filesystems. See `RedHat’s documentation
+ `__
+ for further information.
+ - Setting ``xattr=sa`` `vastly improves the performance of extended
+ attributes
+ `__.
+ Inside ZFS, extended attributes are used to implement POSIX ACLs.
+ Extended attributes can also be used by user-space applications.
+ `They are used by some desktop GUI applications.
+ `__
+ `They can be used by Samba to store Windows ACLs and DOS attributes;
+ they are required for a Samba Active Directory domain controller.
+ `__
+ Note that ``xattr=sa`` is `Linux-specific
+ `__. If you move your
+ ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux,
+ extended attributes will not be readable (though your data will be). If
+ portability of extended attributes is important to you, omit the
+ ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole
+ pool, it is probably fine to use it for ``/var/log``.
+ - Make sure to include the ``-part4`` portion of the drive path. If you
+ forget that, you are specifying the whole disk, which ZFS will then
+ re-partition, and you will lose the bootloader partition(s).
+ - ZFS native encryption defaults to ``aes-256-ccm``, but `the default has
+ changed upstream
+ `__
+ to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM
+ `__,
+ `is faster now
+ `__,
+ and `will be even faster in the future
+ `__.
+ - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two
+ keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256.
+ - Your passphrase will likely be the weakest link. Choose wisely. See
+ `section 5 of the cryptsetup FAQ
+ `__
+ for guidance.
+
+ **Hints:**
+
+   - If you are creating a mirror topology, create the pool using::
+
+       zpool create \
+         ... \
+         rpool mirror \
+         /dev/disk/by-id/scsi-SATA_disk1-part4 \
+         /dev/disk/by-id/scsi-SATA_disk2-part4
+
+ - For raidz topologies, replace ``mirror`` in the above command with
+ ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from
+ additional disks.
+ - When using LUKS with mirror or raidz topologies, use
+ ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have
+ to create using ``cryptsetup``.
+ - The pool name is arbitrary. If changed, the new name must be used
+ consistently. On systems that can automatically install to ZFS, the root
+ pool is named ``rpool`` by default.
Step 3: System Installation
---------------------------
-3.1 Create filesystem datasets to act as containers::
+#. Create filesystem datasets to act as containers::
- zfs create -o canmount=off -o mountpoint=none rpool/ROOT
- zfs create -o canmount=off -o mountpoint=none bpool/BOOT
+ zfs create -o canmount=off -o mountpoint=none rpool/ROOT
+ zfs create -o canmount=off -o mountpoint=none bpool/BOOT
-On Solaris systems, the root filesystem is cloned and the suffix is
-incremented for major system changes through ``pkg image-update`` or
-``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with the
-``zsys`` tool, though its dataset layout is more complicated. Even without
-such a tool, the `rpool/ROOT` and `bpool/BOOT` containers can still be used
-for manually created clones. That said, this HOWTO assumes a single filesystem
-for ``/boot`` for simplicity.
+ On Solaris systems, the root filesystem is cloned and the suffix is
+ incremented for major system changes through ``pkg image-update`` or
+ ``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with
+ the ``zsys`` tool, though its dataset layout is more complicated. Even
+   without such a tool, the ``rpool/ROOT`` and ``bpool/BOOT`` containers can
+   still be used for manually created clones. That said, this HOWTO assumes a
+   single filesystem for ``/boot`` for simplicity.
-3.2 Create filesystem datasets for the root and boot filesystems::
+#. Create filesystem datasets for the root and boot filesystems::
- zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
- zfs mount rpool/ROOT/debian
+ zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
+ zfs mount rpool/ROOT/debian
- zfs create -o mountpoint=/boot bpool/BOOT/debian
- zfs mount bpool/BOOT/debian
+ zfs create -o mountpoint=/boot bpool/BOOT/debian
+ zfs mount bpool/BOOT/debian
-With ZFS, it is not normally necessary to use a mount command (either
-``mount`` or ``zfs mount``). This situation is an exception because of
-``canmount=noauto``.
+ With ZFS, it is not normally necessary to use a mount command (either
+ ``mount`` or ``zfs mount``). This situation is an exception because of
+ ``canmount=noauto``.
-3.3 Create datasets::
+#. Create datasets::
- zfs create rpool/home
- zfs create -o mountpoint=/root rpool/home/root
- zfs create -o canmount=off rpool/var
- zfs create -o canmount=off rpool/var/lib
- zfs create rpool/var/log
- zfs create rpool/var/spool
+ zfs create rpool/home
+ zfs create -o mountpoint=/root rpool/home/root
+ zfs create -o canmount=off rpool/var
+ zfs create -o canmount=off rpool/var/lib
+ zfs create rpool/var/log
+ zfs create rpool/var/spool
-The datasets below are optional, depending on your preferences and/or
-software choices.
+ The datasets below are optional, depending on your preferences and/or
+ software choices.
-If you wish to exclude these from snapshots::
+ If you wish to exclude these from snapshots::
- zfs create -o com.sun:auto-snapshot=false rpool/var/cache
- zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
- chmod 1777 /mnt/var/tmp
+ zfs create -o com.sun:auto-snapshot=false rpool/var/cache
+ zfs create -o com.sun:auto-snapshot=false rpool/var/tmp
+ chmod 1777 /mnt/var/tmp
-If you use /opt on this system::
+ If you use /opt on this system::
- zfs create rpool/opt
+ zfs create rpool/opt
-If you use /srv on this system::
+ If you use /srv on this system::
- zfs create rpool/srv
+ zfs create rpool/srv
-If you use /usr/local on this system::
+ If you use /usr/local on this system::
- zfs create -o canmount=off rpool/usr
- zfs create rpool/usr/local
+ zfs create -o canmount=off rpool/usr
+ zfs create rpool/usr/local
-If this system will have games installed::
+ If this system will have games installed::
- zfs create rpool/var/games
+ zfs create rpool/var/games
-If this system will store local email in /var/mail::
+ If this system will store local email in /var/mail::
- zfs create rpool/var/mail
+ zfs create rpool/var/mail
-If this system will use Snap packages::
+ If this system will use Snap packages::
- zfs create rpool/var/snap
+ zfs create rpool/var/snap
-If you use /var/www on this system::
+ If you use /var/www on this system::
- zfs create rpool/var/www
+ zfs create rpool/var/www
-If this system will use GNOME::
+ If this system will use GNOME::
- zfs create rpool/var/lib/AccountsService
+ zfs create rpool/var/lib/AccountsService
-If this system will use Docker (which manages its own datasets &
-snapshots)::
+ If this system will use Docker (which manages its own datasets &
+ snapshots)::
- zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
+ zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker
-If this system will use NFS (locking)::
+ If this system will use NFS (locking)::
- zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
+ zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
-A tmpfs is recommended later, but if you want a separate dataset for
-``/tmp``::
+ A tmpfs is recommended later, but if you want a separate dataset for
+ ``/tmp``::
- zfs create -o com.sun:auto-snapshot=false rpool/tmp
- chmod 1777 /mnt/tmp
+ zfs create -o com.sun:auto-snapshot=false rpool/tmp
+ chmod 1777 /mnt/tmp
-The primary goal of this dataset layout is to separate the OS from user data.
-This allows the root filesystem to be rolled back without rolling back user
-data. The ``com.sun.auto-snapshot`` setting is used by some ZFS
-snapshot utilities to exclude transient data.
+ The primary goal of this dataset layout is to separate the OS from user
+ data. This allows the root filesystem to be rolled back without rolling
+ back user data.
-If you do nothing extra, ``/tmp`` will be stored as part of the root
-filesystem. Alternatively, you can create a separate dataset for
-``/tmp``, as shown above. This keeps the ``/tmp`` data out of snapshots
-of your root filesystem. It also allows you to set a quota on
-``rpool/tmp``, if you want to limit the maximum space used. Otherwise,
-you can use a tmpfs (RAM filesystem) later.
+ If you do nothing extra, ``/tmp`` will be stored as part of the root
+ filesystem. Alternatively, you can create a separate dataset for ``/tmp``,
+ as shown above. This keeps the ``/tmp`` data out of snapshots of your root
+ filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want
+ to limit the maximum space used. Otherwise, you can use a tmpfs (RAM
+ filesystem) later.
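+
+   For example, to cap a separate ``/tmp`` dataset at 4 GiB (an arbitrary
+   example size)::
+
+      zfs set quota=4G rpool/tmp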
-3.4 Install the minimal system::
+#. Install the minimal system::
- debootstrap buster /mnt
- zfs set devices=off rpool
+ debootstrap buster /mnt
+ zfs set devices=off rpool
-The ``debootstrap`` command leaves the new system in an unconfigured
-state. An alternative to using ``debootstrap`` is to copy the entirety
-of a working system into the new ZFS root.
+ The ``debootstrap`` command leaves the new system in an unconfigured state.
+ An alternative to using ``debootstrap`` is to copy the entirety of a
+ working system into the new ZFS root.
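+
+   As a rough sketch of the copy approach, assuming the working system is
+   mounted at a hypothetical ``/source`` (you would still need to fix up
+   ``/etc/fstab`` and similar afterwards)::
+
+      rsync -avxHAX /source/ /mnt/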
Step 4: System Configuration
----------------------------
-4.1 Configure the hostname:
+#. Configure the hostname:
-Replace ``HOSTNAME`` with the desired hostname::
+ Replace ``HOSTNAME`` with the desired hostname::
- echo HOSTNAME > /mnt/etc/hostname
- vi /mnt/etc/hosts
+ echo HOSTNAME > /mnt/etc/hostname
+ vi /mnt/etc/hosts
-.. code-block:: text
+ .. code-block:: text
- Add a line:
- 127.0.1.1 HOSTNAME
- or if the system has a real name in DNS:
- 127.0.1.1 FQDN HOSTNAME
+ Add a line:
+ 127.0.1.1 HOSTNAME
+ or if the system has a real name in DNS:
+ 127.0.1.1 FQDN HOSTNAME
-**Hint:** Use ``nano`` if you find ``vi`` confusing.
+ **Hint:** Use ``nano`` if you find ``vi`` confusing.
-4.2 Configure the network interface:
+#. Configure the network interface:
-Find the interface name::
+ Find the interface name::
- ip addr show
+ ip addr show
-Adjust NAME below to match your interface name::
+ Adjust ``NAME`` below to match your interface name::
- vi /mnt/etc/network/interfaces.d/NAME
+ vi /mnt/etc/network/interfaces.d/NAME
-.. code-block:: text
+ .. code-block:: text
- auto NAME
- iface NAME inet dhcp
+ auto NAME
+ iface NAME inet dhcp
-Customize this file if the system is not a DHCP client.
+ Customize this file if the system is not a DHCP client.
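+
+   For example, a static configuration might look like this (example
+   addresses only; adjust for your network):
+
+   .. code-block:: text
+
+      auto NAME
+      iface NAME inet static
+          address 192.168.1.100
+          netmask 255.255.255.0
+          gateway 192.168.1.1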
-4.3 Configure the package sources::
+#. Configure the package sources::
- vi /mnt/etc/apt/sources.list
+ vi /mnt/etc/apt/sources.list
-.. code-block:: sourceslist
+ .. code-block:: sourceslist
- deb http://deb.debian.org/debian buster main contrib
- deb-src http://deb.debian.org/debian buster main contrib
+ deb http://deb.debian.org/debian buster main contrib
+ deb-src http://deb.debian.org/debian buster main contrib
-::
+ ::
- vi /mnt/etc/apt/sources.list.d/buster-backports.list
+ vi /mnt/etc/apt/sources.list.d/buster-backports.list
-.. code-block:: sourceslist
+ .. code-block:: sourceslist
- deb http://deb.debian.org/debian buster-backports main contrib
- deb-src http://deb.debian.org/debian buster-backports main contrib
+ deb http://deb.debian.org/debian buster-backports main contrib
+ deb-src http://deb.debian.org/debian buster-backports main contrib
-::
+ ::
- vi /mnt/etc/apt/preferences.d/90_zfs
+ vi /mnt/etc/apt/preferences.d/90_zfs
-.. code-block:: control
+ .. code-block:: control
- Package: libnvpair1linux libuutil1linux libzfs2linux libzfslinux-dev libzpool2linux python3-pyzfs pyzfs-doc spl spl-dkms zfs-dkms zfs-dracut zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
- Pin: release n=buster-backports
- Pin-Priority: 990
+ Package: libnvpair1linux libuutil1linux libzfs2linux libzfslinux-dev libzpool2linux python3-pyzfs pyzfs-doc spl spl-dkms zfs-dkms zfs-dracut zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed
+ Pin: release n=buster-backports
+ Pin-Priority: 990
-4.4 Bind the virtual filesystems from the LiveCD environment to the new
-system and ``chroot`` into it::
+#. Bind the virtual filesystems from the LiveCD environment to the new
+ system and ``chroot`` into it::
- mount --rbind /dev /mnt/dev
- mount --rbind /proc /mnt/proc
- mount --rbind /sys /mnt/sys
- chroot /mnt /usr/bin/env DISK=$DISK bash --login
+ mount --rbind /dev /mnt/dev
+ mount --rbind /proc /mnt/proc
+ mount --rbind /sys /mnt/sys
+ chroot /mnt /usr/bin/env DISK=$DISK bash --login
-**Note:** This is using ``--rbind``, not ``--bind``.
+ **Note:** This is using ``--rbind``, not ``--bind``.
-4.5 Configure a basic system environment::
+#. Configure a basic system environment::
- ln -s /proc/self/mounts /etc/mtab
- apt update
+ ln -s /proc/self/mounts /etc/mtab
+ apt update
- apt install --yes locales
- dpkg-reconfigure locales
+ apt install --yes locales
+ dpkg-reconfigure locales
-Even if you prefer a non-English system language, always ensure that
-``en_US.UTF-8`` is available::
+ Even if you prefer a non-English system language, always ensure that
+   ``en_US.UTF-8`` is available. Then configure the timezone::
- dpkg-reconfigure tzdata
+ dpkg-reconfigure tzdata
-4.6 Install ZFS in the chroot environment for the new system::
+#. Install ZFS in the chroot environment for the new system::
- apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
- apt install --yes zfs-initramfs
- echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
+ apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
+ apt install --yes zfs-initramfs
+ echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
-4.7 For LUKS installs only, setup ``/etc/crypttab``::
+#. For LUKS installs only, setup ``/etc/crypttab``::
- apt install --yes cryptsetup
+ apt install --yes cryptsetup
- echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
- luks,discard,initramfs > /etc/crypttab
+ echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
+ luks,discard,initramfs > /etc/crypttab
-The use of ``initramfs`` is a work-around for `cryptsetup does not support ZFS
-`__.\
+   The use of ``initramfs`` is a work-around for the fact that `cryptsetup
+   does not support ZFS `__.
-**Hint:** If you are creating a mirror or raidz topology, repeat the
-``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk.
+ **Hint:** If you are creating a mirror or raidz topology, repeat the
+ ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk.
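+
+   For example, with a hypothetical second disk held in a ``DISK2``
+   variable::
+
+      echo luks2 UUID=$(blkid -s UUID -o value ${DISK2}-part4) none \
+         luks,discard,initramfs >> /etc/crypttab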
-4.8 Install GRUB
+#. Install GRUB
-Choose one of the following options:
+ Choose one of the following options:
-4.8a Install GRUB for legacy (BIOS) booting::
+ - Install GRUB for legacy (BIOS) booting::
- apt install --yes grub-pc
+ apt install --yes grub-pc
-Select (using the space bar) all of the disks (not partitions) in your pool.
+ Select (using the space bar) all of the disks (not partitions) in your
+ pool.
-4.8b Install GRUB for UEFI booting::
+ - Install GRUB for UEFI booting::
- apt install dosfstools
- mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
- mkdir /boot/efi
- echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \
- /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
- mount /boot/efi
- apt install --yes grub-efi-amd64 shim-signed
+ apt install dosfstools
+ mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
+ mkdir /boot/efi
+ echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \
+ /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab
+ mount /boot/efi
+ apt install --yes grub-efi-amd64 shim-signed
-**Notes:**
+ **Notes:**
-- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present
- 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size
- (given the partition size of 512 MiB) for FAT32. It also works fine on
- drives which present 512 B sectors.
-- For a mirror or raidz topology, this step only installs GRUB on the
- first disk. The other disk(s) will be handled later.
+ - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present
+ 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size
+ (given the partition size of 512 MiB) for FAT32. It also works fine on
+ drives which present 512 B sectors.
+ - For a mirror or raidz topology, this step only installs GRUB on the
+ first disk. The other disk(s) will be handled later.
-4.9 (Optional): Remove os-prober::
+#. Optional: Remove os-prober::
- dpkg --purge os-prober
+ dpkg --purge os-prober
-This avoids error messages from `update-grub`. `os-prober` is only necessary
-in dual-boot configurations.
+   This avoids error messages from ``update-grub``. ``os-prober`` is only
+   necessary in dual-boot configurations.
-4.10 Set a root password::
+#. Set a root password::
- passwd
+ passwd
-4.11 Enable importing bpool
+#. Enable importing bpool
-This ensures that ``bpool`` is always imported, regardless of whether
-``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not,
-or whether ``zfs-import-scan.service`` is enabled.
+ This ensures that ``bpool`` is always imported, regardless of whether
+ ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not,
+ or whether ``zfs-import-scan.service`` is enabled.
-::
+ ::
- vi /etc/systemd/system/zfs-import-bpool.service
+ vi /etc/systemd/system/zfs-import-bpool.service
-.. code-block:: ini
+ .. code-block:: ini
- [Unit]
- DefaultDependencies=no
- Before=zfs-import-scan.service
- Before=zfs-import-cache.service
+ [Unit]
+ DefaultDependencies=no
+ Before=zfs-import-scan.service
+ Before=zfs-import-cache.service
- [Service]
- Type=oneshot
- RemainAfterExit=yes
- ExecStart=/sbin/zpool import -N -o cachefile=none bpool
+ [Service]
+ Type=oneshot
+ RemainAfterExit=yes
+ ExecStart=/sbin/zpool import -N -o cachefile=none bpool
- [Install]
- WantedBy=zfs-import.target
+ [Install]
+ WantedBy=zfs-import.target
-::
+ ::
- systemctl enable zfs-import-bpool.service
+ systemctl enable zfs-import-bpool.service
-4.12 Optional (but recommended): Mount a tmpfs to ``/tmp``
+#. Optional (but recommended): Mount a tmpfs to ``/tmp``
-If you chose to create a ``/tmp`` dataset above, skip this step, as they
-are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
-tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
+ If you chose to create a ``/tmp`` dataset above, skip this step, as they
+ are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
+ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
-::
+ ::
- cp /usr/share/systemd/tmp.mount /etc/systemd/system/
- systemctl enable tmp.mount
+ cp /usr/share/systemd/tmp.mount /etc/systemd/system/
+ systemctl enable tmp.mount
-4.13 Optional (but kindly requested): Install popcon
+#. Optional (but kindly requested): Install popcon
-The ``popularity-contest`` package reports the list of packages install
-on your system. Showing that ZFS is popular may be helpful in terms of
-long-term attention from the distro.
+   The ``popularity-contest`` package reports the list of packages installed
+ on your system. Showing that ZFS is popular may be helpful in terms of
+ long-term attention from the distro.
-::
+ ::
- apt install --yes popularity-contest
+ apt install --yes popularity-contest
-Choose Yes at the prompt.
+ Choose Yes at the prompt.
Step 5: GRUB Installation
-------------------------
-5.1 Verify that the ZFS boot filesystem is recognized::
+#. Verify that the ZFS boot filesystem is recognized::
- grub-probe /boot
+ grub-probe /boot
-5.2 Refresh the initrd files::
+#. Refresh the initrd files::
- update-initramfs -c -k all
+ update-initramfs -c -k all
-**Note:** When using LUKS, this will print “WARNING could not determine
-root device from /etc/fstab”. This is because `cryptsetup does not
-support ZFS
-`__.
+ **Note:** When using LUKS, this will print “WARNING could not determine
+ root device from /etc/fstab”. This is because `cryptsetup does not
+ support ZFS
+ `__.
-5.3 Workaround GRUB's missing zpool-features support::
+#. Workaround GRUB's missing zpool-features support::
- vi /etc/default/grub
- # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
+ vi /etc/default/grub
+ # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
-5.4 Optional (but highly recommended): Make debugging GRUB easier::
+#. Optional (but highly recommended): Make debugging GRUB easier::
- vi /etc/default/grub
- # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
- # Uncomment: GRUB_TERMINAL=console
- # Save and quit.
+ vi /etc/default/grub
+ # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
+ # Uncomment: GRUB_TERMINAL=console
+ # Save and quit.
-Later, once the system has rebooted twice and you are sure everything is
-working, you can undo these changes, if desired.
+ Later, once the system has rebooted twice and you are sure everything is
+ working, you can undo these changes, if desired.
-5.5 Update the boot configuration::
+#. Update the boot configuration::
- update-grub
+ update-grub
-**Note:** Ignore errors from ``osprober``, if present.
+ **Note:** Ignore errors from ``osprober``, if present.
-5.6 Install the boot loader:
+#. Install the boot loader:
-5.6a For legacy (BIOS) booting, install GRUB to the MBR::
+ #. For legacy (BIOS) booting, install GRUB to the MBR::
- grub-install $DISK
+ grub-install $DISK
-Note that you are installing GRUB to the whole disk, not a partition.
+ Note that you are installing GRUB to the whole disk, not a partition.
-If you are creating a mirror or raidz topology, repeat the
-``grub-install`` command for each disk in the pool.
+ If you are creating a mirror or raidz topology, repeat the ``grub-install``
+ command for each disk in the pool.
-5.6b For UEFI booting, install GRUB::
+ #. For UEFI booting, install GRUB to the ESP::
- grub-install --target=x86_64-efi --efi-directory=/boot/efi \
- --bootloader-id=debian --recheck --no-floppy
+ grub-install --target=x86_64-efi --efi-directory=/boot/efi \
+ --bootloader-id=debian --recheck --no-floppy
-It is not necessary to specify the disk here. If you are creating a
-mirror or raidz topology, the additional disks will be handled later.
+ It is not necessary to specify the disk here. If you are creating a
+ mirror or raidz topology, the additional disks will be handled later.
-5.7 Fix filesystem mount ordering:
+#. Fix filesystem mount ordering:
-We need to activate ``zfs-mount-generator``. This makes systemd aware of
-the separate mountpoints, which is important for things like
-``/var/log`` and ``/var/tmp``. In turn, ``rsyslog.service`` depends on
-``var-log.mount`` by way of ``local-fs.target`` and services using the
-``PrivateTmp`` feature of systemd automatically use
-``After=var-tmp.mount``.
+ We need to activate ``zfs-mount-generator``. This makes systemd aware of
+ the separate mountpoints, which is important for things like ``/var/log``
+ and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount``
+ by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature
+ of systemd automatically use ``After=var-tmp.mount``.
-::
+ ::
- mkdir /etc/zfs/zfs-list.cache
- touch /etc/zfs/zfs-list.cache/bpool
- touch /etc/zfs/zfs-list.cache/rpool
- ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
- zed -F &
+ mkdir /etc/zfs/zfs-list.cache
+ touch /etc/zfs/zfs-list.cache/bpool
+ touch /etc/zfs/zfs-list.cache/rpool
+ ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
+ zed -F &
-Verify that ``zed`` updated the cache by making sure these are not empty::
+ Verify that ``zed`` updated the cache by making sure these are not empty::
- cat /etc/zfs/zfs-list.cache/bpool
- cat /etc/zfs/zfs-list.cache/rpool
+ cat /etc/zfs/zfs-list.cache/bpool
+ cat /etc/zfs/zfs-list.cache/rpool
-If either is empty, force a cache update and check again::
+ If either is empty, force a cache update and check again::
- zfs set canmount=on bpool/BOOT/debian
- zfs set canmount=noauto rpool/ROOT/debian
+ zfs set canmount=on bpool/BOOT/debian
+ zfs set canmount=noauto rpool/ROOT/debian
-Stop ``zed``::
+ Stop ``zed``::
- fg
- Press Ctrl-C.
+ fg
+ Press Ctrl-C.
-Fix the paths to eliminate ``/mnt``::
+ Fix the paths to eliminate ``/mnt``::
- sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
+ sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
Step 6: First Boot
------------------
-6.1 Snapshot the initial installation::
+#. Optional: Install SSH::
- zfs snapshot bpool/BOOT/debian@install
- zfs snapshot rpool/ROOT/debian@install
+ apt install --yes openssh-server
-In the future, you will likely want to take snapshots before each
-upgrade, and remove old snapshots (including this one) at some point to
-save space.
+ If you want to login as root via SSH, set ``PermitRootLogin yes`` in
+ ``/etc/ssh/sshd_config``. For security, undo this as soon as possible
+ (i.e. once you have your regular user account setup).
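+
+   One way to make that temporary change (a sketch; review before
+   running)::
+
+      sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
+      systemctl restart ssh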
-6.2 Exit from the ``chroot`` environment back to the LiveCD environment::
+#. Optional: Snapshot the initial installation::
- exit
+ zfs snapshot bpool/BOOT/debian@install
+ zfs snapshot rpool/ROOT/debian@install
-6.3 Run these commands in the LiveCD environment to unmount all
-filesystems::
+ In the future, you will likely want to take snapshots before each
+ upgrade, and remove old snapshots (including this one) at some point to
+ save space.
- mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
- zpool export -a
+#. Exit from the ``chroot`` environment back to the LiveCD environment::
-6.4 Reboot::
+ exit
- reboot
+#. Run these commands in the LiveCD environment to unmount all
+ filesystems::
-Wait for the newly installed system to boot normally. Login as root.
+ mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+ xargs -i{} umount -lf {}
+ zpool export -a
-6.5 Create a user account:
+#. Reboot::
-Replace ``username`` with your desired username::
+ reboot
- zfs create rpool/home/username
- adduser username
+ Wait for the newly installed system to boot normally. Login as root.
- cp -a /etc/skel/. /home/username
- chown -R username:username /home/username
- usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
+#. Create a user account:
-6.6 Mirror GRUB
+ Replace ``username`` with your desired username::
-If you installed to multiple disks, install GRUB on the additional
-disks:
+ zfs create rpool/home/username
+ adduser username
-6.6a For legacy (BIOS) booting::
+ cp -a /etc/skel/. /home/username
+ chown -R username:username /home/username
+ usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
- dpkg-reconfigure grub-pc
- Hit enter until you get to the device selection screen.
- Select (using the space bar) all of the disks (not partitions) in your pool.
+#. Mirror GRUB
-6.6b For UEFI booting::
+ If you installed to multiple disks, install GRUB on the additional
+ disks.
- umount /boot/efi
+ - For legacy (BIOS) booting::
-For the second and subsequent disks (increment debian-2 to -3, etc.)::
+ dpkg-reconfigure grub-pc
- dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
- of=/dev/disk/by-id/scsi-SATA_disk2-part2
- efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
- -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
+ Hit enter until you get to the device selection screen.
+ Select (using the space bar) all of the disks (not partitions) in your pool.
- mount /boot/efi
+ - For UEFI booting::
-Step 7: (Optional) Configure Swap
+ umount /boot/efi
+
+ For the second and subsequent disks (increment debian-2 to -3, etc.)::
+
+ dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \
+ of=/dev/disk/by-id/scsi-SATA_disk2-part2
+ efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \
+ -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi'
+
+ mount /boot/efi
+
+Step 7: Optional: Configure Swap
---------------------------------
**Caution**: On systems with extremely high memory pressure, using a
zvol for swap can result in lockup, regardless of how much swap is still
-available. This issue is currently being investigated in:
-`https://github.com/zfsonlinux/zfs/issues/7734 `__
+available. There is `a bug report upstream
+`__.
-7.1 Create a volume dataset (zvol) for use as a swap device::
+#. Create a volume dataset (zvol) for use as a swap device::
- zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
- -o logbias=throughput -o sync=always \
- -o primarycache=metadata -o secondarycache=none \
- -o com.sun:auto-snapshot=false rpool/swap
+ zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
+ -o logbias=throughput -o sync=always \
+ -o primarycache=metadata -o secondarycache=none \
+ -o com.sun:auto-snapshot=false rpool/swap
-You can adjust the size (the ``4G`` part) to your needs.
+ You can adjust the size (the ``4G`` part) to your needs.
-The compression algorithm is set to ``zle`` because it is the cheapest
-available algorithm. As this guide recommends ``ashift=12`` (4 kiB
-blocks on disk), the common case of a 4 kiB page size means that no
-compression algorithm can reduce I/O. The exception is all-zero pages,
-which are dropped by ZFS; but some form of compression has to be enabled
-to get this behavior.
+ The compression algorithm is set to ``zle`` because it is the cheapest
+ available algorithm. As this guide recommends ``ashift=12`` (4 kiB
+ blocks on disk), the common case of a 4 kiB page size means that no
+ compression algorithm can reduce I/O. The exception is all-zero pages,
+ which are dropped by ZFS; but some form of compression has to be enabled
+ to get this behavior.
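+
+   If you are curious, you can check the compression ratio actually
+   achieved on the swap zvol later::
+
+      zfs get compressratio rpool/swap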
-7.2 Configure the swap device:
+#. Configure the swap device:
-**Caution**: Always use long ``/dev/zvol`` aliases in configuration
-files. Never use a short ``/dev/zdX`` device name.
+ **Caution**: Always use long ``/dev/zvol`` aliases in configuration
+ files. Never use a short ``/dev/zdX`` device name.
-::
+ ::
- mkswap -f /dev/zvol/rpool/swap
- echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
- echo RESUME=none > /etc/initramfs-tools/conf.d/resume
+ mkswap -f /dev/zvol/rpool/swap
+ echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
+ echo RESUME=none > /etc/initramfs-tools/conf.d/resume
-The ``RESUME=none`` is necessary to disable resuming from hibernation.
-This does not work, as the zvol is not present (because the pool has not
-yet been imported) at the time the resume script runs. If it is not
-disabled, the boot process hangs for 30 seconds waiting for the swap
-zvol to appear.
+ The ``RESUME=none`` is necessary to disable resuming from hibernation.
+ This does not work, as the zvol is not present (because the pool has not
+ yet been imported) at the time the resume script runs. If it is not
+ disabled, the boot process hangs for 30 seconds waiting for the swap
+ zvol to appear.
-7.3 Enable the swap device::
+#. Enable the swap device::
- swapon -av
+ swapon -av
Step 8: Full Software Installation
----------------------------------
-8.1 Upgrade the minimal system::
+#. Upgrade the minimal system::
- apt dist-upgrade --yes
+ apt dist-upgrade --yes
-8.2 Install a regular set of software::
+#. Install a regular set of software::
- tasksel
+ tasksel
-8.3 Optional: Disable log compression:
+#. Optional: Disable log compression:
-As ``/var/log`` is already compressed by ZFS, logrotate’s compression is
-going to burn CPU and disk I/O for (in most cases) very little gain.
-Also, if you are making snapshots of ``/var/log``, logrotate’s
-compression will actually waste space, as the uncompressed data will
-live on in the snapshot. You can edit the files in ``/etc/logrotate.d``
-by hand to comment out ``compress``, or use this loop (copy-and-paste
-highly recommended)::
+ As ``/var/log`` is already compressed by ZFS, logrotate’s compression is
+ going to burn CPU and disk I/O for (in most cases) very little gain. Also,
+ if you are making snapshots of ``/var/log``, logrotate’s compression will
+ actually waste space, as the uncompressed data will live on in the
+ snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment
+ out ``compress``, or use this loop (copy-and-paste highly recommended)::
- for file in /etc/logrotate.d/* ; do
- if grep -Eq "(^|[^#y])compress" "$file" ; then
- sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
- fi
- done
+ for file in /etc/logrotate.d/* ; do
+ if grep -Eq "(^|[^#y])compress" "$file" ; then
+ sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
+ fi
+ done
-8.4 Reboot::
+#. Reboot::
- reboot
+ reboot
Step 9: Final Cleanup
---------------------
-9.1 Wait for the system to boot normally. Login using the account you
-created. Ensure the system (including networking) works normally.
+#. Wait for the system to boot normally. Login using the account you
+ created. Ensure the system (including networking) works normally.
-9.2 Optional: Delete the snapshots of the initial installation::
+#. Optional: Delete the snapshots of the initial installation::
- sudo zfs destroy bpool/BOOT/debian@install
- sudo zfs destroy rpool/ROOT/debian@install
+ sudo zfs destroy bpool/BOOT/debian@install
+ sudo zfs destroy rpool/ROOT/debian@install
-9.3 Optional: Disable the root password::
+#. Optional: Disable the root password::
- sudo usermod -p '*' root
+ sudo usermod -p '*' root
-9.4 Optional: Re-enable the graphical boot process:
+#. Optional: Re-enable the graphical boot process:
-If you prefer the graphical boot process, you can re-enable it now. If
-you are using LUKS, it makes the prompt look nicer.
+ If you prefer the graphical boot process, you can re-enable it now. If
+ you are using LUKS, it makes the prompt look nicer.
-::
+ ::
- sudo vi /etc/default/grub
- # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
- # Comment out GRUB_TERMINAL=console
- # Save and quit.
+ sudo vi /etc/default/grub
+ # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT
+ # Comment out GRUB_TERMINAL=console
+ # Save and quit.
- sudo update-grub
+ sudo update-grub
-**Note:** Ignore errors from ``osprober``, if present.
+ **Note:** Ignore errors from ``osprober``, if present.
-9.5 Optional: For LUKS installs only, backup the LUKS header::
+#. Optional: For LUKS installs only, backup the LUKS header::
- sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
- --header-backup-file luks1-header.dat
+ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
+ --header-backup-file luks1-header.dat
-Store that backup somewhere safe (e.g. cloud storage). It is protected
-by your LUKS passphrase, but you may wish to use additional encryption.
+ Store that backup somewhere safe (e.g. cloud storage). It is protected by
+ your LUKS passphrase, but you may wish to use additional encryption.
-**Hint:** If you created a mirror or raidz topology, repeat this for
-each LUKS volume (``luks2``, etc.).
+ **Hint:** If you created a mirror or raidz topology, repeat this for each
+ LUKS volume (``luks2``, etc.).
Troubleshooting
---------------
@@ -955,8 +994,8 @@ Troubleshooting
Rescuing using a Live CD
~~~~~~~~~~~~~~~~~~~~~~~~
-Go through `Step 1: Prepare The Install
-Environment <#step-1-prepare-the-install-environment>`__.
+Go through `Step 1: Prepare The Install Environment
+<#step-1-prepare-the-install-environment>`__.
For LUKS, first unlock the disk(s)::
@@ -987,45 +1026,38 @@ Do whatever you need to do to fix your system.
When done, cleanup::
exit
- mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+ mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+ xargs -i{} umount -lf {}
zpool export -a
reboot
-MPT2SAS
-~~~~~~~
-
-Most problem reports for this tutorial involve ``mpt2sas`` hardware that
-does slow asynchronous drive initialization, like some IBM M1015 or
-OEM-branded cards that have been flashed to the reference LSI firmware.
-
-The basic problem is that disks on these controllers are not visible to
-the Linux kernel until after the regular system is started, and ZoL does
-not hotplug pool members. See
-`https://github.com/zfsonlinux/zfs/issues/330 `__.
-
-Most LSI cards are perfectly compatible with ZoL. If your card has this
-glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in
-``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to
-appear before importing the pool.
-
Areca
~~~~~
Systems that require the ``arcsas`` blob driver should add it to the
-``/etc/initramfs-tools/modules`` file and run
-``update-initramfs -c -k all``.
+``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``.
Upgrade or downgrade the Areca driver if something like
``RIP: 0010:[] [] native_read_tsc+0x6/0x20``
-appears anywhere in kernel log. ZoL is unstable on systems that emit
-this error message.
+appears anywhere in kernel log. ZoL is unstable on systems that emit this
+error message.
-VMware
-~~~~~~
+MPT2SAS
+~~~~~~~
-- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere
- configuration. Doing this ensures that ``/dev/disk`` aliases are
- created in the guest.
+Most problem reports for this tutorial involve ``mpt2sas`` hardware that does
+slow asynchronous drive initialization, like some IBM M1015 or OEM-branded
+cards that have been flashed to the reference LSI firmware.
+
+The basic problem is that disks on these controllers are not visible to the
+Linux kernel until after the regular system is started, and ZoL does not
+hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330
+`__.
+
+Most LSI cards are perfectly compatible with ZoL. If your card has this
+glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in
+``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to
+appear before importing the pool.
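+
+For example, to wait 10 seconds (an arbitrary value; tune it for your
+hardware), then rebuild the initramfs so the setting takes effect::
+
+ echo ZFS_INITRD_PRE_MOUNTROOT_SLEEP=10 >> /etc/default/zfs
+ update-initramfs -u -k all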
QEMU/KVM/XEN
~~~~~~~~~~~~
@@ -1053,3 +1085,9 @@ Uncomment these lines:
::
sudo systemctl restart libvirtd.service
+
+VMware
+~~~~~~
+
+- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vSphere configuration.
+ Doing this ensures that ``/dev/disk`` aliases are created in the guest.
diff --git a/docs/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst b/docs/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst
index b9fcde5..5c34bd8 100644
--- a/docs/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst
+++ b/docs/Getting Started/Ubuntu/Ubuntu 20.04 Root on ZFS.rst
@@ -24,9 +24,10 @@ is far easier and faster than doing everything by hand.
If you want a ZFS native encrypted, desktop install, you can `trivially edit
the installer
`__.
-The ``-o recordsize=1M`` there is unrelated to encryption; omit that unless you
-understand it. `Hopefully the installer will gain encryption support in the
-future `__.
+The ``-o recordsize=1M`` there is unrelated to encryption; omit that unless
+you understand it. `Hopefully the installer will gain encryption support in
+the future
+`__.
If you want to setup a mirror or raidz topology, use LUKS encryption, and/or
install a server (no desktop GUI), use this HOWTO.
@@ -44,15 +45,15 @@ System Requirements
- `Ubuntu 20.04 (“Focal”) Desktop CD
`__
(*not* any server images)
-- Installing on a drive which presents 4 KiB logical sectors (a “4Kn”
- drive) only works with UEFI booting. This not unique to ZFS. `GRUB
- does not and will not work on 4Kn with legacy (BIOS)
- booting. `__
+- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive)
+ only works with UEFI booting. This is not unique to ZFS. `GRUB does not and
+ will not work on 4Kn with legacy (BIOS) booting.
+ `__
-Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of
-memory is recommended for normal performance in basic workloads. If you
-wish to use deduplication, you will need `massive amounts of
-RAM `__. Enabling
+Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory
+is recommended for normal performance in basic workloads. If you wish to use
+deduplication, you will need `massive amounts of RAM
+`__. Enabling
deduplication is a permanent change that cannot be easily reverted.
Support
@@ -120,833 +121,860 @@ encrypted once per disk.
Step 1: Prepare The Install Environment
---------------------------------------
-1.1 Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to
-the Internet as appropriate (e.g. join your WiFi network). Open a
-terminal (press Ctrl-Alt-T).
+#. Boot the Ubuntu Live CD. Select Try Ubuntu. Connect your system to the
+ Internet as appropriate (e.g. join your WiFi network). Open a terminal
+ (press Ctrl-Alt-T).
-1.2 Setup and update the repositories::
+#. Set up and update the repositories::
- sudo apt-add-repository universe
- sudo apt update
+ sudo apt-add-repository universe
+ sudo apt update
-1.3 Optional: Install and start the OpenSSH server in the Live CD
-environment:
+#. Optional: Install and start the OpenSSH server in the Live CD environment:
-If you have a second system, using SSH to access the target system can
-be convenient::
+ If you have a second system, using SSH to access the target system can be
+ convenient::
- passwd
- # There is no current password; hit enter at that prompt.
- sudo apt install --yes openssh-server
+ passwd
+ # There is no current password; hit enter at that prompt.
+ sudo apt install --yes openssh-server
-**Hint:** You can find your IP address with
-``ip addr show scope global | grep inet``. Then, from your main machine,
-connect with ``ssh ubuntu@IP``.
+ **Hint:** You can find your IP address with
+ ``ip addr show scope global | grep inet``. Then, from your main machine,
+ connect with ``ssh ubuntu@IP``.
-1.4 Become root::
+#. Become root::
- sudo -i
+ sudo -i
-1.5 Install ZFS in the Live CD environment::
+#. Install ZFS in the Live CD environment::
- apt install --yes debootstrap gdisk zfs-initramfs
- systemctl stop zed
+ apt install --yes debootstrap gdisk zfs-initramfs
+ systemctl stop zed
Step 2: Disk Formatting
-----------------------
-2.1 Set a variable with the disk name::
+#. Set a variable with the disk name::
- DISK=/dev/disk/by-id/scsi-SATA_disk1
+ DISK=/dev/disk/by-id/scsi-SATA_disk1
-Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the
-``/dev/sd*`` device nodes directly can cause sporadic import failures,
-especially on systems that have more than one storage pool.
+ Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the
+ ``/dev/sd*`` device nodes directly can cause sporadic import failures,
+ especially on systems that have more than one storage pool.
-**Hints:**
+ **Hints:**
-- ``ls -la /dev/disk/by-id`` will list the aliases.
-- Are you doing this in a virtual machine? If your virtual disk is
- missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using
- KVM with virtio; otherwise, read the
- `troubleshooting <#troubleshooting>`__ section.
+ - ``ls -la /dev/disk/by-id`` will list the aliases.
+ - Are you doing this in a virtual machine? If your virtual disk is missing
+ from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with
+ virtio; otherwise, read the `troubleshooting <#troubleshooting>`__
+ section.
-2.2 If you are re-using a disk, clear it as necessary:
+#. If you are re-using a disk, clear it as necessary:
-If the disk was previously used in an MD array, zero the superblock::
+ If the disk was previously used in an MD array, zero the superblock::
- apt install --yes mdadm
- mdadm --zero-superblock --force $DISK
+ apt install --yes mdadm
+ mdadm --zero-superblock --force $DISK
-Clear the partition table::
+ Clear the partition table::
- sgdisk --zap-all $DISK
+ sgdisk --zap-all $DISK
-2.3 Create bootloader partition(s)::
+#. Create bootloader partition(s)::
- sgdisk -n1:1M:+512M -t1:EF00 $DISK
+ sgdisk -n1:1M:+512M -t1:EF00 $DISK
-**Note:** This partition is setup for UEFI support. For legacy (BIOS) booting,
-this will allow you to move the disk(s) to a new system/motherboard in the
-future without having to rebuild the pool (and restore your data from a
-backup). Additionally, this is used for `/boot/grub` in single-disk scenarios,
-as discussed below.
+ **Note:** This partition is set up for UEFI support. For legacy (BIOS)
+ booting, this will allow you to move the disk(s) to a new
+ system/motherboard in the future without having to rebuild the pool (and
+ restore your data from a backup). Additionally, this is used for
+ ``/boot/grub`` in single-disk scenarios, as discussed below.
-For legacy (BIOS) booting::
+ For legacy (BIOS) booting::
- sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK
+ sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK
-**Note:** For simplicity and forward compatibility, this HOWTO uses GPT
-partition labels for both UEFI and legacy (BIOS) booting. The Ubuntu installer
-uses an MBR label for legacy (BIOS) booting.
+ **Note:** For simplicity and forward compatibility, this HOWTO uses GPT
+ partition labels for both UEFI and legacy (BIOS) booting. The Ubuntu
+ installer uses an MBR label for legacy (BIOS) booting.
-2.4 Create a partition for swap:
+#. Create a partition for swap:
-Previous versions of this HOWTO put swap on a zvol. `Ubuntu recommends against
-this configuration due to deadlocks.
-`__ There is
-`a bug report upstream `__.
+ Previous versions of this HOWTO put swap on a zvol. `Ubuntu recommends
+ against this configuration due to deadlocks.
+ `__ There
+ is `a bug report upstream
+ `__.
-Putting swap on a partition gives up the benefit of ZFS checksums (for your
-swap). That is probably the right trade-off given the reports of ZFS deadlocks
-with swap. If you are bothered by this, simply do not enable swap.
+ Putting swap on a partition gives up the benefit of ZFS checksums (for your
+ swap). That is probably the right trade-off given the reports of ZFS
+ deadlocks with swap. If you are bothered by this, simply do not enable
+ swap.
-Choose one of the following options if you want swap:
+ Choose one of the following options if you want swap:
-2.4a For a single-disk install::
+ - For a single-disk install::
- sgdisk -n2:0:+500M -t2:8200 $DISK
+ sgdisk -n2:0:+500M -t2:8200 $DISK
-2.4b For a mirror or raidz topology::
+ - For a mirror or raidz topology::
- sgdisk -n2:0:+500M -t2:FD00 $DISK
+ sgdisk -n2:0:+500M -t2:FD00 $DISK
-2.5 Create a boot pool partition::
+#. Create a boot pool partition::
- sgdisk -n3:0:+2G -t3:BE00 $DISK
+ sgdisk -n3:0:+2G -t3:BE00 $DISK
-The Ubuntu installer uses 5% of the disk space constrained to a minimum of
-500 MiB and a maximum of 2 GiB. `Making this too small (and 500 MiB might be
-too small) can result in an inability to upgrade the kernel.
-`__
+ The Ubuntu installer uses 5% of the disk space constrained to a minimum of
+ 500 MiB and a maximum of 2 GiB. `Making this too small (and 500 MiB might
+ be too small) can result in an inability to upgrade the kernel.
+ `__
-2.6 Create a root pool partition:
+#. Create a root pool partition:
-Choose one of the following options:
+ Choose one of the following options:
-2.6a Unencrypted or ZFS native encryption::
+ - Unencrypted or ZFS native encryption::
- sgdisk -n4:0:0 -t4:BF00 $DISK
+ sgdisk -n4:0:0 -t4:BF00 $DISK
-2.6b LUKS::
+ - LUKS::
- sgdisk -n4:0:0 -t4:8309 $DISK
+ sgdisk -n4:0:0 -t4:8309 $DISK
-If you are creating a mirror or raidz topology, repeat the partitioning
-commands for all the disks which will be part of the pool.
+ If you are creating a mirror or raidz topology, repeat the partitioning
+ commands for all the disks which will be part of the pool.
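+
+ For example, a sketch for a second disk, assuming the UEFI-only, mirrored
+ swap, and unencrypted (or ZFS native encryption) choices above; adjust the
+ type codes to match the options you chose::
+
+ DISK2=/dev/disk/by-id/scsi-SATA_disk2
+ sgdisk -n1:1M:+512M -t1:EF00 $DISK2
+ sgdisk -n2:0:+500M -t2:FD00 $DISK2
+ sgdisk -n3:0:+2G -t3:BE00 $DISK2
+ sgdisk -n4:0:0 -t4:BF00 $DISK2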
-2.7 Create the boot pool::
+#. Create the boot pool::
- zpool create -o ashift=12 -d \
- -o feature@async_destroy=enabled \
- -o feature@bookmarks=enabled \
- -o feature@embedded_data=enabled \
- -o feature@empty_bpobj=enabled \
- -o feature@enabled_txg=enabled \
- -o feature@extensible_dataset=enabled \
- -o feature@filesystem_limits=enabled \
- -o feature@hole_birth=enabled \
- -o feature@large_blocks=enabled \
- -o feature@lz4_compress=enabled \
- -o feature@spacemap_histogram=enabled \
- -o feature@zpool_checkpoint=enabled \
- -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
- -O normalization=formD -O relatime=on -O xattr=sa \
- -O mountpoint=/boot -R /mnt bpool ${DISK}-part3
+ zpool create \
+ -o ashift=12 -d \
+ -o feature@async_destroy=enabled \
+ -o feature@bookmarks=enabled \
+ -o feature@embedded_data=enabled \
+ -o feature@empty_bpobj=enabled \
+ -o feature@enabled_txg=enabled \
+ -o feature@extensible_dataset=enabled \
+ -o feature@filesystem_limits=enabled \
+ -o feature@hole_birth=enabled \
+ -o feature@large_blocks=enabled \
+ -o feature@lz4_compress=enabled \
+ -o feature@spacemap_histogram=enabled \
+ -o feature@zpool_checkpoint=enabled \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
+ -O mountpoint=/boot -R /mnt \
+ bpool ${DISK}-part3
-You should not need to customize any of the options for the boot pool.
+ You should not need to customize any of the options for the boot pool.
-GRUB does not support all of the zpool features. See
-``spa_feature_names`` in
-`grub-core/fs/zfs/zfs.c `__.
-This step creates a separate boot pool for ``/boot`` with the features
-limited to only those that GRUB supports, allowing the root pool to use
-any/all features. Note that GRUB opens the pool read-only, so all
-read-only compatible features are “supported” by GRUB.
+ GRUB does not support all of the zpool features. See ``spa_feature_names``
+ in `grub-core/fs/zfs/zfs.c
+ `__.
+ This step creates a separate boot pool for ``/boot`` with the features
+ limited to only those that GRUB supports, allowing the root pool to use
+ any/all features. Note that GRUB opens the pool read-only, so all
+ read-only compatible features are “supported” by GRUB.
-**Hints:**
+ **Hints:**
-- If you are creating a mirror or raidz topology, create the pool using
- ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3``
- (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and
- list the partitions from additional disks).
-- The pool name is arbitrary. If changed, the new name must be used
- consistently. The ``bpool`` convention originated in this HOWTO.
+ - If you are creating a mirror topology, create the pool using::
-**Feature Notes:**
+ zpool create \
+ ... \
+ bpool mirror \
+ /dev/disk/by-id/scsi-SATA_disk1-part3 \
+ /dev/disk/by-id/scsi-SATA_disk2-part3
-- The ``allocation_classes`` feature should be safe to use. However, unless
- one is using it (i.e. a ``special`` vdev), there is no point to enabling it.
- It is extremely unlikely that someone would use this feature for a boot
- pool. If one cares about speeding up the boot pool, it would make more sense
- to put the whole pool on the faster disk rather than using it as a
- ``special`` vdev.
-- The ``project_quota`` feature has been tested and is safe to use. This
- feature is extremely unlikely to matter for the boot pool.
-- The ``resilver_defer`` should be safe but the boot pool is small enough that
- it is unlikely to be necessary.
-- The ``spacemap_v2`` feature has been tested and is safe to use. The boot
- pool is small, so this does not matter in practice.
-- As a read-only compatible feature, the ``userobj_accounting`` feature should
- be compatible in theory, but in practice, GRUB can fail with an “invalid
- dnode type” error. This feature does not matter for ``/boot`` anyway.
-- The ``zpool_checkpoint`` feature has been tested and is safe to use. The
- Ubuntu installer does not use it. This HOWTO does, as the feature may be
- desirable for the boot pool.
+ - For raidz topologies, replace ``mirror`` in the above command with
+ ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from
+ additional disks.
+ - The pool name is arbitrary. If changed, the new name must be used
+ consistently. The ``bpool`` convention originated in this HOWTO.
-2.8 Create the root pool:
+ **Feature Notes:**
-Choose one of the following options:
+ - The ``allocation_classes`` feature should be safe to use. However, unless
+ one is using it (i.e. a ``special`` vdev), there is no point to enabling
+ it. It is extremely unlikely that someone would use this feature for a
+ boot pool. If one cares about speeding up the boot pool, it would make
+ more sense to put the whole pool on the faster disk rather than using it
+ as a ``special`` vdev.
+ - The ``project_quota`` feature has been tested and is safe to use. This
+ feature is extremely unlikely to matter for the boot pool.
+ - The ``resilver_defer`` feature should be safe, but the boot pool is small
+ enough that it is unlikely to be necessary.
+ - The ``spacemap_v2`` feature has been tested and is safe to use. The boot
+ pool is small, so this does not matter in practice.
+ - As a read-only compatible feature, the ``userobj_accounting`` feature
+ should be compatible in theory, but in practice, GRUB can fail with an
+ “invalid dnode type” error. This feature does not matter for ``/boot``
+ anyway.
+ - The ``zpool_checkpoint`` feature has been tested and is safe to use. The
+ Ubuntu installer does not use it. This HOWTO does, as the feature may be
+ desirable for the boot pool.
-2.8a Unencrypted::
+#. Create the root pool:
- zpool create -o ashift=12 \
- -O acltype=posixacl -O canmount=off -O compression=lz4 \
- -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
- -O mountpoint=/ -R /mnt rpool ${DISK}-part4
+ Choose one of the following options:
-2.8b ZFS native encryption::
+ - Unencrypted::
- zpool create -o ashift=12 \
- -O acltype=posixacl -O canmount=off -O compression=lz4 \
- -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
- -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
- -O mountpoint=/ -R /mnt rpool ${DISK}-part4
+ zpool create \
+ -o ashift=12 \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O dnodesize=auto -O normalization=formD -O relatime=on \
+ -O xattr=sa -O mountpoint=/ -R /mnt \
+ rpool ${DISK}-part4
-2.8c LUKS::
+ - ZFS native encryption::
- cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
- cryptsetup luksOpen ${DISK}-part4 luks1
- zpool create -o ashift=12 \
- -O acltype=posixacl -O canmount=off -O compression=lz4 \
- -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
- -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
+ zpool create \
+ -o ashift=12 \
+ -O encryption=aes-256-gcm \
+ -O keylocation=prompt -O keyformat=passphrase \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O dnodesize=auto -O normalization=formD -O relatime=on \
+ -O xattr=sa -O mountpoint=/ -R /mnt \
+ rpool ${DISK}-part4
-**Notes:**
+ - LUKS::
-- The use of ``ashift=12`` is recommended here because many drives
- today have 4 KiB (or larger) physical sectors, even though they
- present 512 B logical sectors. Also, a future replacement drive may
- have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
- or 4 KiB logical sectors (in which case ``ashift=12`` is required).
-- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
- do not want this, remove that option, but later add
- ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
- for ``/var/log``, as `journald requires
- ACLs `__
-- Setting ``normalization=formD`` eliminates some corner cases relating
- to UTF-8 filename normalization. It also implies ``utf8only=on``,
- which means that only UTF-8 filenames are allowed. If you care to
- support non-UTF-8 filenames, do not use this option. For a discussion
- of why requiring UTF-8 filenames may be a bad idea, see `The problems
- with enforced UTF-8 only
- filenames `__.
-- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you want to
- tune it (e.g. ``-o recordsize=1M``), see `these
- `__ `various
- `__ `blog
- `__
- `posts
- `__.
-- Setting ``relatime=on`` is a middle ground between classic POSIX
- ``atime`` behavior (with its significant performance impact) and
- ``atime=off`` (which provides the best performance by completely
- disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
- the default for other filesystems. See `RedHat’s
- documentation `__
- for further information.
-- Setting ``xattr=sa`` `vastly improves the performance of extended
- attributes `__.
- Inside ZFS, extended attributes are used to implement POSIX ACLs.
- Extended attributes can also be used by user-space applications.
- `They are used by some desktop GUI
- applications. `__
- `They can be used by Samba to store Windows ACLs and DOS attributes;
- they are required for a Samba Active Directory domain
- controller. `__
- Note that ``xattr=sa`` is
- `Linux-specific `__.
- If you move your ``xattr=sa`` pool to another OpenZFS implementation
- besides ZFS-on-Linux, extended attributes will not be readable
- (though your data will be). If portability of extended attributes is
- important to you, omit the ``-O xattr=sa`` above. Even if you do not
- want ``xattr=sa`` for the whole pool, it is probably fine to use it
- for ``/var/log``.
-- Make sure to include the ``-part4`` portion of the drive path. If you
- forget that, you are specifying the whole disk, which ZFS will then
- re-partition, and you will lose the bootloader partition(s).
-- ZFS native encryption defaults to ``aes-256-ccm``, but `the default has
- changed upstream `__
- to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM
- `__,
- `is faster now
- `__, and
- `will be even faster in the future
- `__.
-- For LUKS, the key size chosen is 512 bits. However, XTS mode requires
- two keys, so the LUKS key is split in half. Thus, ``-s 512`` means
- AES-256.
-- Your passphrase will likely be the weakest link. Choose wisely. See
- `section 5 of the cryptsetup
- FAQ `__
- for guidance.
+ cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
+ cryptsetup luksOpen ${DISK}-part4 luks1
+ zpool create \
+ -o ashift=12 \
+ -O acltype=posixacl -O canmount=off -O compression=lz4 \
+ -O dnodesize=auto -O normalization=formD -O relatime=on \
+ -O xattr=sa -O mountpoint=/ -R /mnt \
+ rpool /dev/mapper/luks1
-**Hints:**
+ **Notes:**
-- If you are creating a mirror or raidz topology, create the pool using
- ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4``
- (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and
- list the partitions from additional disks). For LUKS, use
- ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will
- have to create using ``cryptsetup``.
-- The pool name is arbitrary. If changed, the new name must be used
- consistently. On systems that can automatically install to ZFS, the
- root pool is named ``rpool`` by default.
+ - The use of ``ashift=12`` is recommended here because many drives
+ today have 4 KiB (or larger) physical sectors, even though they
+ present 512 B logical sectors. Also, a future replacement drive may
+ have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
+ or 4 KiB logical sectors (in which case ``ashift=12`` is required).
+ - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
+ do not want this, remove that option, but later add
+ ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
+ for ``/var/log``, as `journald requires ACLs
+ `__.
+ - Setting ``normalization=formD`` eliminates some corner cases relating
+ to UTF-8 filename normalization. It also implies ``utf8only=on``,
+ which means that only UTF-8 filenames are allowed. If you care to
+ support non-UTF-8 filenames, do not use this option. For a discussion
+ of why requiring UTF-8 filenames may be a bad idea, see `The problems
+ with enforced UTF-8 only filenames
+ `__.
+ - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you
+ want to tune it (e.g. ``-o recordsize=1M``), see `these
+ `__ `various
+ `__ `blog
+ `__
+ `posts
+ `__.
+ - Setting ``relatime=on`` is a middle ground between classic POSIX
+ ``atime`` behavior (with its significant performance impact) and
+ ``atime=off`` (which provides the best performance by completely
+ disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
+ the default for other filesystems. See `RedHat’s documentation
+ `__
+ for further information.
+ - Setting ``xattr=sa`` `vastly improves the performance of extended
+ attributes
+ `__.
+ Inside ZFS, extended attributes are used to implement POSIX ACLs.
+ Extended attributes can also be used by user-space applications.
+ `They are used by some desktop GUI applications.
+ `__
+ `They can be used by Samba to store Windows ACLs and DOS attributes;
+ they are required for a Samba Active Directory domain controller.
+ `__
+ Note that ``xattr=sa`` is `Linux-specific
+ `__. If you move your
+ ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux,
+ extended attributes will not be readable (though your data will be). If
+ portability of extended attributes is important to you, omit the
+ ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole
+ pool, it is probably fine to use it for ``/var/log``.
+ - Make sure to include the ``-part4`` portion of the drive path. If you
+ forget that, you are specifying the whole disk, which ZFS will then
+ re-partition, and you will lose the bootloader partition(s).
+ - ZFS native encryption defaults to ``aes-256-ccm``, but `the default has
+ changed upstream
+ `__
+ to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM
+ `__,
+ `is faster now
+ `__,
+ and `will be even faster in the future
+ `__.
+ - For LUKS, the key size chosen is 512 bits. However, XTS mode requires two
+ keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256.
+ - Your passphrase will likely be the weakest link. Choose wisely. See
+ `section 5 of the cryptsetup FAQ
+ `__
+ for guidance.
+
+ **Hints:**
+
+ - If you are creating a mirror topology, create the pool using::
+
+ zpool create \
+ ... \
+ rpool mirror \
+ /dev/disk/by-id/scsi-SATA_disk1-part4 \
+ /dev/disk/by-id/scsi-SATA_disk2-part4
+
+ - For raidz topologies, replace ``mirror`` in the above command with
+ ``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from
+ additional disks.
+ - When using LUKS with mirror or raidz topologies, use
+ ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have
+ to create using ``cryptsetup`` (see the sketch after this list).
+ - The pool name is arbitrary. If changed, the new name must be used
+ consistently. On systems that can automatically install to ZFS, the root
+ pool is named ``rpool`` by default.
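+
+ For example, to prepare a second LUKS volume before creating a mirrored
+ root pool (a sketch mirroring the ``luks1`` commands above; assumes
+ ``$DISK2`` points at the second disk)::
+
+ cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK2}-part4
+ cryptsetup luksOpen ${DISK2}-part4 luks2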
Step 3: System Installation
---------------------------
-3.1 Create filesystem datasets to act as containers::
+#. Create filesystem datasets to act as containers::
- zfs create -o canmount=off -o mountpoint=none rpool/ROOT
- zfs create -o canmount=off -o mountpoint=none bpool/BOOT
+ zfs create -o canmount=off -o mountpoint=none rpool/ROOT
+ zfs create -o canmount=off -o mountpoint=none bpool/BOOT
-3.2 Create filesystem datasets for the root and boot filesystems::
+#. Create filesystem datasets for the root and boot filesystems::
- UUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |
- tr -dc 'a-z0-9' | cut -c-6)
+ UUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |
+ tr -dc 'a-z0-9' | cut -c-6)
- zfs create -o canmount=noauto -o mountpoint=/ \
- -o com.ubuntu.zsys:bootfs=yes \
- -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
- zfs mount rpool/ROOT/ubuntu_$UUID
+ zfs create -o canmount=noauto -o mountpoint=/ \
+ -o com.ubuntu.zsys:bootfs=yes \
+ -o com.ubuntu.zsys:last-used=$(date +%s) rpool/ROOT/ubuntu_$UUID
+ zfs mount rpool/ROOT/ubuntu_$UUID
- zfs create -o canmount=noauto -o mountpoint=/boot \
- bpool/BOOT/ubuntu_$UUID
- zfs mount bpool/BOOT/ubuntu_$UUID
+ zfs create -o canmount=noauto -o mountpoint=/boot \
+ bpool/BOOT/ubuntu_$UUID
+ zfs mount bpool/BOOT/ubuntu_$UUID
-With ZFS, it is not normally necessary to use a mount command (either
-``mount`` or ``zfs mount``). This situation is an exception because of
-``canmount=noauto``.
+ With ZFS, it is not normally necessary to use a mount command (either
+ ``mount`` or ``zfs mount``). This situation is an exception because of
+ ``canmount=noauto``.
-3.3 Create datasets::
+#. Create datasets::
- zfs create -o com.ubuntu.zsys:bootfs=no \
- rpool/ROOT/ubuntu_$UUID/srv
- zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
- rpool/ROOT/ubuntu_$UUID/usr
- zfs create rpool/ROOT/ubuntu_$UUID/usr/local
- zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
- rpool/ROOT/ubuntu_$UUID/var
- zfs create rpool/ROOT/ubuntu_$UUID/var/games
- zfs create rpool/ROOT/ubuntu_$UUID/var/lib
- zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountServices
- zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
- zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
- zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
- zfs create rpool/ROOT/ubuntu_$UUID/var/log
- zfs create rpool/ROOT/ubuntu_$UUID/var/mail
- zfs create rpool/ROOT/ubuntu_$UUID/var/snap
- zfs create rpool/ROOT/ubuntu_$UUID/var/spool
- zfs create rpool/ROOT/ubuntu_$UUID/var/www
+ zfs create -o com.ubuntu.zsys:bootfs=no \
+ rpool/ROOT/ubuntu_$UUID/srv
+ zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
+ rpool/ROOT/ubuntu_$UUID/usr
+ zfs create rpool/ROOT/ubuntu_$UUID/usr/local
+ zfs create -o com.ubuntu.zsys:bootfs=no -o canmount=off \
+ rpool/ROOT/ubuntu_$UUID/var
+ zfs create rpool/ROOT/ubuntu_$UUID/var/games
+ zfs create rpool/ROOT/ubuntu_$UUID/var/lib
+ zfs create rpool/ROOT/ubuntu_$UUID/var/lib/AccountServices
+ zfs create rpool/ROOT/ubuntu_$UUID/var/lib/apt
+ zfs create rpool/ROOT/ubuntu_$UUID/var/lib/dpkg
+ zfs create rpool/ROOT/ubuntu_$UUID/var/lib/NetworkManager
+ zfs create rpool/ROOT/ubuntu_$UUID/var/log
+ zfs create rpool/ROOT/ubuntu_$UUID/var/mail
+ zfs create rpool/ROOT/ubuntu_$UUID/var/snap
+ zfs create rpool/ROOT/ubuntu_$UUID/var/spool
+ zfs create rpool/ROOT/ubuntu_$UUID/var/www
- zfs create -o canmount=off -o mountpoint=/ \
- rpool/USERDATA
- zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
- -o canmount=on -o mountpoint=/root \
- rpool/USERDATA/root_$UUID
+ zfs create -o canmount=off -o mountpoint=/ \
+ rpool/USERDATA
+ zfs create -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_$UUID \
+ -o canmount=on -o mountpoint=/root \
+ rpool/USERDATA/root_$UUID
-For a mirror or raidz topology, create a dataset for ``/boot/grub``::
+ For a mirror or raidz topology, create a dataset for ``/boot/grub``::
- zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
+ zfs create -o com.ubuntu.zsys:bootfs=no bpool/grub
-A tmpfs is recommended later, but if you want a separate dataset for
-``/tmp``::
+ A tmpfs is recommended later, but if you want a separate dataset for
+ ``/tmp``::
- zfs create -o com.ubuntu.zsys:bootfs=no \
- rpool/ROOT/ubuntu_$UUID/tmp
- chmod 1777 /mnt/tmp
+ zfs create -o com.ubuntu.zsys:bootfs=no \
+ rpool/ROOT/ubuntu_$UUID/tmp
+ chmod 1777 /mnt/tmp
-The primary goal of this dataset layout is to separate the OS from user data.
-This allows the root filesystem to be rolled back without rolling back user
-data.
+ The primary goal of this dataset layout is to separate the OS from user
+ data. This allows the root filesystem to be rolled back without rolling
+ back user data.
-If you do nothing extra, ``/tmp`` will be stored as part of the root
-filesystem. Alternatively, you can create a separate dataset for
-``/tmp``, as shown above. This keeps the ``/tmp`` data out of snapshots
-of your root filesystem. It also allows you to set a quota on
-``rpool/tmp``, if you want to limit the maximum space used. Otherwise,
-you can use a tmpfs (RAM filesystem) later.
+ If you do nothing extra, ``/tmp`` will be stored as part of the root
+ filesystem. Alternatively, you can create a separate dataset for ``/tmp``,
+ as shown above. This keeps the ``/tmp`` data out of snapshots of your root
+ filesystem. It also allows you to set a quota on the ``/tmp`` dataset (as
+ shown below), if you want to limit the maximum space used. Otherwise, you
+ can use a tmpfs (RAM filesystem) later.
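+
+ For example, to cap the separate ``/tmp`` dataset at 5 GiB (an arbitrary
+ figure)::
+
+ zfs set quota=5G rpool/ROOT/ubuntu_$UUID/tmp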
-3.4 Install the minimal system::
+#. Install the minimal system::
- debootstrap focal /mnt
+ debootstrap focal /mnt
-The ``debootstrap`` command leaves the new system in an unconfigured
-state. An alternative to using ``debootstrap`` is to copy the entirety
-of a working system into the new ZFS root.
+ The ``debootstrap`` command leaves the new system in an unconfigured state.
+ An alternative to using ``debootstrap`` is to copy the entirety of a
+ working system into the new ZFS root.
Step 4: System Configuration
----------------------------
-4.1 Configure the hostname:
+#. Configure the hostname:
-Replace ``HOSTNAME`` with the desired hostname::
+ Replace ``HOSTNAME`` with the desired hostname::
- echo HOSTNAME > /mnt/etc/hostname
- vi /mnt/etc/hosts
+ echo HOSTNAME > /mnt/etc/hostname
+ vi /mnt/etc/hosts
-.. code-block:: text
+ .. code-block:: text
- Add a line:
- 127.0.1.1 HOSTNAME
- or if the system has a real name in DNS:
- 127.0.1.1 FQDN HOSTNAME
+ Add a line:
+ 127.0.1.1 HOSTNAME
+ or if the system has a real name in DNS:
+ 127.0.1.1 FQDN HOSTNAME
-**Hint:** Use ``nano`` if you find ``vi`` confusing.
+ **Hint:** Use ``nano`` if you find ``vi`` confusing.
-4.2 Configure the network interface:
+#. Configure the network interface:
-Find the interface name::
+ Find the interface name::
- ip addr show
+ ip addr show
-Adjust NAME below to match your interface name::
+ Adjust ``NAME`` below to match your interface name::
- vi /mnt/etc/netplan/01-netcfg.yaml
+ vi /mnt/etc/netplan/01-netcfg.yaml
-.. code-block:: yaml
+ .. code-block:: yaml
- network:
- version: 2
- ethernets:
- NAME:
- dhcp4: true
+ network:
+ version: 2
+ ethernets:
+ NAME:
+ dhcp4: true
-Customize this file if the system is not a DHCP client.
+ Customize this file if the system is not a DHCP client.
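+
+ For example, a static configuration (a sketch; the addresses below are
+ placeholders for your actual network):
+
+ .. code-block:: yaml
+
+ network:
+   version: 2
+   ethernets:
+     NAME:
+       dhcp4: false
+       addresses: [192.168.1.100/24]
+       gateway4: 192.168.1.1
+       nameservers:
+         addresses: [192.168.1.1]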
-4.3 Configure the package sources::
+#. Configure the package sources::
- vi /mnt/etc/apt/sources.list
+ vi /mnt/etc/apt/sources.list
-.. code-block:: sourceslist
+ .. code-block:: sourceslist
- deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
- deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
- deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse
- deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse
+ deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
+ deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
+ deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe multiverse
+ deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse
-4.4 Bind the virtual filesystems from the LiveCD environment to the new
-system and ``chroot`` into it::
+#. Bind the virtual filesystems from the LiveCD environment to the new
+ system and ``chroot`` into it::
- mount --rbind /dev /mnt/dev
- mount --rbind /proc /mnt/proc
- mount --rbind /sys /mnt/sys
- chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
+ mount --rbind /dev /mnt/dev
+ mount --rbind /proc /mnt/proc
+ mount --rbind /sys /mnt/sys
+ chroot /mnt /usr/bin/env DISK=$DISK UUID=$UUID bash --login
-**Note:** This is using ``--rbind``, not ``--bind``.
+ **Note:** This is using ``--rbind``, not ``--bind``.
-4.5 Configure a basic system environment::
+#. Configure a basic system environment::
- apt update
+ apt update
- dpkg-reconfigure locales
+ dpkg-reconfigure locales
-Even if you prefer a non-English system language, always ensure that
-``en_US.UTF-8`` is available::
+ Even if you prefer a non-English system language, always ensure that
+ ``en_US.UTF-8`` is available. Then, configure the timezone::
- dpkg-reconfigure tzdata
+ dpkg-reconfigure tzdata
-If you prefer ``nano`` over ``vi``, install it::
+ If you prefer ``nano`` over ``vi``, install it::
- apt install --yes nano
+ apt install --yes nano
-4.6 For LUKS installs only, setup ``/etc/crypttab``::
+#. For LUKS installs only, set up ``/etc/crypttab``::
- apt install --yes cryptsetup
+ apt install --yes cryptsetup
- echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
- luks,discard,initramfs > /etc/crypttab
+ echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
+ luks,discard,initramfs > /etc/crypttab
-The use of ``initramfs`` is a work-around for `cryptsetup does not support ZFS
-`__.
+ The use of ``initramfs`` is a work-around for the fact that `cryptsetup
+ does not support ZFS `__.
-**Hint:** If you are creating a mirror or raidz topology, repeat the
-``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk.
+ **Hint:** If you are creating a mirror or raidz topology, repeat the
+ ``/etc/crypttab`` entries for ``luks2``, etc., adjusting for each disk.
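+
+ For example, for a second disk (a sketch; assumes you have set ``DISK2``
+ to the second disk's path)::
+
+ echo luks2 UUID=$(blkid -s UUID -o value ${DISK2}-part4) none \
+ luks,discard,initramfs >> /etc/crypttab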
-4.7 Create the EFI filesystem:
+#. Create the EFI filesystem:
-Perform these steps for both UEFI and legacy (BIOS) booting::
+ Perform these steps for both UEFI and legacy (BIOS) booting::
- apt install --yes dosfstools
+ apt install --yes dosfstools
- mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1
- mkdir /boot/efi
- echo UUID=$(blkid -s UUID -o value ${DISK}-part1) \
- /boot/efi vfat umask=0022,fmask=0022,dmask=0022 0 1 >> /etc/fstab
- mount /boot/efi
+ mkdosfs -F 32 -s 1 -n EFI ${DISK}-part1
+ mkdir /boot/efi
+ echo UUID=$(blkid -s UUID -o value ${DISK}-part1) \
+ /boot/efi vfat umask=0022,fmask=0022,dmask=0022 0 1 >> /etc/fstab
+ mount /boot/efi
-For a mirror or raidz topology, repeat these steps for the additional disks,
-using ``/boot/efi2``, ``/boot/efi3``, etc.
+ For a mirror or raidz topology, repeat these steps for the additional
+ disks, using ``/boot/efi2``, ``/boot/efi3``, etc.
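+
+ For example, for the second disk (a sketch; assumes you have set
+ ``DISK2`` to the second disk's path)::
+
+ mkdosfs -F 32 -s 1 -n EFI ${DISK2}-part1
+ mkdir /boot/efi2
+ echo UUID=$(blkid -s UUID -o value ${DISK2}-part1) \
+ /boot/efi2 vfat umask=0022,fmask=0022,dmask=0022 0 1 >> /etc/fstab
+ mount /boot/efi2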
-**Note:** The ``-s 1`` for ``mkdosfs`` is only necessary for drives which
-present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size
-(given the partition size of 512 MiB) for FAT32. It also works fine on drives
-which present 512 B sectors.
+ **Note:** The ``-s 1`` for ``mkdosfs`` is only necessary for drives which
+ present 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster
+ size (given the partition size of 512 MiB) for FAT32. It also works fine on
+ drives which present 512 B sectors.
-4.8 Install GRUB/Linux/ZFS in the chroot environment for the new system:
+#. Install GRUB/Linux/ZFS in the chroot environment for the new system:
-For a single-disk install only::
+ For a single-disk install only::
- mkdir /boot/efi/grub /boot/grub
- echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab
- mount /boot/grub
+ mkdir /boot/efi/grub /boot/grub
+ echo /boot/efi/grub /boot/grub none defaults,bind 0 0 >> /etc/fstab
+ mount /boot/grub
-**Note:** This puts `/boot/grub` on the EFI System Partition. This allows GRUB
-to write to it, which means that `/boot/grub/grubenv` and the `recordfail`
-feature works as expected: if the boot fails, the normally hidden GRUB menu
-will be shown on the next boot. For a mirror or raidz topology, we do not want
-GRUB writing to the EFI System Partition. This is becase we duplicate it at
-install without a mechanism to update the copies when the GRUB configuration
-changes (e.g. as the kernel is upgraded). Thus, we keep `/boot/grub` on the
-boot pool for the mirror or raidz topologies. This preserves correct
-mirroring/raidz behavior, at the expense of being able to write to
-`/boot/grub/grubenv` and thus the `recordfail` behavior.
+ **Note:** This puts ``/boot/grub`` on the EFI System Partition. This allows
+ GRUB to write to it, which means that ``/boot/grub/grubenv`` and the
+ ``recordfail`` feature work as expected: if the boot fails, the normally
+ hidden GRUB menu will be shown on the next boot. For a mirror or raidz
+ topology, we do not want GRUB writing to the EFI System Partition. This is
+ because we duplicate it at install time without a mechanism to update the
+ copies when the GRUB configuration changes (e.g. as the kernel is
+ upgraded). Thus,
+ we keep ``/boot/grub`` on the boot pool for the mirror or raidz topologies.
+ This preserves correct mirroring/raidz behavior, at the expense of being
+ able to write to ``/boot/grub/grubenv`` and thus the ``recordfail``
+ behavior.
-Choose one of the following options:
+ Choose one of the following options:
-4.8a Install GRUB/Linux/ZFS for legacy (BIOS) booting::
+ - Install GRUB/Linux/ZFS for legacy (BIOS) booting::
- apt install --yes grub-pc linux-image-generic zfs-initramfs zsys
+ apt install --yes grub-pc linux-image-generic zfs-initramfs zsys
-Select (using the space bar) all of the disks (not partitions) in your pool.
+ Select (using the space bar) all of the disks (not partitions) in your
+ pool.
-4.8b Install GRUB/Linux/ZFS for UEFI booting::
+ - Install GRUB/Linux/ZFS for UEFI booting::
- apt install --yes \
- grub-efi-amd64 grub-efi-amd64-signed linux-image-generic shim-signed \
- zfs-initramfs zsys
+ apt install --yes \
+ grub-efi-amd64 grub-efi-amd64-signed linux-image-generic \
+ shim-signed zfs-initramfs zsys
-**Note:** For a mirror or raidz topology, this step only installs GRUB on the
-first disk. The other disk(s) will be handled later.
+ **Note:** For a mirror or raidz topology, this step only installs GRUB
+ on the first disk. The other disk(s) will be handled later.
-4.9 (Optional): Remove os-prober::
+#. Optional: Remove os-prober::
- dpkg --purge os-prober
+ dpkg --purge os-prober
-This avoids error messages from `update-grub`. `os-prober` is only necessary
-in dual-boot configurations.
+ This avoids error messages from ``update-grub``. ``os-prober`` is only
+ necessary in dual-boot configurations.
-4.10 Set a root password::
+#. Set a root password::
- passwd
+ passwd
-4.11 Configure swap:
+#. Configure swap:
-Choose one of the following options if you want swap:
+ Choose one of the following options if you want swap:
-4.11a For an unencrypted single-disk install::
+ - For an unencrypted single-disk install::
- mkswap -f ${DISK}-part2
- echo UUID=$(blkid -s UUID -o value ${DISK}-part2) \
- none swap discard 0 0 >> /etc/fstab
- swapon -a
+ mkswap -f ${DISK}-part2
+ echo UUID=$(blkid -s UUID -o value ${DISK}-part2) \
+ none swap discard 0 0 >> /etc/fstab
+ swapon -a
-4.11b For an unencrypted mirror or raidz topology::
+ - For an unencrypted mirror or raidz topology::
- apt install --yes mdadm
- # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and raid-devices
- # if necessary and specify the actual devices.
- mdadm --create /dev/md0 --metadata=1.2 --level=mirror --raid-devices=2 \
- ${DISK1}-part2 ${DISK2}-part2
- mkswap -f /dev/md0
- echo UUID=$(blkid -s UUID -o value /dev/md0) \
- none swap discard 0 0 >> /etc/fstab
- swapon -a
+ apt install --yes mdadm
+ # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
+ # raid-devices if necessary and specify the actual devices.
+ mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
+ --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
+ mkswap -f /dev/md0
+ echo UUID=$(blkid -s UUID -o value /dev/md0) \
+ none swap discard 0 0 >> /etc/fstab
+ swapon -a
-4.11c For an encrypted (LUKS or ZFS native encryption) single-disk install::
+ - For an encrypted (LUKS or ZFS native encryption) single-disk install::
- apt install --yes cryptsetup
- echo swap ${DISK}-part2 /dev/urandom \
- swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
- echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
+ apt install --yes cryptsetup
+ echo swap ${DISK}-part2 /dev/urandom \
+ swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
+ echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
-4.11d For an encrypted (LUKS or ZFS native encryption) mirror or raidz
-topology::
+ - For an encrypted (LUKS or ZFS native encryption) mirror or raidz
+ topology::
- apt install --yes cryptsetup mdadm
- # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and raid-devices
- # if necessary and specify the actual devices.
- mdadm --create /dev/md0 --metadata=1.2 --level=mirror --raid-devices=2 \
- ${DISK1}-part2 ${DISK2}-part2
- echo swap /dev/md0 /dev/urandom \
- swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
- echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
+ apt install --yes cryptsetup mdadm
+ # Adjust the level (ZFS raidz = MD raid5, raidz2 = raid6) and
+ # raid-devices if necessary and specify the actual devices.
+ mdadm --create /dev/md0 --metadata=1.2 --level=mirror \
+ --raid-devices=2 ${DISK1}-part2 ${DISK2}-part2
+ echo swap /dev/md0 /dev/urandom \
+ swap,cipher=aes-xts-plain64:sha256,size=512 >> /etc/crypttab
+ echo /dev/mapper/swap none swap defaults 0 0 >> /etc/fstab
-4.12 Optional (but recommended): Mount a tmpfs to ``/tmp``
+#. Optional (but recommended): Mount a tmpfs to ``/tmp``
-If you chose to create a ``/tmp`` dataset above, skip this step, as they
-are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
-tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
+ If you chose to create a ``/tmp`` dataset above, skip this step, as they
+ are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
+ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
-::
+ ::
- cp /usr/share/systemd/tmp.mount /etc/systemd/system/
- systemctl enable tmp.mount
+ cp /usr/share/systemd/tmp.mount /etc/systemd/system/
+ systemctl enable tmp.mount
-4.13 Setup system groups::
+#. Set up system groups::
- addgroup --system lpadmin
- addgroup --system lxd
- addgroup --system sambashare
+ addgroup --system lpadmin
+ addgroup --system lxd
+ addgroup --system sambashare
Step 5: GRUB Installation
-------------------------
-5.1 Verify that the ZFS boot filesystem is recognized::
+#. Verify that the ZFS boot filesystem is recognized::
- grub-probe /boot
+ grub-probe /boot
-5.2 Refresh the initrd files::
+#. Refresh the initrd files::
- update-initramfs -c -k all
+ update-initramfs -c -k all
-**Note:** When using LUKS, this will print “WARNING could not determine
-root device from /etc/fstab”. This is because `cryptsetup does not
-support ZFS
-`__.
+ **Note:** When using LUKS, this will print “WARNING could not determine
+ root device from /etc/fstab”. This is because `cryptsetup does not
+ support ZFS
+ `__.
-5.3 Disable memory zeroing::
+#. Disable memory zeroing::
- vi /etc/default/grub
- # Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT
- # Save and quit.
+ vi /etc/default/grub
+ # Add init_on_alloc=0 to: GRUB_CMDLINE_LINUX_DEFAULT
+ # Save and quit.
-This is to address `performance regressions
-`__.
+ This is to address `performance regressions
+ `__.
-5.4 Optional (but highly recommended): Make debugging GRUB easier::
+#. Optional (but highly recommended): Make debugging GRUB easier::
- vi /etc/default/grub
- # Comment out: GRUB_TIMEOUT_STYLE=hidden
- # Set: GRUB_TIMEOUT=5
- # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
- # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
- # Uncomment: GRUB_TERMINAL=console
- # Save and quit.
+ vi /etc/default/grub
+ # Comment out: GRUB_TIMEOUT_STYLE=hidden
+ # Set: GRUB_TIMEOUT=5
+ # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5
+ # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT
+ # Uncomment: GRUB_TERMINAL=console
+ # Save and quit.
-Later, once the system has rebooted twice and you are sure everything is
-working, you can undo these changes, if desired.
+ Later, once the system has rebooted twice and you are sure everything is
+ working, you can undo these changes, if desired.
-5.5 Update the boot configuration::
+#. Update the boot configuration::
- update-grub
+ update-grub
-**Note:** Ignore errors from ``osprober``, if present.
+ **Note:** Ignore errors from ``osprober``, if present.
-5.6 Install the boot loader:
+#. Install the boot loader:
-5.6a For legacy (BIOS) booting, install GRUB to the MBR::
+ #. For legacy (BIOS) booting, install GRUB to the MBR::
- grub-install $DISK
+ grub-install $DISK
-Note that you are installing GRUB to the whole disk, not a partition.
+ Note that you are installing GRUB to the whole disk, not a partition.
-If you are creating a mirror or raidz topology, repeat the
-``grub-install`` command for each disk in the pool.
+ If you are creating a mirror or raidz topology, repeat the ``grub-install``
+ command for each disk in the pool.
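+
+ For example (a sketch, assuming a second disk)::
+
+ grub-install /dev/disk/by-id/scsi-SATA_disk2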
-5.6b For UEFI booting, install GRUB::
+ #. For UEFI booting, install GRUB to the ESP::
- grub-install --target=x86_64-efi --efi-directory=/boot/efi \
- --bootloader-id=ubuntu --recheck --no-floppy
+ grub-install --target=x86_64-efi --efi-directory=/boot/efi \
+ --bootloader-id=ubuntu --recheck --no-floppy
-For a mirror or raidz topology, run this for the additional disk(s),
-incrementing the “2” to “3” and so on for both ``/boot/efi2`` and
-``ubuntu-2``::
+ For a mirror or raidz topology, run this for the additional disk(s),
+ incrementing the “2” to “3” and so on for both ``/boot/efi2`` and
+ ``ubuntu-2``::
- cp -a /boot/efi/EFI /boot/efi2
- grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
- --bootloader-id=ubuntu-2 --recheck --no-floppy
+ cp -a /boot/efi/EFI /boot/efi2
+ grub-install --target=x86_64-efi --efi-directory=/boot/efi2 \
+ --bootloader-id=ubuntu-2 --recheck --no-floppy
-5.7 Fix filesystem mount ordering:
+#. Fix filesystem mount ordering:
-We need to activate ``zfs-mount-generator``. This makes systemd aware of
-the separate mountpoints, which is important for things like
-``/var/log`` and ``/var/tmp``. In turn, ``rsyslog.service`` depends on
-``var-log.mount`` by way of ``local-fs.target`` and services using the
-``PrivateTmp`` feature of systemd automatically use
-``After=var-tmp.mount``.
+ We need to activate ``zfs-mount-generator``. This makes systemd aware of
+ the separate mountpoints, which is important for things like ``/var/log``
+ and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount``
+ by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature
+ of systemd automatically use ``After=var-tmp.mount``.
-::
+ ::
- mkdir /etc/zfs/zfs-list.cache
- touch /etc/zfs/zfs-list.cache/bpool
- touch /etc/zfs/zfs-list.cache/rpool
- ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
- zed -F &
+ mkdir /etc/zfs/zfs-list.cache
+ touch /etc/zfs/zfs-list.cache/bpool
+ touch /etc/zfs/zfs-list.cache/rpool
+ ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
+ zed -F &
-Verify that ``zed`` updated the cache by making sure these are not empty::
+ Verify that ``zed`` updated the cache by making sure these are not empty::
- cat /etc/zfs/zfs-list.cache/bpool
- cat /etc/zfs/zfs-list.cache/rpool
+ cat /etc/zfs/zfs-list.cache/bpool
+ cat /etc/zfs/zfs-list.cache/rpool
-If either is empty, force a cache update and check again::
+ If either is empty, force a cache update and check again::
- zfs set canmount=noauto bpool/BOOT/ubuntu_$UUID
- zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID
+ zfs set canmount=noauto bpool/BOOT/ubuntu_$UUID
+ zfs set canmount=noauto rpool/ROOT/ubuntu_$UUID
-Stop ``zed``::
+ Stop ``zed``::
- fg
- Press Ctrl-C.
+ fg
+ Press Ctrl-C.
-Fix the paths to eliminate ``/mnt``::
+ Fix the paths to eliminate ``/mnt``::
- sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
+ sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
Step 6: First Boot
------------------
-6.1 (Optional): Install SSH::
+#. Optional: Install SSH::
- apt install --yes openssh-server
+ apt install --yes openssh-server
-If you want to login as root via SSH, set ``PermitRootLogin yes`` in
-``/etc/ssh/sshd_config``. For security, undo this as soon as possible (i.e.
-once you have your regular user account setup).
+ If you want to login as root via SSH, set ``PermitRootLogin yes`` in
+ ``/etc/ssh/sshd_config``. For security, undo this as soon as possible
+ (i.e. once you have your regular user account set up).
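+
+ For example, using the ``vi`` convention from earlier steps::
+
+ vi /etc/ssh/sshd_config
+ # Set: PermitRootLogin yes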
-6.2 Exit from the ``chroot`` environment back to the LiveCD environment::
+#. Exit from the ``chroot`` environment back to the LiveCD environment::
- exit
+ exit
-6.3 Run these commands in the LiveCD environment to unmount all
-filesystems::
+#. Run these commands in the LiveCD environment to unmount all
+ filesystems::
- mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
- zpool export -a
+ mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+ xargs -i{} umount -lf {}
+ zpool export -a
-6.4 Reboot::
+#. Reboot::
- reboot
+ reboot
-Wait for the newly installed system to boot normally. Login as root.
+ Wait for the newly installed system to boot normally. Login as root.
-6.5 Create a user account:
+#. Create a user account:
-Replace ``username`` with your desired username::
+ Replace ``username`` with your desired username::
- UUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |
- tr -dc 'a-z0-9' | cut -c-6)
- ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
- zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
- -o canmount=on -o mountpoint=/home/username \
- rpool/USERDATA/username_$UUID
- adduser username
+ UUID=$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |
+ tr -dc 'a-z0-9' | cut -c-6)
+ ROOT_DS=$(zfs list -o name | awk '/ROOT\/ubuntu_/{print $1;exit}')
+ zfs create -o com.ubuntu.zsys:bootfs-datasets=$ROOT_DS \
+ -o canmount=on -o mountpoint=/home/username \
+ rpool/USERDATA/username_$UUID
+ adduser username
- cp -a /etc/skel/. /home/username
- chown -R username:username /home/username
- usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo username
+ cp -a /etc/skel/. /home/username
+ chown -R username:username /home/username
+ usermod -a -G adm,cdrom,dip,lpadmin,lxd,plugdev,sambashare,sudo username
Step 7: Full Software Installation
----------------------------------
-7.1 Upgrade the minimal system::
+#. Upgrade the minimal system::
- apt dist-upgrade --yes
+ apt dist-upgrade --yes
-7.2 Install a regular set of software:
+#. Install a regular set of software:
-Choose one of the following options:
+ Choose one of the following options:
-7.2a Install a command-line environment only::
+ - Install a command-line environment only::
- apt install --yes ubuntu-standard
+ apt install --yes ubuntu-standard
-7.2b Install a full GUI environment::
+ - Install a full GUI environment::
- apt install --yes ubuntu-desktop
- vi /etc/gdm3/custom.conf
- # In the [daemon] section, add: InitialSetupEnable=false
+ apt install --yes ubuntu-desktop
+ vi /etc/gdm3/custom.conf
+ # In the [daemon] section, add: InitialSetupEnable=false
-**Hint**: If you are installing a full GUI environment, you will likely
-want to manage your network with NetworkManager::
+ **Hint**: If you are installing a full GUI environment, you will likely
+ want to manage your network with NetworkManager::
- rm /mnt/etc/netplan/01-netcfg.yaml
- vi /etc/netplan/01-network-manager-all.yaml
+ rm /etc/netplan/01-netcfg.yaml
+ vi /etc/netplan/01-network-manager-all.yaml
-.. code-block:: yaml
+ .. code-block:: yaml
- network:
- version: 2
- renderer: NetworkManager
+ network:
+ version: 2
+ renderer: NetworkManager
-7.3 Optional: Disable log compression:
+#. Optional: Disable log compression:
-As ``/var/log`` is already compressed by ZFS, logrotate’s compression is
-going to burn CPU and disk I/O for (in most cases) very little gain.
-Also, if you are making snapshots of ``/var/log``, logrotate’s
-compression will actually waste space, as the uncompressed data will
-live on in the snapshot. You can edit the files in ``/etc/logrotate.d``
-by hand to comment out ``compress``, or use this loop (copy-and-paste
-highly recommended)::
+ As ``/var/log`` is already compressed by ZFS, logrotate’s compression is
+ going to burn CPU and disk I/O for (in most cases) very little gain. Also,
+ if you are making snapshots of ``/var/log``, logrotate’s compression will
+ actually waste space, as the uncompressed data will live on in the
+ snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment
+ out ``compress``, or use this loop (copy-and-paste highly recommended)::
- for file in /etc/logrotate.d/* ; do
- if grep -Eq "(^|[^#y])compress" "$file" ; then
- sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
- fi
- done
+ for file in /etc/logrotate.d/* ; do
+ if grep -Eq "(^|[^#y])compress" "$file" ; then
+ sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
+ fi
+ done
-7.4 Reboot::
+#. Reboot::
- reboot
+ reboot
Step 8: Final Cleanup
---------------------
-8.1 Wait for the system to boot normally. Login using the account you
-created. Ensure the system (including networking) works normally.
+#. Wait for the system to boot normally. Login using the account you
+ created. Ensure the system (including networking) works normally.
-8.2 Optional: Disable the root password::
+#. Optional: Disable the root password::
- sudo usermod -p '*' root
+ sudo usermod -p '*' root
-8.3 Optional: Re-enable the graphical boot process:
+#. Optional: Re-enable the graphical boot process:
-If you prefer the graphical boot process, you can re-enable it now. If
-you are using LUKS, it makes the prompt look nicer.
+ If you prefer the graphical boot process, you can re-enable it now. If
+ you are using LUKS, it makes the prompt look nicer.
-::
+ ::
- sudo vi /etc/default/grub
- # Uncomment: GRUB_TIMEOUT_STYLE=hidden
- # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
- # Comment out: GRUB_TERMINAL=console
- # Save and quit.
+ sudo vi /etc/default/grub
+ # Uncomment: GRUB_TIMEOUT_STYLE=hidden
+ # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT
+ # Comment out: GRUB_TERMINAL=console
+ # Save and quit.
- sudo update-grub
+ sudo update-grub
-**Note:** Ignore errors from ``osprober``, if present.
+ **Note:** Ignore errors from ``osprober``, if present.
-8.4 Optional: For LUKS installs only, backup the LUKS header::
+#. Optional: For LUKS installs only, backup the LUKS header::
- sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
- --header-backup-file luks1-header.dat
+ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
+ --header-backup-file luks1-header.dat
-Store that backup somewhere safe (e.g. cloud storage). It is protected
-by your LUKS passphrase, but you may wish to use additional encryption.
+ Store that backup somewhere safe (e.g. cloud storage). It is protected by
+ your LUKS passphrase, but you may wish to use additional encryption.
-**Hint:** If you created a mirror or raidz topology, repeat this for
-each LUKS volume (``luks2``, etc.).
+ **Hint:** If you created a mirror or raidz topology, repeat this for each
+ LUKS volume (``luks2``, etc.).
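+
+   **Hint:** Should the header ever be damaged, it can be restored from the
+   backup with the matching command (shown against the same example disk)::
+
+      sudo cryptsetup luksHeaderRestore /dev/disk/by-id/scsi-SATA_disk1-part4 \
+          --header-backup-file luks1-header.dat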
Troubleshooting
---------------
@@ -954,8 +982,8 @@ Troubleshooting
Rescuing using a Live CD
~~~~~~~~~~~~~~~~~~~~~~~~
-Go through `Step 1: Prepare The Install
-Environment <#step-1-prepare-the-install-environment>`__.
+Go through `Step 1: Prepare The Install Environment
+<#step-1-prepare-the-install-environment>`__.
For LUKS, first unlock the disk(s)::
@@ -985,45 +1013,38 @@ Do whatever you need to do to fix your system.
When done, cleanup::
exit
- mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
+ mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
+ xargs -i{} umount -lf {}
zpool export -a
reboot
-MPT2SAS
-~~~~~~~
-
-Most problem reports for this tutorial involve ``mpt2sas`` hardware that
-does slow asynchronous drive initialization, like some IBM M1015 or
-OEM-branded cards that have been flashed to the reference LSI firmware.
-
-The basic problem is that disks on these controllers are not visible to
-the Linux kernel until after the regular system is started, and ZoL does
-not hotplug pool members. See
-`https://github.com/zfsonlinux/zfs/issues/330 `__.
-
-Most LSI cards are perfectly compatible with ZoL. If your card has this
-glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in
-``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to
-appear before importing the pool.
-
Areca
~~~~~
Systems that require the ``arcsas`` blob driver should add it to the
-``/etc/initramfs-tools/modules`` file and run
-``update-initramfs -c -k all``.
+``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``.
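+
+For example (``arcsas`` being the blob driver named above)::
+
+   echo arcsas >> /etc/initramfs-tools/modules
+   update-initramfs -c -k all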
Upgrade or downgrade the Areca driver if something like
``RIP: 0010:[] [] native_read_tsc+0x6/0x20``
-appears anywhere in kernel log. ZoL is unstable on systems that emit
-this error message.
+appears anywhere in the kernel log. ZoL is unstable on systems that emit this
+error message.
-VMware
-~~~~~~
+MPT2SAS
+~~~~~~~
-- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere
- configuration. Doing this ensures that ``/dev/disk`` aliases are
- created in the guest.
+Most problem reports for this tutorial involve ``mpt2sas`` hardware that does
+slow asynchronous drive initialization, like some IBM M1015 or OEM-branded
+cards that have been flashed to the reference LSI firmware.
+
+The basic problem is that disks on these controllers are not visible to the
+Linux kernel until after the regular system is started, and ZoL does not
+hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330
+<https://github.com/zfsonlinux/zfs/issues/330>`__.
+
+Most LSI cards are perfectly compatible with ZoL. If your card has this
+glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in
+``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to
+appear before importing the pool.
QEMU/KVM/XEN
~~~~~~~~~~~~
@@ -1052,3 +1073,9 @@ Uncomment these lines:
::
sudo systemctl restart libvirtd.service
+
+VMware
+~~~~~~
+
+- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vSphere configuration.
+  Doing this ensures that ``/dev/disk`` aliases are created in the guest.
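+
+  After setting the option and rebooting the guest, a quick way to confirm
+  the aliases exist (illustrative)::
+
+     ls -l /dev/disk/by-id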