Debian/Ubuntu: Autonumber (current) HOWTOs

This also involved a bunch of reindenting and rewrapping.

I reworked the Live CD sources on Debian to be more consistent with
Ubuntu (and the chroot sources configuration on Debian).

I reworked the zpool create wrapping and the mirror/raidz notes.  This
should make it clearer how to create the multi-disk topologies.

I fixed a couple of formatting issues too (mainly one backtick where
there should have been two).

Signed-off-by: Richard Laager <rlaager@wiktel.com>
Richard Laager
2020-05-25 01:05:21 -05:00
parent 277ad2a070
commit 67561688af
2 changed files with 1284 additions and 1219 deletions


@@ -19,19 +19,19 @@ Caution
System Requirements
~~~~~~~~~~~~~~~~~~~
- `64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. gnome iso)
<https://cdimage.debian.org/mirror/cdimage/release/current-live/amd64/iso-hybrid/>`__
- `A 64-bit kernel is strongly encouraged.
<https://github.com/zfsonlinux/zfs/wiki/FAQ#32-bit-vs-64-bit-systems>`__
- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive)
only works with UEFI booting. This is not unique to ZFS. `GRUB does not and
will not work on 4Kn with legacy (BIOS) booting.
<http://savannah.gnu.org/bugs/?46700>`__
Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory
is recommended for normal performance in basic workloads. If you wish to use
deduplication, you will need `massive amounts of RAM
<http://wiki.freebsd.org/ZFSTuningGuide#Deduplication>`__. Enabling
deduplication is a permanent change that cannot be easily reverted.
Support
@@ -99,17 +99,28 @@ encrypted once per disk.
Step 1: Prepare The Install Environment
---------------------------------------
#. Boot the Debian GNU/Linux Live CD. If prompted, login with the username
``user`` and password ``live``. Connect your system to the Internet as
appropriate (e.g. join your WiFi network). Open a terminal.
#. Setup and update the repositories::
sudo vi /etc/apt/sources.list
.. code-block:: sourceslist
deb http://deb.debian.org/debian buster main contrib
deb-src http://deb.debian.org/debian buster main contrib
deb http://deb.debian.org/debian buster-backports main contrib
::
sudo apt update
#. Optional: Install and start the OpenSSH server in the Live CD environment:
If you have a second system, using SSH to access the target system can be
convenient::
sudo apt install --yes openssh-server
sudo systemctl restart ssh
@@ -117,33 +128,28 @@ be convenient::
``ip addr show scope global | grep inet``. Then, from your main machine,
connect with ``ssh user@IP``.
#. Become root::
sudo -i
#. Install ZFS in the Live CD environment::
apt install --yes debootstrap gdisk dkms dpkg-dev \
linux-headers-$(uname -r)
apt install --yes -t buster-backports --no-install-recommends zfs-dkms
modprobe zfs
apt install --yes -t buster-backports zfsutils-linux
- The dkms dependency is installed manually just so it comes from buster
and not buster-backports. This is not critical.
- We need to get the module built and loaded before installing
zfsutils-linux or `zfs-mount.service will fail to start
<https://github.com/zfsonlinux/zfs/issues/9599>`__.
Step 2: Disk Formatting
-----------------------
#. Set a variable with the disk name::
DISK=/dev/disk/by-id/scsi-SATA_disk1
@@ -154,12 +160,12 @@ especially on systems that have more than one storage pool.
**Hints:**
- ``ls -la /dev/disk/by-id`` will list the aliases.
- Are you doing this in a virtual machine? If your virtual disk is missing
from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using KVM with
virtio; otherwise, read the `troubleshooting <#troubleshooting>`__
section.
#. If you are re-using a disk, clear it as necessary:
If the disk was previously used in an MD array, zero the superblock::
@@ -170,7 +176,7 @@ Clear the partition table::
sgdisk --zap-all $DISK
#. Partition your disk(s):
Run this if you need legacy (BIOS) booting::
@@ -186,20 +192,21 @@ Run this for the boot pool::
Choose one of the following options:
- Unencrypted or ZFS native encryption::
sgdisk -n4:0:0 -t4:BF01 $DISK
- LUKS::
sgdisk -n4:0:0 -t4:8300 $DISK
If you are creating a mirror or raidz topology, repeat the partitioning
commands for all the disks which will be part of the pool.
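For multiple disks, that repetition can be scripted. A minimal sketch,
assuming two hypothetical disk aliases and the non-LUKS partition type
(substitute your own aliases, and repeat every partitioning command from
above inside the loop):

```shell
# Hypothetical disk aliases -- substitute your own from /dev/disk/by-id.
for DISK in /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2 ; do
    sgdisk --zap-all $DISK         # clear any existing partition table
    # ...repeat the boot/root partitioning commands from above here...
    sgdisk -n4:0:0 -t4:BF01 $DISK  # root pool partition (non-LUKS layout)
done
```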
#. Create the boot pool::
zpool create \
-o ashift=12 -d \
-o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
@@ -212,15 +219,16 @@ commands for all the disks which will be part of the pool.
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o feature@zpool_checkpoint=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/boot -R /mnt \
bpool ${DISK}-part3
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features. Note that GRUB opens the pool read-only, so all
@@ -228,59 +236,74 @@ read-only compatible features are “supported” by GRUB.
**Hints:**
- If you are creating a mirror topology, create the pool using::
zpool create \
... \
bpool mirror \
/dev/disk/by-id/scsi-SATA_disk1-part3 \
/dev/disk/by-id/scsi-SATA_disk2-part3
- For raidz topologies, replace ``mirror`` in the above command with
``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from
additional disks.
- The pool name is arbitrary. If changed, the new name must be used
consistently. The ``bpool`` convention originated in this HOWTO.
**Feature Notes:**
- The ``allocation_classes`` feature should be safe to use. However, unless
one is using it (i.e. a ``special`` vdev), there is no point to enabling
it. It is extremely unlikely that someone would use this feature for a
boot pool. If one cares about speeding up the boot pool, it would make
more sense to put the whole pool on the faster disk rather than using it
as a ``special`` vdev.
- The ``project_quota`` feature has been tested and is safe to use. This
feature is extremely unlikely to matter for the boot pool.
- The ``resilver_defer`` should be safe but the boot pool is small enough
that it is unlikely to be necessary.
- The ``spacemap_v2`` feature has been tested and is safe to use. The boot
pool is small, so this does not matter in practice.
- As a read-only compatible feature, the ``userobj_accounting`` feature
should be compatible in theory, but in practice, GRUB can fail with an
“invalid dnode type” error. This feature does not matter for ``/boot``
anyway.
#. Create the root pool:
Choose one of the following options:
- Unencrypted::
zpool create \
-o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on \
-O xattr=sa -O mountpoint=/ -R /mnt \
rpool ${DISK}-part4
- ZFS native encryption::
zpool create \
-o ashift=12 \
-O encryption=aes-256-gcm \
-O keylocation=prompt -O keyformat=passphrase \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on \
-O xattr=sa -O mountpoint=/ -R /mnt \
rpool ${DISK}-part4
- LUKS::
apt install --yes cryptsetup
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
cryptsetup luksOpen ${DISK}-part4 luks1
zpool create \
-o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on \
-O xattr=sa -O mountpoint=/ -R /mnt \
rpool /dev/mapper/luks1
**Notes:**
@@ -292,17 +315,17 @@ Choose one of the following options:
- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
do not want this, remove that option, but later add
``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
for ``/var/log``, as `journald requires ACLs
<https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__
- Setting ``normalization=formD`` eliminates some corner cases relating
to UTF-8 filename normalization. It also implies ``utf8only=on``,
which means that only UTF-8 filenames are allowed. If you care to
support non-UTF-8 filenames, do not use this option. For a discussion
of why requiring UTF-8 filenames may be a bad idea, see `The problems
with enforced UTF-8 only filenames
<http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__.
- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you
want to tune it (e.g. ``-o recordsize=1M``), see `these
<https://jrs-s.net/2019/04/03/on-zfs-recordsize/>`__ `various
<http://blog.programster.org/zfs-record-size>`__ `blog
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFileRecordsizeGrowth>`__
@@ -312,74 +335,82 @@ Choose one of the following options:
``atime`` behavior (with its significant performance impact) and
``atime=off`` (which provides the best performance by completely
disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
the default for other filesystems. See `Red Hat's documentation
<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__
for further information.
- Setting ``xattr=sa`` `vastly improves the performance of extended
attributes
<https://github.com/zfsonlinux/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355>`__.
Inside ZFS, extended attributes are used to implement POSIX ACLs.
Extended attributes can also be used by user-space applications.
`They are used by some desktop GUI applications.
<https://en.wikipedia.org/wiki/Extended_file_attributes#Linux>`__
`They can be used by Samba to store Windows ACLs and DOS attributes;
they are required for a Samba Active Directory domain controller.
<https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs>`__
Note that ``xattr=sa`` is `Linux-specific
<http://open-zfs.org/wiki/Platform_code_differences>`__. If you move your
``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux,
extended attributes will not be readable (though your data will be). If
portability of extended attributes is important to you, omit the
``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole
pool, it is probably fine to use it for ``/var/log``.
- Make sure to include the ``-part4`` portion of the drive path. If you
forget that, you are specifying the whole disk, which ZFS will then
re-partition, and you will lose the bootloader partition(s).
- ZFS native encryption defaults to ``aes-256-ccm``, but `the default has
changed upstream
<https://github.com/openzfs/zfs/commit/31b160f0a6c673c8f926233af2ed6d5354808393>`__
to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM
<https://crypto.stackexchange.com/questions/6842/how-to-choose-between-aes-ccm-and-aes-gcm-for-storage-volume-encryption>`__,
`is faster now
<https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997>`__,
and `will be even faster in the future
<https://github.com/zfsonlinux/zfs/pull/9749>`__.
- For LUKS, the key size chosen is 512 bits. However, XTS mode requires two
keys, so the LUKS key is split in half. Thus, ``-s 512`` means AES-256.
- Your passphrase will likely be the weakest link. Choose wisely. See
`section 5 of the cryptsetup FAQ
<https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects>`__
for guidance.
**Hints:**
- If you are creating a mirror topology, create the pool using::

zpool create \
... \
rpool mirror \
/dev/disk/by-id/scsi-SATA_disk1-part4 \
/dev/disk/by-id/scsi-SATA_disk2-part4
- For raidz topologies, replace ``mirror`` in the above command with
``raidz``, ``raidz2``, or ``raidz3`` and list the partitions from
additional disks.
- When using LUKS with mirror or raidz topologies, use
``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will have
to create using ``cryptsetup``.
- The pool name is arbitrary. If changed, the new name must be used
consistently. On systems that can automatically install to ZFS, the root
pool is named ``rpool`` by default.
Step 3: System Installation
---------------------------
#. Create filesystem datasets to act as containers::
zfs create -o canmount=off -o mountpoint=none rpool/ROOT
zfs create -o canmount=off -o mountpoint=none bpool/BOOT
On Solaris systems, the root filesystem is cloned and the suffix is
incremented for major system changes through ``pkg image-update`` or
``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with
the ``zsys`` tool, though its dataset layout is more complicated. Even
without such a tool, the ``rpool/ROOT`` and ``bpool/BOOT`` containers can
still be used for manually created clones. That said, this HOWTO assumes a
single filesystem for ``/boot`` for simplicity.
#. Create filesystem datasets for the root and boot filesystems::
zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
zfs mount rpool/ROOT/debian
@@ -391,7 +422,7 @@ With ZFS, it is not normally necessary to use a mount command (either
``mount`` or ``zfs mount``). This situation is an exception because of
``canmount=noauto``.
#. Create datasets::
zfs create rpool/home
zfs create -o mountpoint=/root rpool/home/root
@@ -457,31 +488,30 @@ A tmpfs is recommended later, but if you want a separate dataset for
zfs create -o com.sun:auto-snapshot=false rpool/tmp
chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user
data. This allows the root filesystem to be rolled back without rolling
back user data.
If you do nothing extra, ``/tmp`` will be stored as part of the root
filesystem. Alternatively, you can create a separate dataset for ``/tmp``,
as shown above. This keeps the ``/tmp`` data out of snapshots of your root
filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want
to limit the maximum space used. Otherwise, you can use a tmpfs (RAM
filesystem) later.
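For example, a quota on ``rpool/tmp`` can be set (or changed) at any time;
the 4G value here is only an illustration:

```shell
zfs set quota=4G rpool/tmp   # cap /tmp at 4 GiB
zfs get quota rpool/tmp      # verify the setting
```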
#. Install the minimal system::
debootstrap buster /mnt
zfs set devices=off rpool
The ``debootstrap`` command leaves the new system in an unconfigured state.
An alternative to using ``debootstrap`` is to copy the entirety of a
working system into the new ZFS root.
Step 4: System Configuration
----------------------------
#. Configure the hostname:
Replace ``HOSTNAME`` with the desired hostname::
@@ -497,13 +527,13 @@ Replace ``HOSTNAME`` with the desired hostname::
**Hint:** Use ``nano`` if you find ``vi`` confusing.
#. Configure the network interface:
Find the interface name::
ip addr show
Adjust ``NAME`` below to match your interface name::
vi /mnt/etc/network/interfaces.d/NAME
@@ -514,7 +544,7 @@ Adjust NAME below to match your interface name::
Customize this file if the system is not a DHCP client.
#. Configure the package sources::
vi /mnt/etc/apt/sources.list
@@ -542,7 +572,7 @@ Customize this file if the system is not a DHCP client.
Pin: release n=buster-backports
Pin-Priority: 990
#. Bind the virtual filesystems from the LiveCD environment to the new
system and ``chroot`` into it::
mount --rbind /dev /mnt/dev
@@ -552,7 +582,7 @@ system and ``chroot`` into it::
**Note:** This is using ``--rbind``, not ``--bind``.
#. Configure a basic system environment::
ln -s /proc/self/mounts /etc/mtab
apt update
@@ -565,36 +595,37 @@ Even if you prefer a non-English system language, always ensure that
dpkg-reconfigure tzdata
#. Install ZFS in the chroot environment for the new system::
apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
apt install --yes zfs-initramfs
echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
#. For LUKS installs only, setup ``/etc/crypttab``::
apt install --yes cryptsetup
echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
luks,discard,initramfs > /etc/crypttab
The use of ``initramfs`` is a work-around for `cryptsetup does not support
ZFS <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
**Hint:** If you are creating a mirror or raidz topology, repeat the
``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk.
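For example, a second disk's entry can be generated the same way as the
first; ``DISK2`` here is a hypothetical variable for the second disk's
alias:

```shell
# DISK2 is a placeholder -- set it to the second disk's /dev/disk/by-id alias.
DISK2=/dev/disk/by-id/scsi-SATA_disk2
echo luks2 UUID=$(blkid -s UUID -o value ${DISK2}-part4) none \
    luks,discard,initramfs >> /etc/crypttab
```

Note the append (``>>``), so the ``luks1`` line is preserved.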
#. Install GRUB
Choose one of the following options:
- Install GRUB for legacy (BIOS) booting::
apt install --yes grub-pc
Select (using the space bar) all of the disks (not partitions) in your
pool.
- Install GRUB for UEFI booting::
apt install dosfstools
mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2
@@ -613,18 +644,18 @@ Select (using the space bar) all of the disks (not partitions) in your pool.
- For a mirror or raidz topology, this step only installs GRUB on the
first disk. The other disk(s) will be handled later.
#. Optional: Remove os-prober::
dpkg --purge os-prober
This avoids error messages from ``update-grub``. ``os-prober`` is only
necessary in dual-boot configurations.
#. Set a root password::
passwd
#. Enable importing bpool
This ensures that ``bpool`` is always imported, regardless of whether
``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not,
@@ -653,7 +684,7 @@ or whether ``zfs-import-scan.service`` is enabled.
systemctl enable zfs-import-bpool.service
#. Optional (but recommended): Mount a tmpfs to ``/tmp``
If you chose to create a ``/tmp`` dataset above, skip this step, as they
are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
@@ -664,7 +695,7 @@ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount
#. Optional (but kindly requested): Install popcon
The ``popularity-contest`` package reports the list of packages installed
on your system. Showing that ZFS is popular may be helpful in terms of
@@ -679,11 +710,11 @@ Choose Yes at the prompt.
Step 5: GRUB Installation
-------------------------
#. Verify that the ZFS boot filesystem is recognized::
grub-probe /boot
#. Refresh the initrd files::
update-initramfs -c -k all
@@ -692,12 +723,12 @@ root device from /etc/fstab”. This is because `cryptsetup does not
support ZFS
<https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
#. Workaround GRUB's missing zpool-features support::
vi /etc/default/grub
# Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"
#. Optional (but highly recommended): Make debugging GRUB easier::
vi /etc/default/grub
# Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT
@@ -707,24 +738,24 @@ support ZFS
Later, once the system has rebooted twice and you are sure everything is
working, you can undo these changes, if desired.
#. Update the boot configuration::
update-grub
**Note:** Ignore errors from ``os-prober``, if present.
#. Install the boot loader:
#. For legacy (BIOS) booting, install GRUB to the MBR::
grub-install $DISK
Note that you are installing GRUB to the whole disk, not a partition.
If you are creating a mirror or raidz topology, repeat the ``grub-install``
command for each disk in the pool.
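That repetition can also be scripted; a sketch with hypothetical disk
aliases:

```shell
# Substitute the actual disks in your pool (whole disks, not partitions).
for d in /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2 ; do
    grub-install $d
done
```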
#. For UEFI booting, install GRUB to the ESP::
grub-install --target=x86_64-efi --efi-directory=/boot/efi \
--bootloader-id=debian --recheck --no-floppy
@@ -732,14 +763,13 @@ If you are creating a mirror or raidz topology, repeat the
It is not necessary to specify the disk here. If you are creating a
mirror or raidz topology, the additional disks will be handled later.
#. Fix filesystem mount ordering:
We need to activate ``zfs-mount-generator``. This makes systemd aware of
the separate mountpoints, which is important for things like ``/var/log``
and ``/var/tmp``. In turn, ``rsyslog.service`` depends on ``var-log.mount``
by way of ``local-fs.target`` and services using the ``PrivateTmp`` feature
of systemd automatically use ``After=var-tmp.mount``.
::
@@ -771,7 +801,15 @@ Fix the paths to eliminate ``/mnt``::
Step 6: First Boot
------------------
#. Optional: Install SSH::
apt install --yes openssh-server
If you want to login as root via SSH, set ``PermitRootLogin yes`` in
``/etc/ssh/sshd_config``. For security, undo this as soon as possible
(i.e. once you have your regular user account set up).
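One possible way to make that change non-interactively (a sketch; in the
stock Debian ``sshd_config`` the directive is only present commented out,
so appending works — review the file if yours differs):

```shell
echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
systemctl restart ssh
# Remember to remove this line once your regular user account works.
```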
#. Optional: Snapshot the initial installation::
zfs snapshot bpool/BOOT/debian@install
zfs snapshot rpool/ROOT/debian@install
@@ -780,23 +818,24 @@ In the future, you will likely want to take snapshots before each
upgrade, and remove old snapshots (including this one) at some point to
save space.
#. Exit from the ``chroot`` environment back to the LiveCD environment::
exit
#. Run these commands in the LiveCD environment to unmount all
filesystems::
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
xargs -i{} umount -lf {}
zpool export -a
#. Reboot::
reboot
Wait for the newly installed system to boot normally. Login as root.
#. Create a user account:
Replace ``username`` with your desired username::
@@ -807,18 +846,19 @@ Replace ``username`` with your desired username::
chown -R username:username /home/username
usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
#. Mirror GRUB
If you installed to multiple disks, install GRUB on the additional
disks.
- For legacy (BIOS) booting::
dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool.
- For UEFI booting::
umount /boot/efi
@@ -831,15 +871,15 @@ For the second and subsequent disks (increment debian-2 to -3, etc.)::
mount /boot/efi
Step 7: Optional: Configure Swap
---------------------------------
**Caution**: On systems with extremely high memory pressure, using a
zvol for swap can result in lockup, regardless of how much swap is still
available. There is `a bug report upstream
<https://github.com/zfsonlinux/zfs/issues/7734>`__.
#. Create a volume dataset (zvol) for use as a swap device::
zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \
-o logbias=throughput -o sync=always \
compression algorithm can reduce I/O. The exception is all-zero pages,
which are dropped by ZFS; but some form of compression has to be enabled
to get this behavior.
#. Configure the swap device:
**Caution**: Always use long ``/dev/zvol`` aliases in configuration
files. Never use a short ``/dev/zdX`` device name.
yet been imported) at the time the resume script runs. If it is not
disabled, the boot process hangs for 30 seconds waiting for the swap
zvol to appear.
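The configuration commands for this step are elided in this excerpt. As a hedged sketch of what the step amounts to (the zvol name ``rpool/swap`` and the exact ``fstab`` options are assumptions; adjust to your pool layout):

```shell
# Sketch only: assumes the swap zvol was created as rpool/swap.
mkswap -f /dev/zvol/rpool/swap
echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab
# Disable resume-from-swap in the initramfs so boot does not wait on the zvol:
echo RESUME=none > /etc/initramfs-tools/conf.d/resume
```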
#. Enable the swap device::
swapon -av
Step 8: Full Software Installation
----------------------------------
#. Upgrade the minimal system::
apt dist-upgrade --yes
#. Install a regular set of software::
tasksel
#. Optional: Disable log compression:
As ``/var/log`` is already compressed by ZFS, logrotate's compression is
going to burn CPU and disk I/O for (in most cases) very little gain. Also,
if you are making snapshots of ``/var/log``, logrotate's compression will
actually waste space, as the uncompressed data will live on in the
snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment
out ``compress``, or use this loop (copy-and-paste highly recommended)::
for file in /etc/logrotate.d/* ; do
if grep -Eq "(^|[^#y])compress" "$file" ; then
sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
fi
done
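To see what commenting out ``compress`` involves, the substitution can be exercised on sample logrotate lines. The ``sed`` expression below is an illustration that mirrors the ``grep`` pattern used in the loop; the ``(^|[^#y])`` guard leaves ``delaycompress`` and already-commented lines alone:

```shell
# Comment out bare `compress` directives; `delaycompress` and `#compress`
# are untouched because of the (^|[^#y]) guard.
printf '%s\n' '    compress' '    delaycompress' '#compress' \
  | sed -r 's/(^|[^#y])(compress)/\1#\2/'
# prints:
#     #compress
#     delaycompress
# #compress
```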
#. Reboot::
reboot
Step 9: Final Cleanup
---------------------
#. Wait for the system to boot normally. Login using the account you
created. Ensure the system (including networking) works normally.
#. Optional: Delete the snapshots of the initial installation::
sudo zfs destroy bpool/BOOT/debian@install
sudo zfs destroy rpool/ROOT/debian@install
#. Optional: Disable the root password::
sudo usermod -p '*' root
#. Optional: Re-enable the graphical boot process:
If you prefer the graphical boot process, you can re-enable it now. If
you are using LUKS, it makes the prompt look nicer.
**Note:** Ignore errors from ``osprober``, if present.
#. Optional: For LUKS installs only, backup the LUKS header::
sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \
--header-backup-file luks1-header.dat
Store that backup somewhere safe (e.g. cloud storage). It is protected by
your LUKS passphrase, but you may wish to use additional encryption.
**Hint:** If you created a mirror or raidz topology, repeat this for each
LUKS volume (``luks2``, etc.).
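Should the header ever need to be restored, ``cryptsetup`` provides the inverse operation. A hedged sketch, reusing the device path and backup file name from the backup command above:

```shell
# Restore a previously saved LUKS header. Destructive: this overwrites the
# on-disk header, so only do it deliberately, e.g. from a rescue environment.
sudo cryptsetup luksHeaderRestore /dev/disk/by-id/scsi-SATA_disk1-part4 \
    --header-backup-file luks1-header.dat
```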
Troubleshooting
---------------
Rescuing using a Live CD
~~~~~~~~~~~~~~~~~~~~~~~~
Go through `Step 1: Prepare The Install Environment
<#step-1-prepare-the-install-environment>`__.
For LUKS, first unlock the disk(s)::
Do whatever you need to do to fix your system.
When done, cleanup::
exit
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | \
xargs -i{} umount -lf {}
zpool export -a
reboot
Areca
~~~~~
Systems that require the ``arcsas`` blob driver should add it to the
``/etc/initramfs-tools/modules`` file and run ``update-initramfs -c -k all``.
Upgrade or downgrade the Areca driver if something like
``RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20``
appears anywhere in the kernel log. ZoL is unstable on systems that emit this
error message.
MPT2SAS
~~~~~~~
Most problem reports for this tutorial involve ``mpt2sas`` hardware that does
slow asynchronous drive initialization, like some IBM M1015 or OEM-branded
cards that have been flashed to the reference LSI firmware.
The basic problem is that disks on these controllers are not visible to the
Linux kernel until after the regular system is started, and ZoL does not
hotplug pool members. See `https://github.com/zfsonlinux/zfs/issues/330
<https://github.com/zfsonlinux/zfs/issues/330>`__.
Most LSI cards are perfectly compatible with ZoL. If your card has this
glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in
``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to
appear before importing the pool.
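As a hedged example of the workaround (the 30-second value is an arbitrary assumption; tune it to however long your controller takes to present its disks):

```shell
# /etc/default/zfs -- give slow mpt2sas controllers time to present their
# disks before the initramfs imports the pool.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='30'
```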
QEMU/KVM/XEN
~~~~~~~~~~~~
Uncomment these lines:
::
sudo systemctl restart libvirtd.service
VMware
~~~~~~
- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere configuration.
Doing this ensures that ``/dev/disk`` aliases are created in the guest.
