Add Fedora Root on ZFS support (#177)

* Add support for Fedora Root on ZFS

* use /usr/sbin/zfs

* 2.0.5 is now available with support for latest kernel

* require cryptsetup.target before mounting swap

* disable sshd

* fedora redirect

Signed-off-by: Maurice Zhou <jasper@apvc.uk>
ne9z
2021-08-03 22:42:08 +08:00
committed by GitHub
parent f5a2a4e05b
commit de588ca3d3
11 changed files with 1234 additions and 86 deletions


@@ -1,88 +1,7 @@
+:orphan:
+
 Fedora
-======
+=======================
+
+This page has been moved to `here <Fedora/index.html>`__.
-
-Only `DKMS`_ style packages can be provided for Fedora from the official
-OpenZFS repository. This is because Fedora is a fast-moving distribution
-which does not provide a stable kABI. These packages track the official
-OpenZFS tags and are updated as new versions are released. Packages are
-available for the following configurations:
-
-| **Fedora Releases:** 32, 33, 34
-| **Architectures:** x86_64
-
-.. note::
-   Due to the release cycle of OpenZFS and Fedora's rapid adoption of new
-   kernels it may happen that you won't be able to build DKMS packages for
-   the most recent kernel update. If the `latest OpenZFS release`_ does
-   not yet support the installed Fedora kernel you will have to pin your
-   kernel to an earlier supported version.
-
-To simplify installation a *zfs-release* package is provided which includes
-a zfs.repo configuration file and public signing key. All official
-OpenZFS packages are signed using this key, and by default dnf will verify a
-package's signature before allowing it to be installed. Users are strongly
-encouraged to verify the authenticity of the ZFS on Linux public key using
-the fingerprint listed here.
-
-| **Location:** /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
-| **Fedora 32 Package:** `zfs-release.fc32.noarch.rpm`_
-| **Fedora 33 Package:** `zfs-release.fc33.noarch.rpm`_
-| **Fedora 34 Package:** `zfs-release.fc34.noarch.rpm`_
-| **Download from:**
-  `pgp.mit.edu <https://pgp.mit.edu/pks/lookup?search=0xF14AB620&op=index&fingerprint=on>`__
-| **Fingerprint:** C93A FFFD 9F3F 7B03 C310 CEB6 A9D5 A1C0 F14A B620
-
-.. code:: sh
-
-   $ sudo dnf install https://zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm
-   $ gpg --import --import-options show-only /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
-   pub   rsa2048 2013-03-21 [SC]
-         C93AFFFD9F3F7B03C310CEB6A9D5A1C0F14AB620
-   uid           ZFS on Linux <zfs@zfsonlinux.org>
-   sub   rsa2048 2013-03-21 [E]
-
-The OpenZFS packages should be installed with ``dnf`` on Fedora. Note that
-it is important to make sure that the matching *kernel-devel* package is
-installed for the running kernel since DKMS requires it to build ZFS.
-
-.. code:: sh
-
-   $ sudo dnf install zfs
-
-If the Fedora-provided *zfs-fuse* package is already installed on the
-system, the ``dnf swap`` command should be used to replace the
-existing fuse packages with the ZFS on Linux packages.
-
-.. code:: sh
-
-   $ sudo dnf swap zfs-fuse zfs
-
-By default the OpenZFS kernel modules are automatically loaded when a ZFS
-pool is detected. If you would prefer to always load the modules at boot
-time you must create an ``/etc/modules-load.d/zfs.conf`` file.
-
-.. code:: sh
-
-   $ sudo sh -c "echo zfs >/etc/modules-load.d/zfs.conf"
-
-Testing Repositories
---------------------
-
-In addition to the primary *zfs* repository a *zfs-testing* repository
-is available. This repository, which is disabled by default, contains
-the latest version of OpenZFS which is under active development. These
-packages are made available in order to get feedback from users regarding
-the functionality and stability of upcoming releases. These packages
-**should not** be used on production systems. Packages from the testing
-repository can be installed as follows.
-
-::
-
-   $ sudo dnf config-manager --enable zfs-testing
-   $ sudo dnf install zfs
-
-.. _DKMS: https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support
-.. _latest OpenZFS release: https://github.com/openzfs/zfs/releases/latest
-.. _zfs-release.fc32.noarch.rpm: https://zfsonlinux.org/fedora/zfs-release.fc32.noarch.rpm
-.. _zfs-release.fc33.noarch.rpm: https://zfsonlinux.org/fedora/zfs-release.fc33.noarch.rpm
-.. _zfs-release.fc34.noarch.rpm: https://zfsonlinux.org/fedora/zfs-release.fc34.noarch.rpm


@@ -0,0 +1,11 @@
Fedora Root on ZFS
======================
`Start here <Root%20on%20ZFS/0-overview.html>`__.
Contents
--------
.. toctree::
:maxdepth: 2
:glob:
Root on ZFS/*


@@ -0,0 +1,112 @@
.. highlight:: sh
Overview
======================
This document describes how to install Fedora with ZFS as the root
file system.
Caution
~~~~~~~
- With less than 4 GB of RAM, DKMS might fail to build ZFS
  in the live environment.
- This guide wipes entire physical disks. Back up existing data.
- `GRUB does not and
will not work on 4Kn drives with legacy (BIOS) booting.
<http://savannah.gnu.org/bugs/?46700>`__
Partition layout
~~~~~~~~~~~~~~~~
GUID partition table (GPT) is used.
EFI system partition will be referred to as **ESP** in this document.
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Name | legacy boot | ESP | Boot pool | swap | root pool | remaining space |
+======================+======================+=======================+======================+=====================+=======================+=================+
| File system | | vfat | ZFS | swap | ZFS | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Size | 1M | 2G | 4G | depends on RAM size | | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Optional encryption | | *Secure Boot* | | plain dm-crypt | ZFS native encryption | |
| | | | | | | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Partition no. | 5 | 1 | 2 | 4 | 3 | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Mount point | | /boot/efi | /boot | | / | |
| | | /boot/efis/disk-part1 | | | | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
Dataset layout
~~~~~~~~~~~~~~
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| Dataset | canmount | mountpoint | container | notes |
+===========================+======================+======================+=====================================+===========================================+
| bpool | off | /boot | contains sys | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool | off | / | contains sys | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys | off | none | contains BOOT | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys | off | none | contains ROOT | sys is encryptionroot |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys/BOOT | off | none | contains boot environments | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/ROOT | off | none | contains boot environments | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/DATA | off | none | contains placeholder "default" | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/DATA/default    | off                  | /                    | contains user datasets              | child datasets inherit mountpoint         |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/DATA/default/ | on | /home (inherited) | no | |
| home | | | | user datasets, also called "shared |
| | | | | datasets", "persistent datasets"; also |
| | | | | include /var/lib, /srv, ... |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys/BOOT/default | noauto | legacy /boot | no | noauto is used to switch BE. because of |
| | | | | noauto, must use fstab to mount |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/ROOT/default | noauto | / | no | mounted by initrd zfs hook |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys/BOOT/be1 | noauto | legacy /boot | no | see bpool/sys/BOOT/default |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/ROOT/be1 | noauto | / | no | see rpool/sys/ROOT/default |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
Encryption
~~~~~~~~~~
- Swap
Swap is always encrypted. By default, swap is encrypted
with plain dm-crypt with key generated from ``/dev/urandom``
at every boot. Swap content does not persist between reboots.
- Root pool
ZFS native encryption can be optionally enabled for ``rpool/sys``
and child datasets.
Be aware that ZFS native encryption does not
encrypt some metadata of the datasets.
ZFS native encryption also does not change the master key when ``zfs change-key`` is invoked.
Therefore, you should wipe the disk when the password is compromised, to protect confidentiality.
See `zfs-load-key.8 <https://openzfs.github.io/openzfs-docs/man/8/zfs-load-key.8.html>`__
and `zfs-change-key.8 <https://openzfs.github.io/openzfs-docs/man/8/zfs-change-key.8.html>`__
for more information regarding ZFS native encryption.
Encryption is enabled at dataset creation and can not be disabled later.
- Boot pool
Boot pool can not be encrypted.
- Bootloader
Bootloader can not be encrypted.
However, with Secure Boot, the bootloader
can be verified by the motherboard firmware to be untampered,
which should be sufficient for most purposes.
Secure Boot is supported out-of-the-box by Fedora.
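
Whether Secure Boot is currently active can be checked from a shell;
a minimal sketch using the ``mokutil`` utility shipped by Fedora
(install it with dnf if missing)::

  # prints "SecureBoot enabled" or "SecureBoot disabled"
  mokutil --sb-state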


@@ -0,0 +1,117 @@
.. highlight:: sh
Preparation
======================
.. contents:: Table of Contents
:local:
#. Download a variant of Fedora 34 live image
and boot from it.
#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.
#. Set a root password or populate ``/root/.ssh/authorized_keys``.
#. Start SSH server::
echo PermitRootLogin yes >> /etc/ssh/sshd_config
systemctl start sshd
#. Connect from another computer::
ssh root@192.168.1.19
#. Set SELinux to permissive::
setenforce 0
#. Install ``kernel-devel``::
source /etc/os-release
dnf install -y https://dl.fedoraproject.org/pub/fedora/linux/releases/${VERSION_ID}/Everything/x86_64/os/Packages/k/kernel-devel-$(uname -r).rpm
#. Add ZFS repo::
dnf install -y https://zfsonlinux.org/fedora/zfs-release.fc${VERSION_ID}.noarch.rpm
#. Install ZFS packages::
dnf install -y zfs
#. Load kernel modules::
modprobe zfs
#. Install helper script and partition tool::
dnf install -y arch-install-scripts gdisk
#. Target Fedora version::
INST_FEDORA_VER='34'
#. Unique pool suffix. ZFS expects pool names to be
unique; therefore, it's recommended to create
pools with a unique suffix::
INST_UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
#. Identify this installation in ZFS filesystem path::
INST_ID=fedora
#. Target disk
List available disks with::
ls /dev/disk/by-id/*
If using virtio as disk bus, use
``/dev/disk/by-path/*`` or ``/dev/vd*``.
Declare disk array::
DISK=(/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR)
For single disk installation, use::
DISK=(/dev/disk/by-id/disk1)
#. Choose a primary disk. This disk will be used
for the primary EFI partition and hibernation, defaulting to
the first disk in the array::
INST_PRIMARY_DISK=${DISK[0]}
#. Set vdev topology; possible values are:
- (not set, single disk or striped; no redundancy)
- mirror
- raidz1
- raidz2
- raidz3
::
INST_VDEV=
#. Set partition size:
Set ESP size::
INST_PARTSIZE_ESP=2 # in GB
Set boot pool size. To avoid running out of space while using
boot environments, the minimum is 4GB. Adjust the size if you
intend to use multiple kernels or distros::
INST_PARTSIZE_BPOOL=4
Set swap size. It's `recommended <https://chrisdown.name/2018/01/02/in-defence-of-swap.html>`__
to set up a swap partition. If you intend to use hibernation,
the minimum should be no less than RAM size. Skip if swap is not needed::
INST_PARTSIZE_SWAP=8
Set root pool size; all remaining disk space is used if not set::
INST_PARTSIZE_RPOOL=
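
Before moving on to partitioning, it may help to confirm that the
required variables are set; a minimal sanity-check sketch (adjust the
list to the variables you actually use)::

  : "${INST_FEDORA_VER:?INST_FEDORA_VER is not set}"
  : "${INST_UUID:?INST_UUID is not set}"
  : "${INST_ID:?INST_ID is not set}"
  : "${INST_PRIMARY_DISK:?INST_PRIMARY_DISK is not set}"
  echo "DISK array: ${DISK[@]}"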


@@ -0,0 +1,229 @@
.. highlight:: sh
System Installation
======================
.. contents:: Table of Contents
:local:
#. Partition the disks.
See `Overview <0-overview.html>`__ for details::
for i in ${DISK[@]}; do
sgdisk --zap-all $i
sgdisk -n1:1M:+${INST_PARTSIZE_ESP}G -t1:EF00 $i
sgdisk -n2:0:+${INST_PARTSIZE_BPOOL}G -t2:BE00 $i
if [ "${INST_PARTSIZE_SWAP}" != "" ]; then
sgdisk -n4:0:+${INST_PARTSIZE_SWAP}G -t4:8200 $i
fi
if [ "${INST_PARTSIZE_RPOOL}" = "" ]; then
sgdisk -n3:0:0 -t3:BF00 $i
else
sgdisk -n3:0:+${INST_PARTSIZE_RPOOL}G -t3:BF00 $i
fi
sgdisk -a1 -n5:24K:+1000K -t5:EF02 $i
done
#. Create boot pool::
zpool create \
-d -o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R /mnt \
bpool_$INST_UUID \
$INST_VDEV \
$(for i in ${DISK[@]}; do
printf "$i-part2 ";
done)
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features.
Features enabled with ``-o compatibility=grub2`` can be seen
`here <https://github.com/openzfs/zfs/blob/master/cmd/zpool/compatibility.d/grub2>`__.
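
   If the live environment ships OpenZFS 2.1 or newer, a sketch of an
   alternative is to let the ``compatibility`` property select the
   GRUB-safe feature set instead of listing features by hand (all
   other options unchanged)::

     zpool create \
         -o compatibility=grub2 \
         -o ashift=12 \
         -o autotrim=on \
         -O acltype=posixacl \
         -O canmount=off \
         -O compression=lz4 \
         -O devices=off \
         -O normalization=formD \
         -O relatime=on \
         -O xattr=sa \
         -O mountpoint=/boot \
         -R /mnt \
         bpool_$INST_UUID \
         $INST_VDEV \
         $(for i in ${DISK[@]}; do
            printf "$i-part2 ";
           done)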
#. Create root pool::
zpool create \
-o ashift=12 \
-o autotrim=on \
-R /mnt \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
rpool_$INST_UUID \
$INST_VDEV \
$(for i in ${DISK[@]}; do
printf "$i-part3 ";
done)
**Notes:**
- The use of ``ashift=12`` is recommended here because many drives
today have 4 KiB (or larger) physical sectors, even though they
present 512 B logical sectors. Also, a future replacement drive may
have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
or 4 KiB logical sectors (in which case ``ashift=12`` is required).
- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
do not want this, remove that option, but later add
``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
for ``/var/log``, as `journald requires ACLs
<https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__.
- Setting ``normalization=formD`` eliminates some corner cases relating
to UTF-8 filename normalization. It also implies ``utf8only=on``,
which means that only UTF-8 filenames are allowed. If you care to
support non-UTF-8 filenames, do not use this option. For a discussion
of why requiring UTF-8 filenames may be a bad idea, see `The problems
with enforced UTF-8 only filenames
<http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__.
- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you
want to tune it (e.g. ``-o recordsize=1M``), see `these
<https://jrs-s.net/2019/04/03/on-zfs-recordsize/>`__ `various
<http://blog.programster.org/zfs-record-size>`__ `blog
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFileRecordsizeGrowth>`__
`posts
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSRecordsizeAndCompression>`__.
- Setting ``relatime=on`` is a middle ground between classic POSIX
``atime`` behavior (with its significant performance impact) and
``atime=off`` (which provides the best performance by completely
disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
the default for other filesystems. See `Red Hat's documentation
<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__
for further information.
- Setting ``xattr=sa`` `vastly improves the performance of extended
attributes
<https://github.com/zfsonlinux/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355>`__.
Inside ZFS, extended attributes are used to implement POSIX ACLs.
Extended attributes can also be used by user-space applications.
`They are used by some desktop GUI applications.
<https://en.wikipedia.org/wiki/Extended_file_attributes#Linux>`__
`They can be used by Samba to store Windows ACLs and DOS attributes;
they are required for a Samba Active Directory domain controller.
<https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs>`__
Note that ``xattr=sa`` is `Linux-specific
<https://openzfs.org/wiki/Platform_code_differences>`__. If you move your
``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux,
extended attributes will not be readable (though your data will be). If
portability of extended attributes is important to you, omit the
``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole
pool, it is probably fine to use it for ``/var/log``.
- Make sure to include the ``-part3`` portion of the drive path. If you
forget that, you are specifying the whole disk, which ZFS will then
re-partition, and you will lose the bootloader partition(s).
#. This section implements the dataset layout described in `overview <0-overview.html>`__.
Create root system container:
- Unencrypted::
zfs create \
-o canmount=off \
-o mountpoint=none \
rpool_$INST_UUID/$INST_ID
- Encrypted:
Pick a strong password. Once it is compromised, changing the password will not keep your
data safe. See ``zfs-change-key(8)`` for more info::
zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool_$INST_UUID/$INST_ID
Create other system datasets::
zfs create -o canmount=off -o mountpoint=none bpool_$INST_UUID/$INST_ID
zfs create -o canmount=off -o mountpoint=none bpool_$INST_UUID/$INST_ID/BOOT
zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/$INST_ID/ROOT
zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/$INST_ID/DATA
zfs create -o mountpoint=legacy -o canmount=noauto bpool_$INST_UUID/$INST_ID/BOOT/default
zfs create -o mountpoint=/ -o canmount=off rpool_$INST_UUID/$INST_ID/DATA/default
zfs create -o mountpoint=/ -o canmount=noauto rpool_$INST_UUID/$INST_ID/ROOT/default
zfs mount rpool_$INST_UUID/$INST_ID/ROOT/default
mkdir /mnt/boot
mount -t zfs bpool_$INST_UUID/$INST_ID/BOOT/default /mnt/boot
for i in {usr,var,var/lib};
do
zfs create -o canmount=off rpool_$INST_UUID/$INST_ID/DATA/default/$i
done
for i in {home,root,srv,usr/local,var/log,var/spool};
do
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/$i
done
chmod 750 /mnt/root
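
   A quick sanity check of the resulting layout may be useful at this
   point; a sketch::

     # every dataset created above should be listed, mounted under /mnt
     zfs list -o name,canmount,mountpoint -r rpool_$INST_UUID bpool_$INST_UUID
     findmnt -R /mnt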
#. Format and mount ESP::
for i in ${DISK[@]}; do
mkfs.vfat -n EFI ${i}-part1
mkdir -p /mnt/boot/efis/${i##*/}-part1
mount -t vfat ${i}-part1 /mnt/boot/efis/${i##*/}-part1
done
mkdir -p /mnt/boot/efi
mount -t vfat ${INST_PRIMARY_DISK}-part1 /mnt/boot/efi
#. Create optional user data datasets to exclude their data from root filesystem rollbacks::
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/games
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/www
# for GNOME
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/AccountsService
# for Docker
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/docker
# for NFS
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/nfs
# for LXC
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/lxc
# for LibVirt
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/libvirt
## other applications
# zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/$name
Add other datasets as needed, e.g. for PostgreSQL.
#. Install base packages::
dnf --installroot=/mnt --releasever=${INST_FEDORA_VER} -y install \
https://zfsonlinux.org/fedora/zfs-release.fc${INST_FEDORA_VER}.noarch.rpm \
@core grub2-efi-x64 grub2-pc-modules grub2-efi-x64-modules shim-x64 efibootmgr cryptsetup \
kernel kernel-devel
#. Install ZFS::
dnf --installroot=/mnt --releasever=${INST_FEDORA_VER} -y install zfs zfs-dracut
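
   To confirm that the target system actually received the packages,
   one possible check queries the chroot's RPM database from the live
   environment::

     rpm --root /mnt -q kernel zfs zfs-dracut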


@@ -0,0 +1,127 @@
.. highlight:: sh
System Configuration
======================
.. contents:: Table of Contents
:local:
#. Generate list of datasets for `zfs-mount-generator
<https://manpages.ubuntu.com/manpages/focal/man8/zfs-mount-generator.8.html>`__ to mount them at boot::
# tab-separated zfs properties
# see /etc/zfs/zed.d/history_event-zfs-list-cacher.sh
export \
PROPS="name,mountpoint,canmount,atime,relatime,devices,exec\
,readonly,setuid,nbmand,encroot,keylocation"
mkdir -p /mnt/etc/zfs/zfs-list.cache
zfs list -H -t filesystem -o $PROPS -r rpool_$INST_UUID > /mnt/etc/zfs/zfs-list.cache/rpool_$INST_UUID
sed -Ei "s|/mnt/?|/|" /mnt/etc/zfs/zfs-list.cache/*
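
   If this cache file ends up empty, datasets will not be mounted at
   boot; a quick check::

     # should print one line per dataset of the root pool
     cat /mnt/etc/zfs/zfs-list.cache/rpool_$INST_UUID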
#. Generate fstab::
echo bpool_$INST_UUID/$INST_ID/BOOT/default /boot zfs rw,xattr,posixacl 0 0 >> /mnt/etc/fstab
for i in ${DISK[@]}; do
echo UUID=$(blkid -s UUID -o value ${i}-part1) /boot/efis/${i##*/}-part1 vfat \
x-systemd.idle-timeout=1min,x-systemd.automount,noauto,umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab
done
echo UUID=$(blkid -s UUID -o value ${INST_PRIMARY_DISK}-part1) /boot/efi vfat \
x-systemd.idle-timeout=1min,x-systemd.automount,noauto,umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab
if [ "${INST_PARTSIZE_SWAP}" != "" ]; then
for i in ${DISK[@]}; do
echo ${i##*/}-part4-swap ${i}-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256,discard >> /mnt/etc/crypttab
echo /dev/mapper/${i##*/}-part4-swap none swap x-systemd.requires=cryptsetup.target,defaults 0 0 >> /mnt/etc/fstab
done
fi
By default, systemd will halt the boot process if any entry in ``/etc/fstab`` fails
to mount. This is unnecessary for mirrored EFI boot partitions.
With the above mount options, systemd will skip mounting them at boot
and only mount them on demand when accessed.
#. Configure dracut::
echo 'add_dracutmodules+=" zfs "' > /mnt/etc/dracut.conf.d/zfs.conf
#. Enable DHCP on all ethernet ports::
tee /mnt/etc/systemd/network/20-default.network <<EOF
[Match]
Name=en*
Name=eth*
[Network]
DHCP=yes
EOF
systemctl enable systemd-networkd systemd-resolved --root=/mnt
Customize this file if the system is not using a wired DHCP network.
See `Network Configuration <https://wiki.archlinux.org/index.php/Network_configuration>`__.
Alternatively, configure ``NetworkManager``.
#. Sync the hardware clock and enable time synchronization::
hwclock --systohc
systemctl enable systemd-timesyncd --root=/mnt
#. Interactively set locale, keymap, timezone, hostname and root password::
rm -f /mnt/etc/localtime
systemd-firstboot --root=/mnt --force --prompt --root-password=PASSWORD
This can be non-interactive, see man page for details::
rm -f /mnt/etc/localtime
systemd-firstboot --root=/mnt --force \
--locale="en_US.UTF-8" --locale-messages="en_US.UTF-8" \
--keymap=us --timezone="Europe/Berlin" --hostname=myHost \
--root-password=PASSWORD --root-shell=/bin/bash
``systemd-firstboot`` has bugs; the root password is therefore set again inside the chroot below.
#. Install a locale package; the example below is for the English locale::
dnf --installroot=/mnt install -y glibc-minimal-langpack glibc-langpack-en
Programs will show errors if no locale package is installed.
#. Enable ZFS services::
systemctl enable zfs-import-scan.service zfs-import.target zfs-mount zfs-zed zfs.target --root=/mnt
#. By default the SSH server is enabled, allowing root login by password.
   Disable the SSH server::
systemctl disable sshd --root=/mnt
#. Chroot::
echo "INST_PRIMARY_DISK=$INST_PRIMARY_DISK
INST_LINVAR=$INST_LINVAR
INST_UUID=$INST_UUID
INST_ID=$INST_ID
INST_VDEV=$INST_VDEV" > /mnt/root/chroot
echo DISK=\($(for i in ${DISK[@]}; do printf "$i "; done)\) >> /mnt/root/chroot
arch-chroot /mnt bash --login
unalias -a
#. Source variables::
source /root/chroot
#. Relabel filesystem on next boot::
fixfiles -F onboot
#. Set root password::
passwd
#. Build ZFS modules::
ls -1 /lib/modules \
| while read kernel_version; do
dkms autoinstall -k $kernel_version
done
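
   The result can be verified with DKMS; each kernel should report the
   zfs module as installed::

     dkms status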


@@ -0,0 +1,170 @@
.. highlight:: sh
Optional Configuration
======================
.. contents:: Table of Contents
:local:
Skip to `bootloader <5-bootloader.html>`__ section if
no optional configuration is needed.
Boot environment manager
~~~~~~~~~~~~~~~~~~~~~~~~
A boot environment is a dataset which contains a bootable
instance of an operating system.
`bieaz <https://gitlab.com/m_zhou/bieaz/-/releases/>`__ can
be installed to manage boot environments. Download and install
the prebuilt rpm file.
Encrypt boot pool
~~~~~~~~~~~~~~~~~~~
**WARNING**: Encrypting the boot pool may significantly increase boot time.
In a test installation, GRUB took nearly 2 minutes to decrypt the LUKS container.
#. LUKS password::
LUKS_PWD=secure-passwd
You will need to enter the same password for
each disk at boot. As the root pool key is
protected by this password, the previous warning
about password strength still applies.
Double-check the password here. A complete reinstallation is
needed if it is entered incorrectly.
#. Create encryption keys::
mkdir /etc/cryptkey.d/
chmod 700 /etc/cryptkey.d/
dd bs=32 count=1 if=/dev/urandom of=/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs
dd bs=32 count=1 if=/dev/urandom of=/etc/cryptkey.d/bpool_$INST_UUID-key-luks
#. Backup boot pool::
zfs snapshot -r bpool_$INST_UUID/$INST_ID@pre-luks
zfs send -Rv bpool_$INST_UUID/$INST_ID@pre-luks > /root/bpool_$INST_UUID-${INST_ID}-pre-luks
#. Unmount EFI partition::
umount /boot/efi
for i in ${DISK[@]}; do
umount /boot/efis/${i##*/}-part1
done
#. Destroy boot pool::
zpool destroy bpool_$INST_UUID
#. Create LUKS containers::
for i in ${DISK[@]}; do
cryptsetup luksFormat -q --type luks1 --key-file /etc/cryptkey.d/bpool_$INST_UUID-key-luks $i-part2
echo $LUKS_PWD | cryptsetup luksAddKey --key-file /etc/cryptkey.d/bpool_$INST_UUID-key-luks $i-part2
cryptsetup open ${i}-part2 ${i##*/}-part2-luks-bpool_$INST_UUID --key-file /etc/cryptkey.d/bpool_$INST_UUID-key-luks
echo ${i##*/}-part2-luks-bpool_$INST_UUID ${i}-part2 /etc/cryptkey.d/bpool_$INST_UUID-key-luks discard >> /etc/crypttab
done
GRUB 2.06 still does not have complete support for LUKS2, so LUKS1
is used here instead.
#. Embed key file in initrd::
echo "install_items+=\" \
/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs \
/etc/cryptkey.d/bpool_$INST_UUID-key-luks \"" \
> /etc/dracut.conf.d/rpool_$INST_UUID-${INST_ID}-key-zfs.conf
#. Recreate boot pool with mappers as vdev::
zpool create \
-d -o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
bpool_$INST_UUID \
$INST_VDEV \
$(for i in ${DISK[@]}; do
printf "/dev/mapper/${i##*/}-part2-luks-bpool_$INST_UUID ";
done)
#. Restore boot pool backup::
zfs recv bpool_${INST_UUID}/${INST_ID} < /root/bpool_$INST_UUID-${INST_ID}-pre-luks
rm /root/bpool_$INST_UUID-${INST_ID}-pre-luks
#. Mount boot dataset and EFI partitions::
mount /boot
mount /boot/efi
for i in ${DISK[@]}; do
mount /boot/efis/${i##*/}-part1
done
#. As keys are stored in initrd,
set secure permissions for ``/boot``::
chmod 700 /boot
#. Change the root pool key from passphrase to key file::
zfs change-key -l \
-o keylocation=file:///etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs \
-o keyformat=raw \
rpool_$INST_UUID/$INST_ID
#. Enable GRUB cryptodisk::
echo "GRUB_ENABLE_CRYPTODISK=y" >> /etc/default/grub
#. Import bpool service::
tee /etc/systemd/system/zfs-import-bpool-mapper.service <<EOF
[Unit]
Description=Import encrypted boot pool
Documentation=man:zpool(8)
DefaultDependencies=no
Requires=systemd-udev-settle.service
After=cryptsetup.target
Before=boot.mount
ConditionPathIsDirectory=/sys/module/zfs
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zpool import -aNd /dev/mapper
[Install]
WantedBy=zfs-import.target
EOF
systemctl enable zfs-import-bpool-mapper.service
#. **Important**: Back up root dataset key ``/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs``
to a secure location.
In the possible event of LUKS container corruption,
data on the root dataset will only be accessible
with this key.
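
   As a sketch, assuming removable backup media is mounted at
   ``/mnt/usb`` (a hypothetical path)::

     # /mnt/usb is an example mount point for your backup media
     cp /etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs /mnt/usb/
     sync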


@@ -0,0 +1,168 @@
.. highlight:: sh
Bootloader
======================
.. contents:: Table of Contents
:local:
Apply workarounds
~~~~~~~~~~~~~~~~~~~~
Currently GRUB has multiple compatibility problems with ZFS,
especially with regard to newer ZFS features.
Workarounds have to be applied.
#. grub2-probe fails to get canonical path
When persistent device names ``/dev/disk/by-id/*`` are used
with ZFS, GRUB will fail to resolve the path of the boot pool
device. Error::
# /usr/bin/grub2-probe: error: failed to get canonical path of `/dev/virtio-pci-0000:06:00.0-part3'.
Solution::
echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh
source /etc/profile.d/zpool_vdev_name_path.sh
#. Pool name missing
See `this bug report <https://savannah.gnu.org/bugs/?59614>`__.
Root pool name is missing from ``root=ZFS=rpool_$INST_UUID/ROOT/default``
kernel cmdline in generated ``grub.cfg`` file.
A workaround is to replace the pool name detection with ``zdb``
command::
sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" /etc/grub.d/10_linux
Install GRUB
~~~~~~~~~~~~~~~~~~~~
#. If using a virtio disk, add the driver to the initrd::
echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf
#. Generate initrd::
rm -f /etc/zfs/zpool.cache
touch /etc/zfs/zpool.cache
chmod a-w /etc/zfs/zpool.cache
chattr +i /etc/zfs/zpool.cache
ls -1 /lib/modules \
| while read kernel_version; do
dracut --force --kver $kernel_version
done
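
   Optionally verify that the zfs dracut module was included in each
   image; a sketch using ``lsinitrd`` from dracut::

     ls -1 /lib/modules \
     | while read kernel_version; do
       lsinitrd -m /boot/initramfs-${kernel_version}.img | grep zfs
     done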
#. When in doubt, install both legacy boot
and EFI.
#. Disable BLS::
echo "GRUB_ENABLE_BLSCFG=false" >> /etc/default/grub
#. Create GRUB boot directory, in ESP and boot pool::
mkdir -p /boot/efi/EFI/fedora # EFI GRUB dir
mkdir -p /boot/efi/EFI/fedora/grub2 # legacy GRUB dir
mkdir -p /boot/grub2
Boot environment-specific configuration (kernel, etc)
is stored in ``/boot/grub2/grub.cfg``, enabling rollback.
#. If using legacy booting, install GRUB to every disk::
for i in ${DISK[@]}; do
grub2-install --boot-directory /boot/efi/EFI/fedora --target=i386-pc $i
done
#. If using EFI::
for i in ${DISK[@]}; do
efibootmgr -cgp 1 -l "\EFI\fedora\shimx64.efi" \
-L "fedora-${i##*/}" -d ${i}
done
cp -r /usr/lib/grub/x86_64-efi/ /boot/efi/EFI/fedora
#. Generate GRUB Menu::
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
cp /boot/efi/EFI/fedora/grub.cfg /boot/efi/EFI/fedora/grub2/grub.cfg
cp /boot/efi/EFI/fedora/grub.cfg /boot/grub2/grub.cfg
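
   As a sanity check for the pool-name workaround applied earlier, the
   generated kernel command line can be inspected; a sketch::

     # the root pool name should appear after "root=ZFS="
     grep -m1 -o 'root=ZFS=[^ ]*' /boot/grub2/grub.cfg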
#. For both legacy and EFI booting: mirror ESP content::
ESP_MIRROR=$(mktemp -d)
cp -r /boot/efi/EFI $ESP_MIRROR
for i in /boot/efis/*; do
cp -r $ESP_MIRROR/EFI $i
done
#. Notes for GRUB on Fedora
To support Secure Boot, GRUB has been heavily modified by Fedora,
namely:
- ``grub2-install`` is `disabled for UEFI <https://bugzilla.redhat.com/show_bug.cgi?id=1917213>`__
- Only a static, signed version of the bootloader is copied to the EFI system partition
- This signed bootloader does not have built-in support for either ZFS or LUKS containers
- This signed bootloader only loads configuration from ``/boot/efi/EFI/fedora/grub.cfg``
Unrelated to Secure Boot, GRUB has also been modified to provide optional
support for `systemd bootloader specification (bls) <https://systemd.io/BOOT_LOADER_SPECIFICATION/>`__.
Currently ``blscfg.mod`` is incompatible with root on ZFS.
Also see `Fedora docs for GRUB
<https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/>`__.
Finish Installation
~~~~~~~~~~~~~~~~~~~~
#. Exit chroot::
exit
#. Take a snapshot of the clean installation for future use::
zfs snapshot -r rpool_$INST_UUID/$INST_ID@install
zfs snapshot -r bpool_$INST_UUID/$INST_ID@install
#. Unmount EFI system partition::
umount /mnt/boot/efi
umount /mnt/boot/efis/*
#. Export pools::
zpool export bpool_$INST_UUID
zpool export rpool_$INST_UUID
#. Reboot::
reboot
#. After reboot, consider adding a normal user::
myUser=UserName
zfs create $(df --output=source /home | tail -n +2)/${myUser}
useradd -MUd /home/${myUser} -c 'My Name' ${myUser}
zfs allow -u ${myUser} mount,snapshot,destroy $(df --output=source /home | tail -n +2)/${myUser}
chown -R ${myUser}:${myUser} /home/${myUser}
chmod 700 /home/${myUser}
restorecon /home/${myUser}
passwd ${myUser}
Set up a cron job to snapshot the user's home directory every day::
dnf install cronie
systemctl enable --now crond
crontab -eu ${myUser}
#@daily /usr/sbin/zfs snap $(df --output=source /home/${myUser} | tail -n +2)@$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |tr -dc 'a-z0-9' | cut -c-6)
zfs list -t snapshot -S creation $(df --output=source /home/${myUser} | tail -n +2)
Install package groups::
dnf group list # query package groups
dnf group install 'i3 Desktop'
dnf group install 'Fedora Workstation' # GNOME
dnf group install 'Web Server'


@@ -0,0 +1,226 @@
.. highlight:: sh
Recovery
======================
.. contents:: Table of Contents
:local:
GRUB Tips
-------------
Boot from GRUB rescue
~~~~~~~~~~~~~~~~~~~~~~~
If the bootloader is damaged, it's still possible
to boot the computer with a GRUB rescue image.
This section is also applicable if you are in
``grub rescue>``.
#. On another computer, generate rescue image with::
pacman -S --needed mtools libisoburn grub
grub-mkrescue -o grub-rescue.img
dd if=grub-rescue.img of=/dev/your-usb-stick
Boot computer from the rescue media.
Both legacy and EFI mode are supported.
Skip this step if you are in GRUB rescue.
#. List available disks with ``ls`` command::
grub> ls (hd # press tab
Possible devices are:
hd0 hd1 hd2 hd3
If you are dropped to GRUB rescue instead of
booting from the GRUB rescue image, the boot disk can be
found with::
echo $root
# crypto0
# hd0,gpt2
GRUB configuration is loaded from::
echo $prefix
# (crypto0)/sys/BOOT/default@/grub
# (hd0,gpt2)/sys/BOOT/default@/grub
#. List partitions by pressing tab key:
.. code-block:: text
grub> ls (hd0 # press tab
Possible partitions are:
Device hd0: No known filesystem detected - Sector size 512B - Total size 20971520KiB
Partition hd0,gpt1: Filesystem type fat - Label `EFI', UUID 0DF5-3A76 - Partition start at 1024KiB - Total size 1048576KiB
Partition hd0,gpt2: No known filesystem detected - Partition start at 1049600KiB - Total size 4194304KiB
- If boot pool is encrypted:
Unlock it with ``cryptomount``::
grub> insmod luks
grub> cryptomount hd0,gpt2
Attempting to decrypt master key...
Enter passphrase for hd0,gpt2 (af5a240e13e24483acf02600d61e0f36):
Slot 1 opened
Unlocked LUKS container is ``(crypto0)``:
.. code-block:: text
grub> ls (crypto0)
Device crypto0: Filesystem type zfs - Label `bpool_ip3tdb' - Last modification
time 2021-05-03 12:14:08 Monday, UUID f14d7bdf89fe21fb - Sector size 512B -
Total size 4192256KiB
- If boot pool is not encrypted:
.. code-block:: text
grub> ls (hd0,gpt2)
Device hd0,gpt2: Filesystem type zfs - Label `bpool_ip3tdb' - Last modification
time 2021-05-03 12:14:08 Monday, UUID f14d7bdf89fe21fb - Sector size 512B -
Total size 4192256KiB
#. List boot environments nested inside ``bpool/$INST_ID/BOOT``::
grub> ls (crypto0)/sys/BOOT
@/ default/ be0/
#. Instruct GRUB to load configuration from ``be0`` boot environment
then enter normal mode::
grub> prefix=(crypto0)/sys/BOOT/be0/@/grub
grub> insmod normal
grub> normal
#. GRUB menu should now appear.
#. After entering system, `reinstall GRUB <#grub-installation>`__.
Switch GRUB prefix when disk fails
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are using a LUKS-encrypted boot pool with multiple disks
and the primary disk fails, GRUB will fail to load its configuration.
If there's still enough redundancy for the boot pool, try fixing
GRUB with the following method:
#. Ensure ``Slot 1 opened`` message
is shown
.. code-block:: text
Welcome to GRUB!
error: no such cryptodisk found.
Attempting to decrypt master key...
Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a):
Slot 1 opened
error: disk `cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf' not found.
Entering rescue mode...
grub rescue>
If ``error: access denied.`` is shown,
try re-entering the password with::
grub rescue> cryptomount hd0,gpt2
#. Check prefix::
grub rescue> set
# prefix=(cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf)/sys/BOOT/be0@/grub
# root=cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf
#. Set correct ``prefix`` and ``root`` by replacing
``cryptouuid/UUID`` with ``crypto0``::
grub rescue> prefix=(crypto0)/sys/BOOT/default@/grub
grub rescue> root=crypto0
#. Boot GRUB::
grub rescue> insmod normal
grub rescue> normal
GRUB should then boot normally.
#. After entering the system, edit ``/etc/fstab`` to promote
   one of the backup ESPs to ``/boot/efi``.
#. Make the change to ``prefix`` and ``root``
permanent by `reinstalling GRUB <#grub-installation>`__.
Access system in chroot
-----------------------
#. Go through `preparation <1-preparation.html>`__.
#. Import and unlock root and boot pool::
zpool import -NR /mnt rpool_$INST_UUID
zpool import -NR /mnt bpool_$INST_UUID
If using password::
zfs load-key rpool_$INST_UUID/$INST_ID
If using keyfile::
zfs load-key -L file:///path/to/keyfile rpool_$INST_UUID/$INST_ID
#. Find the current boot environment::
zfs list
BE=default
#. Mount root filesystem::
zfs mount rpool_$INST_UUID/$INST_ID/ROOT/$BE
#. chroot into the system::
arch-chroot /mnt /bin/bash --login
zfs mount -a
mount -a
#. Finish rescue. See `finish installation <#finish-installation>`__.
Backup and migrate existing installation
----------------------------------------
With the help of `zfs send
<https://openzfs.github.io/openzfs-docs/man/8/zfs-send.8.html>`__
it is relatively easy to perform a system backup and migration.
#. Create a snapshot of root file system::
zfs snapshot -r rpool/fedora@backup
zfs snapshot -r bpool/fedora@backup
#. Save snapshot to a file or pipe to SSH::
zfs send -Rv rpool/fedora@backup > /backup/fedora-rpool
zfs send -Rv bpool/fedora@backup > /backup/fedora-bpool
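
   Alternatively, stream directly to another host over SSH; a sketch
   where host ``backup-host`` and pool ``backuppool`` are
   hypothetical::

     zfs send -Rv rpool/fedora@backup | ssh root@backup-host zfs recv -u backuppool/fedora-rpool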
#. Re-create partitions and root/boot
pool on target system.
#. Restore backup::
zfs recv rpool_new/fedora < /backup/fedora-rpool
zfs recv bpool_new/fedora < /backup/fedora-bpool
#. Chroot and reinstall bootloader.
#. Update pool name in ``/etc/fstab``, ``/boot/grub/grub.cfg``
and ``/etc/zfs/zfs-list.cache/*``.
#. Update device name, etc, in ``/etc/fstab`` and ``/etc/crypttab``.


@@ -0,0 +1,69 @@
Fedora
======
Contents
--------
.. toctree::
:maxdepth: 1
:glob:
*
Installation
------------
Note: this is for installing ZFS on an existing Fedora
installation. To use ZFS as the root file system,
see below.
#. Add ZFS repo::
dnf install -y https://zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm
#. Install ZFS packages::
dnf install -y kernel-devel zfs
#. Load kernel module::
modprobe zfs
If the kernel module can not be loaded, your kernel version
might not yet be supported by OpenZFS. Try installing
an LTS kernel::
dnf copr enable -y kwizart/kernel-longterm-5.4
dnf install -y kernel-longterm kernel-longterm-devel
# reboot to new LTS kernel
modprobe zfs
#. By default ZFS kernel modules are loaded upon detecting a pool.
To always load the modules at boot::
echo zfs > /etc/modules-load.d/zfs.conf
Testing Repo
--------------------
The testing repository, which is disabled by default, contains
the latest version of OpenZFS, which is under active development.
These packages
**should not** be used on production systems.
::
dnf config-manager --enable zfs-testing
dnf install zfs
Root on ZFS
-----------
ZFS can be used as the root file system for Fedora.
An installation guide is available below.
`Start here <Root%20on%20ZFS/0-overview.html>`__.
.. toctree::
:maxdepth: 1
:glob:
Root on ZFS/*


@@ -13,7 +13,7 @@ documentation <https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/>`__
    Arch Linux/index
    Debian/index
-   Fedora
+   Fedora/index
    FreeBSD
    Gentoo <https://wiki.gentoo.org/wiki/ZFS>
    NixOS <https://nixos.wiki/wiki/NixOS_on_ZFS>