Updates and cleanup for Arch, RHEL, NixOS and Fedora

Signed-off-by: Maurice Zhou <ja@apvc.uk>
Maurice Zhou
2022-07-22 17:14:14 +02:00
committed by George Melikov
parent 5777295f0a
commit 2766cb7197
43 changed files with 937 additions and 4474 deletions

View File

@@ -1,11 +0,0 @@
Rocky Linux 8 Root on ZFS
=======================================
`Start here <RHEL%208-based%20distro%20Root%20on%20ZFS/0-overview.html>`__.
Contents
--------
.. toctree::
:maxdepth: 2
:glob:
RHEL 8-based distro Root on ZFS/*

View File

@@ -1,141 +0,0 @@
.. highlight:: sh
Overview
======================
This document describes how to install RHEL 8-based distro with ZFS as root
file system.
Caution
~~~~~~~
- With less than 4GB of RAM, DKMS might fail to build
in the live environment.
- This guide wipes entire physical disks. Back up existing data.
- `GRUB does not and
will not work on 4Kn drives with legacy (BIOS) booting.
<http://savannah.gnu.org/bugs/?46700>`__
Partition layout
~~~~~~~~~~~~~~~~
A GUID partition table (GPT) is used.
The EFI system partition is referred to as **ESP** in this document.
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Name | legacy boot | ESP | Boot pool | swap | root pool | remaining space |
+======================+======================+=======================+======================+=====================+=======================+=================+
| File system | | vfat | ZFS | swap | ZFS | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Size | 1M | 2G | 4G | depends on RAM size | | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Optional encryption | | *Secure Boot* | | plain dm-crypt | ZFS native encryption | |
| | | | | | | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Partition no. | 5 | 1 | 2 | 4 | 3 | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
| Mount point | | /boot/efi | /boot | | / | |
| | | /boot/efis/disk-part1 | | | | |
+----------------------+----------------------+-----------------------+----------------------+---------------------+-----------------------+-----------------+
Dataset layout
~~~~~~~~~~~~~~
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| Dataset | canmount | mountpoint | container | notes |
+===========================+======================+======================+=====================================+===========================================+
| bpool | off | /boot | contains sys | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool | off | / | contains sys | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys | off | none | contains BOOT | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys | off | none | contains ROOT | sys is encryptionroot |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys/BOOT | off | none | contains boot environments | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/ROOT | off | none | contains boot environments | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/DATA | off | none | contains placeholder "default" | |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/DATA/default    | off                  | /                    | contains user datasets              | child datasets inherit mountpoint         |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/DATA/default/ | on | /home (inherited) | no | |
| home | | | | user datasets, also called "shared |
| | | | | datasets", "persistent datasets"; also |
| | | | | include /var/lib, /srv, ... |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys/BOOT/default    | noauto               | /boot                | no                                  | noauto is used to switch BE. Because of   |
| | | | | noauto, must use fstab to mount |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/ROOT/default | noauto | / | no | mounted by initrd zfs hook |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| bpool/sys/BOOT/be1 | noauto | /boot | no | see bpool/sys/BOOT/default |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
| rpool/sys/ROOT/be1 | noauto | / | no | see rpool/sys/ROOT/default |
+---------------------------+----------------------+----------------------+-------------------------------------+-------------------------------------------+
Encryption
~~~~~~~~~~
- Swap
Swap is always encrypted. By default, it is encrypted
with plain dm-crypt, using a key generated from ``/dev/urandom``
at every boot, so swap content does not persist between reboots.
- Root pool
ZFS native encryption can be optionally enabled for ``rpool/sys``
and child datasets.
Be aware that ZFS native encryption does not
encrypt some dataset metadata.
It also does not change the master key when ``zfs change-key`` is invoked.
Therefore, if the password is compromised, wipe the disk to protect confidentiality.
See `zfs-load-key.8 <https://openzfs.github.io/openzfs-docs/man/8/zfs-load-key.8.html>`__
and `zfs-change-key.8 <https://openzfs.github.io/openzfs-docs/man/8/zfs-change-key.8.html>`__
for more information regarding ZFS native encryption.
Encryption is enabled at dataset creation and cannot be disabled later; see the verification sketch after this list.
- Boot pool
The boot pool cannot be encrypted.
- Bootloader
The bootloader cannot be encrypted.
However, with Secure Boot, the bootloader
can be verified by the motherboard firmware to be untampered,
which should be sufficient for most purposes.
Secure Boot is not supported out-of-the-box because the ZFS module is unsigned.
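On an installed system, the effective encryption settings can be
verified with a read-only query; a sketch, using the generic names
from the tables above (actual pools carry a unique suffix)::
zfs get encryption,keyformat,keylocation,encryptionroot rpool/sys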
Booting with disk failure
~~~~~~~~~~~~~~~~~~~~~~~~~
This guide is written with disk failure in mind.
If some disks used in the Root on ZFS pool fail, but
sufficient redundancy still exists for both the root pool
and the boot pool, the system will still boot normally.
Swap partitions on the failed disks will fail to mount
after a 1m30s timeout.
This is useful for use cases such
as an unattended remote server.
Example:
- System has ``n > 1`` disks
- Installed with a mirrored setup
- A mirrored setup can tolerate up to ``n-1`` disk failures
- Disconnect one or more disks, keeping at least
one disk connected
- System still boots, but fails to mount the swap and
EFI partitions on the disconnected disks
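If a failed disk is later replaced, the pools can be healed online;
a sketch, where ``ata-FAILED`` and ``ata-NEW`` are hypothetical device
names and the new disk has first been partitioned the same way::
zpool status -x
zpool replace rpool /dev/disk/by-id/ata-FAILED-part3 /dev/disk/by-id/ata-NEW-part3
zpool replace bpool /dev/disk/by-id/ata-FAILED-part2 /dev/disk/by-id/ata-NEW-part2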

View File

@@ -1,149 +0,0 @@
.. highlight:: sh
Preparation
======================
.. contents:: Table of Contents
:local:
#. Disable Secure Boot. ZFS modules cannot be loaded if Secure Boot is enabled.
#. Download a variant of `Rocky Linux 8.5 Live
ISO <https://dl.rockylinux.org/pub/rocky/8.5/Live/x86_64/>`__ and boot from it.
#. Set a root password or populate ``/root/.ssh/authorized_keys``.
#. Start SSH server::
echo PermitRootLogin yes >> /etc/ssh/sshd_config
systemctl restart sshd
#. Connect from another computer::
ssh root@192.168.1.19
#. Temporarily set SELinux to permissive in live environment::
setenforce 0
SELinux will be enabled on the installed system.
#. Optional: If mirror speed is slow, you can manually pick a fixed mirror
from `mirrorlist <https://mirrors.rockylinux.org/mirrormanager/mirrors>`__
and apply it::
sed -i 's|^mirrorlist=|#mirrorlist=|g' /etc/yum.repos.d/*
sed -i 's|^#baseurl=|baseurl=|g' /etc/yum.repos.d/*
sed -i 's|dl.rockylinux.org/$contentdir|mirrors.sjtug.sjtu.edu.cn/rocky|g' /etc/yum.repos.d/*
#. Add ZFS repo::
source /etc/os-release
RHEL_ZFS_REPO=https://zfsonlinux.org/epel/zfs-release.el${VERSION_ID/./_}.noarch.rpm
dnf install -y $RHEL_ZFS_REPO
#. Install ZFS packages::
dnf install -y epel-release
dnf config-manager --disable zfs
dnf config-manager --enable zfs-kmod
dnf install -y zfs
#. Load kernel modules::
modprobe zfs
#. Install helper script and partition tool::
rpm -ivh --nodeps https://dl.fedoraproject.org/pub/fedora/linux/releases/36/Everything/x86_64/os/Packages/a/arch-install-scripts-24-3.fc36.noarch.rpm
dnf install -y gdisk dosfstools
#. Set RHEL major version::
INST_RHEL_VER=${VERSION_ID%%.*}
#. Unique pool suffix. ZFS expects pool names to be
unique, so it is recommended to create
pools with a unique suffix::
INST_UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
#. Identify this installation in the ZFS filesystem path::
INST_ID=rhel
#. Target disk
List available disks with::
ls /dev/disk/by-id/*
If using virtio as disk bus, use
``/dev/disk/by-path/*``.
Declare disk array::
DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
For single disk installation, use::
DISK='/dev/disk/by-id/disk1'
#. Choose a primary disk. This disk will be used
for the primary EFI partition, defaulting to the
first disk in the array::
INST_PRIMARY_DISK=$(echo $DISK | cut -f1 -d\ )
#. Set vdev topology; possible values are:
- (not set, single disk or striped; no redundancy)
- mirror
- raidz1
- raidz2
- raidz3
::
INST_VDEV=
This will create a single vdev with the topology of your choice.
It is also possible to manually create a pool with multiple vdevs; a synopsis, where ``[options]`` and the disk names are placeholders::
zpool create [options] \
poolName \
mirror sda sdb \
raidz2 sdc ... \
raidz3 sde ... \
spare sdf ...
Notice the cost of parity when using RAID-Z. See
`here <https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz>`__
and `here <https://docs.google.com/spreadsheets/d/1tf4qx1aMJp8Lo_R6gpT689wTjHv6CGVElrPqTA0w_ZY/>`__.
For the boot pool, which must be readable by GRUB, a mirrored vdev should always be used for maximum redundancy.
This guide uses a mirrored bpool for multi-disk setups.
Refer to `zpoolconcepts <https://openzfs.github.io/openzfs-docs/man/7/zpoolconcepts.7.html>`__
and `zpool-create <https://openzfs.github.io/openzfs-docs/man/8/zpool-create.8.html>`__
man pages for details.
#. Set partition sizes:
Set ESP size::
INST_PARTSIZE_ESP=2 # in GB
Set boot pool size. To avoid running out of space while using
boot environments, the minimum is 4GB. Adjust the size if you
intend to use multiple kernels/distros::
INST_PARTSIZE_BPOOL=4
Set swap size. It's `recommended <https://chrisdown.name/2018/01/02/in-defence-of-swap.html>`__
to set up a swap partition. If you intend to use hibernation,
the minimum should be no less than RAM size. Skip if swap is not needed::
INST_PARTSIZE_SWAP=8
Set root pool size; all remaining disk space is used if not set::
INST_PARTSIZE_RPOOL=

View File

@@ -1,271 +0,0 @@
.. highlight:: sh
System Installation
======================
.. contents:: Table of Contents
:local:
#. Optional: wipe solid-state drives with the generic tool
`blkdiscard <https://utcc.utoronto.ca/~cks/space/blog/linux/ErasingSSDsWithBlkdiscard>`__,
to clean previous partition tables and improve performance.
All content will be irrevocably destroyed::
for i in ${DISK}; do
blkdiscard $i &
done
wait
This is a quick operation and should complete in under a
minute.
For other device-specific methods, see
`Memory cell clearing <https://wiki.archlinux.org/title/Solid_state_drive/Memory_cell_clearing>`__.
#. Partition the disks.
See `Overview <0-overview.html>`__ for details::
for i in ${DISK}; do
sgdisk --zap-all $i
sgdisk -n1:1M:+${INST_PARTSIZE_ESP}G -t1:EF00 $i
sgdisk -n2:0:+${INST_PARTSIZE_BPOOL}G -t2:BE00 $i
if [ "${INST_PARTSIZE_SWAP}" != "" ]; then
sgdisk -n4:0:+${INST_PARTSIZE_SWAP}G -t4:8200 $i
fi
if [ "${INST_PARTSIZE_RPOOL}" = "" ]; then
sgdisk -n3:0:0 -t3:BF00 $i
else
sgdisk -n3:0:+${INST_PARTSIZE_RPOOL}G -t3:BF00 $i
fi
sgdisk -a1 -n5:24K:+1000K -t5:EF02 $i
done
#. Create boot pool::
disk_num=0; for i in $DISK; do disk_num=$(( $disk_num + 1 )); done
if [ $disk_num -gt 1 ]; then INST_VDEV_BPOOL=mirror; fi
zpool create \
-d -o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R /mnt \
bpool_$INST_UUID \
$INST_VDEV_BPOOL \
$(for i in ${DISK}; do
printf "$i-part2 ";
done)
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features.
Features enabled with ``-o compatibility=grub2`` can be seen
`here <https://github.com/openzfs/zfs/blob/master/cmd/zpool/compatibility.d/grub2>`__.
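To inspect which features actually ended up enabled on the created
boot pool, a read-only check::
zpool get all bpool_$INST_UUID | grep feature@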
#. Create root pool::
zpool create \
-o ashift=12 \
-o autotrim=on \
-R /mnt \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
rpool_$INST_UUID \
$INST_VDEV \
$(for i in ${DISK}; do
printf "$i-part3 ";
done)
**Notes:**
- The use of ``ashift=12`` is recommended here because many drives
today have 4 KiB (or larger) physical sectors, even though they
present 512 B logical sectors. Also, a future replacement drive may
have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
or 4 KiB logical sectors (in which case ``ashift=12`` is required).
- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
do not want this, remove that option, but later add
``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
for ``/var/log``, as `journald requires ACLs
<https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__
- Setting ``normalization=formD`` eliminates some corner cases relating
to UTF-8 filename normalization. It also implies ``utf8only=on``,
which means that only UTF-8 filenames are allowed. If you care to
support non-UTF-8 filenames, do not use this option. For a discussion
of why requiring UTF-8 filenames may be a bad idea, see `The problems
with enforced UTF-8 only filenames
<http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__.
- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you
want to tune it (e.g. ``-o recordsize=1M``), see `these
<https://jrs-s.net/2019/04/03/on-zfs-recordsize/>`__ `various
<http://blog.programster.org/zfs-record-size>`__ `blog
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFileRecordsizeGrowth>`__
`posts
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSRecordsizeAndCompression>`__.
- Setting ``relatime=on`` is a middle ground between classic POSIX
``atime`` behavior (with its significant performance impact) and
``atime=off`` (which provides the best performance by completely
disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
the default for other filesystems. See `Red Hat's documentation
<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__
for further information.
- Setting ``xattr=sa`` `vastly improves the performance of extended
attributes
<https://github.com/zfsonlinux/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355>`__.
Inside ZFS, extended attributes are used to implement POSIX ACLs.
Extended attributes can also be used by user-space applications.
`They are used by some desktop GUI applications.
<https://en.wikipedia.org/wiki/Extended_file_attributes#Linux>`__
`They can be used by Samba to store Windows ACLs and DOS attributes;
they are required for a Samba Active Directory domain controller.
<https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs>`__
Note that ``xattr=sa`` is `Linux-specific
<https://openzfs.org/wiki/Platform_code_differences>`__. If you move your
``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux,
extended attributes will not be readable (though your data will be). If
portability of extended attributes is important to you, omit the
``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole
pool, it is probably fine to use it for ``/var/log``.
- Make sure to include the ``-part3`` portion of the drive path. If you
forget that, you are specifying the whole disk, which ZFS will then
re-partition, and you will lose the bootloader partition(s).
#. This section implements the dataset layout as described in the `overview <0-overview.html>`__.
Create root system container:
- Unencrypted::
zfs create \
-o canmount=off \
-o mountpoint=none \
rpool_$INST_UUID/$INST_ID
- Encrypted:
Pick a strong password. Once it is compromised, changing the password will not keep your
data safe. See ``zfs-change-key(8)`` for more info::
zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool_$INST_UUID/$INST_ID
Create other system datasets::
zfs create -o canmount=off -o mountpoint=none bpool_$INST_UUID/$INST_ID
zfs create -o canmount=off -o mountpoint=none bpool_$INST_UUID/$INST_ID/BOOT
zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/$INST_ID/ROOT
zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/$INST_ID/DATA
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$INST_UUID/$INST_ID/BOOT/default
zfs create -o mountpoint=/ -o canmount=off rpool_$INST_UUID/$INST_ID/DATA/default
zfs create -o mountpoint=/ -o canmount=noauto rpool_$INST_UUID/$INST_ID/ROOT/default
zfs mount rpool_$INST_UUID/$INST_ID/ROOT/default
zfs mount bpool_$INST_UUID/$INST_ID/BOOT/default
for i in {usr,var,var/lib};
do
zfs create -o canmount=off rpool_$INST_UUID/$INST_ID/DATA/default/$i
done
for i in {home,root,srv,usr/local,var/log,var/spool};
do
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/$i
done
chmod 750 /mnt/root
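Optionally verify that the resulting layout matches the
`overview <0-overview.html>`__; a read-only check::
zfs list -o name,canmount,mountpoint -r rpool_$INST_UUID bpool_$INST_UUID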
#. Format and mount ESP::
for i in ${DISK}; do
mkfs.vfat -n EFI ${i}-part1
mkdir -p /mnt/boot/efis/${i##*/}-part1
mount -t vfat ${i}-part1 /mnt/boot/efis/${i##*/}-part1
done
mkdir -p /mnt/boot/efi
mount -t vfat ${INST_PRIMARY_DISK}-part1 /mnt/boot/efi
#. Create a separate user dataset at ``/home/User``; the dataset name can be
changed later::
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/home/User
If needed, snapshot, rollback and other related permissions can be
delegated to the user later, as sketched below.
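A delegation sketch (permissions and the user name are examples, to
be run on the installed system)::
zfs allow -u User snapshot,rollback,mount rpool_$INST_UUID/$INST_ID/DATA/default/home/User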
#. Create optional program-data datasets to exclude their data from rollbacks::
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/games
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/www
# for GNOME
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/AccountsService
# for Docker
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/docker
# for NFS
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/nfs
# for LXC
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/lxc
# for LibVirt
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/libvirt
# other applications
# zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/$name
Add other datasets when needed, such as PostgreSQL.
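For example, PostgreSQL on RHEL stores its data under
``/var/lib/pgsql``; a corresponding dataset would be (a sketch)::
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/pgsql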
#. Install base packages::
dnf --installroot=/mnt --releasever=${INST_RHEL_VER} -y install \
${RHEL_ZFS_REPO} @core epel-release grub2-efi-x64 grub2-pc-modules \
grub2-efi-x64-modules shim-x64 efibootmgr \
kernel kernel-devel python3-dnf-plugin-post-transaction-actions
dnf config-manager --installroot=/mnt --disable zfs
dnf config-manager --installroot=/mnt --enable zfs-kmod
dnf install --installroot=/mnt -y zfs zfs-dracut
#. Update zfs repo if a newer release is available::
source /mnt/etc/os-release
RHEL_ZFS_REPO_NEW=https://zfsonlinux.org/epel/zfs-release.el${VERSION_ID/./_}.noarch.rpm
dnf install --installroot=/mnt -y $RHEL_ZFS_REPO_NEW || true
#. Optional: enable boot environment support and dnf integration::
dnf --installroot=/mnt copr enable -y m0p/bieaz
dnf --installroot=/mnt install -y bieaz python3-dnf-plugin-rozb3
If a multi-disk setup is used, enable multi-disk
support in ``/mnt/etc/bieaz.cfg``.

View File

@@ -1,103 +0,0 @@
.. highlight:: sh
System Configuration
======================
.. contents:: Table of Contents
:local:
#. Generate fstab::
genfstab -U /mnt | sed 's;zfs[[:space:]]*;zfs zfsutil,;g' | grep "zfs zfsutil" >> /mnt/etc/fstab
for i in ${DISK}; do
echo UUID=$(blkid -s UUID -o value ${i}-part1) /boot/efis/${i##*/}-part1 vfat \
x-systemd.idle-timeout=1min,x-systemd.automount,noauto,umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab
done
echo UUID=$(blkid -s UUID -o value ${INST_PRIMARY_DISK}-part1) /boot/efi vfat \
x-systemd.idle-timeout=1min,x-systemd.automount,noauto,umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab
if [ "${INST_PARTSIZE_SWAP}" != "" ]; then
for i in ${DISK}; do
echo ${i##*/}-part4-swap ${i}-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256,discard >> /mnt/etc/crypttab
echo /dev/mapper/${i##*/}-part4-swap none swap x-systemd.requires=cryptsetup.target,defaults 0 0 >> /mnt/etc/fstab
done
fi
By default, systemd will halt the boot process if any entry in ``/etc/fstab`` fails
to mount. This is unnecessary for mirrored EFI boot partitions.
With the above mount options, systemd will skip mounting them at boot
and only mount them on demand when accessed.
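After the first boot, the generated automount units can be listed
with::
systemctl list-units --type=automount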
#. Configure dracut::
echo 'add_dracutmodules+=" zfs "' > /mnt/etc/dracut.conf.d/zfs.conf
#. Force load mpt3sas module if used::
if grep mpt3sas /proc/modules; then
echo 'forced_drivers+=" mpt3sas "' >> /mnt/etc/dracut.conf.d/zfs.conf
fi
#. Interactively set locale, keymap, timezone, hostname and root password::
rm -f /mnt/etc/localtime
systemd-firstboot --root=/mnt --prompt --root-password=PASSWORD
This can be non-interactive, see man page for details::
rm -f /mnt/etc/localtime
systemd-firstboot --root=/mnt \
--locale="en_US.UTF-8" --locale-messages="en_US.UTF-8" \
--keymap=us --timezone="Europe/Berlin" --hostname=myHost \
--root-password=PASSWORD
``systemd-firstboot`` has bugs; the root password is set again below.
#. Generate host id::
zgenhostid -f -o /mnt/etc/hostid
#. Install locale packages; for example, for the English locale::
dnf --installroot=/mnt install -y glibc-minimal-langpack glibc-langpack-en
Programs will show errors if no locale package is installed.
#. Enable ZFS services::
systemctl enable zfs-import-scan.service zfs-import.target zfs-zed zfs.target --root=/mnt
systemctl disable zfs-mount --root=/mnt
At boot, datasets on the root pool are mounted via ``/etc/fstab``,
which controls the mounting process more precisely than ``zfs-mount.service``.
#. By default the SSH server is enabled, allowing root login by password.
Disable the SSH server and enable the firewall::
systemctl disable sshd --root=/mnt
systemctl enable firewalld --root=/mnt
#. Chroot::
echo "INST_PRIMARY_DISK=$INST_PRIMARY_DISK
INST_UUID=$INST_UUID
INST_ID=$INST_ID
unalias -a
TERM=xterm
INST_VDEV=$INST_VDEV
DISK=\"$DISK\"" > /mnt/root/chroot
history -w /mnt/home/sys-install-pre-chroot.txt
arch-chroot /mnt bash --login
#. Source variables::
source /root/chroot
#. For SELinux, relabel filesystem on reboot::
fixfiles -F onboot
#. Set root password::
passwd

View File

@@ -1,258 +0,0 @@
.. highlight:: sh
Bootloader
======================
.. contents:: Table of Contents
:local:
Apply workarounds
~~~~~~~~~~~~~~~~~~~~
Currently GRUB has multiple compatibility problems with ZFS,
especially with regard to newer ZFS features.
Workarounds must be applied.
#. grub2-probe fails to get canonical path
When persistent device names ``/dev/disk/by-id/*`` are used
with ZFS, GRUB will fail to resolve the path of the boot pool
device. Error::
# /usr/bin/grub2-probe: error: failed to get canonical path of `/dev/virtio-pci-0000:06:00.0-part3'.
Solution::
echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh
source /etc/profile.d/zpool_vdev_name_path.sh
#. Pool name missing
See `this bug report <https://savannah.gnu.org/bugs/?59614>`__.
Root pool name is missing from ``root=ZFS=rpool_$INST_UUID/ROOT/default``
kernel cmdline in generated ``grub.cfg`` file.
A workaround is to replace the pool name detection with ``zdb``
command::
sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" /etc/grub.d/10_linux
Install GRUB
~~~~~~~~~~~~~~~~~~~~
#. If using virtio disk, add driver to initrd::
echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf
#. Generate initrd::
rm -f /etc/zfs/zpool.cache
touch /etc/zfs/zpool.cache
chmod a-w /etc/zfs/zpool.cache
chattr +i /etc/zfs/zpool.cache
for directory in /lib/modules/*; do
kernel_version=$(basename $directory)
dracut --force --kver $kernel_version
done
#. Disable BLS (boot loader specification) support, which is currently incompatible with root on ZFS::
echo 'GRUB_ENABLE_BLSCFG=false' >> /etc/default/grub
#. Create GRUB boot directories, in the ESP and on the boot pool::
mkdir -p /boot/efi/EFI/rocky # EFI GRUB dir
mkdir -p /boot/efi/EFI/rocky/grub2 # legacy GRUB dir
mkdir -p /boot/grub2
Boot environment-specific configuration (kernel, etc.)
is stored in ``/boot/grub2/grub.cfg``, enabling rollback.
#. When in doubt, install for both legacy boot
and EFI.
#. If using legacy booting, install GRUB to every disk::
for i in ${DISK}; do
grub2-install --boot-directory /boot/efi/EFI/rocky --target=i386-pc $i
done
#. If using EFI::
for i in ${DISK}; do
efibootmgr -cgp 1 -l "\EFI\rocky\shimx64.efi" \
-L "rocky-${i##*/}" -d ${i}
done
cp -r /usr/lib/grub/x86_64-efi/ /boot/efi/EFI/rocky
#. Generate GRUB Menu:
Apply workaround::
tee /etc/grub.d/09_fix_root_on_zfs <<EOF
#!/bin/sh
echo 'insmod zfs'
echo 'set root=(hd0,gpt2)'
EOF
chmod +x /etc/grub.d/09_fix_root_on_zfs
Generate menu::
grub2-mkconfig -o /boot/efi/EFI/rocky/grub.cfg
cp /boot/efi/EFI/rocky/grub.cfg /boot/efi/EFI/rocky/grub2/grub.cfg
cp /boot/efi/EFI/rocky/grub.cfg /boot/grub2/grub.cfg
The following errors may be safely ignored:
- ``device-mapper: reload ioctl on osprober-linux-sda2 (253:0) failed: Device or resource busy``
This is caused by os-prober probing for operating systems on the partitions used by ZFS.
It is harmless, but os-prober can be disabled with::
echo GRUB_DISABLE_OS_PROBER=true >> /etc/default/grub
- ``/usr/sbin/grub2-probe: error: ../grub-core/kern/fs.c:120:unknown filesystem.``
This is fixed by ``/etc/grub.d/09_fix_root_on_zfs``.
#. For both legacy and EFI booting: mirror ESP content::
ESP_MIRROR=$(mktemp -d)
unalias -a
cp -r /boot/efi/EFI $ESP_MIRROR
for i in /boot/efis/*; do
cp -r $ESP_MIRROR/EFI $i
done
#. Automatically regenerate GRUB menu on kernel update::
tee /etc/dnf/plugins/post-transaction-actions.d/00-update-grub-menu-for-kernel.action <<EOF >/dev/null
# kernel-core package contains vmlinuz and initramfs
# change package name if non-standard kernel is used
kernel-core:in:/usr/local/sbin/update-grub-menu.sh
kernel-core:out:/usr/local/sbin/update-grub-menu.sh
EOF
tee /usr/local/sbin/update-grub-menu.sh <<-'EOF' >/dev/null
#!/bin/sh
export PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export ZPOOL_VDEV_NAME_PATH=YES
source /etc/os-release
grub2-mkconfig -o /boot/efi/EFI/${ID}/grub.cfg
cp /boot/efi/EFI/${ID}/grub.cfg /boot/efi/EFI/${ID}/grub2/grub.cfg
cp /boot/efi/EFI/${ID}/grub.cfg /boot/grub2/grub.cfg
ESP_MIRROR=$(mktemp -d)
cp -r /boot/efi/EFI $ESP_MIRROR
for i in /boot/efis/*; do
cp -r $ESP_MIRROR/EFI $i
done
rm -rf $ESP_MIRROR
EOF
chmod +x /usr/local/sbin/update-grub-menu.sh
#. Notes for GRUB on RHEL
To support Secure Boot, GRUB has been heavily modified by Fedora,
namely:
- ``grub2-install`` is `disabled for UEFI <https://bugzilla.redhat.com/show_bug.cgi?id=1917213>`__
- Only a static, signed version of the bootloader is copied to the EFI system partition
- This signed bootloader does not have built-in support for either ZFS or LUKS containers
- This signed bootloader only loads configuration from ``/boot/efi/EFI/fedora/grub.cfg``
Unrelated to Secure Boot, GRUB has also been modified to provide optional
support for the `systemd Boot Loader Specification (BLS) <https://systemd.io/BOOT_LOADER_SPECIFICATION/>`__.
Currently ``blscfg.mod`` is incompatible with root on ZFS.
As BLS is disabled, you will need to regenerate the GRUB menu after each kernel upgrade;
otherwise the new kernel will not be recognized and the system will boot the old kernel
on reboot.
Also see `Fedora docs for GRUB
<https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/>`__.
Finish Installation
~~~~~~~~~~~~~~~~~~~~
#. Exit chroot::
exit
#. Take a snapshot of the clean installation for future use::
zfs snapshot -r rpool_$INST_UUID/$INST_ID@install
zfs snapshot -r bpool_$INST_UUID/$INST_ID@install
#. Unmount EFI system partition::
umount /mnt/boot/efi
umount /mnt/boot/efis/*
#. Export pools::
zpool export bpool_$INST_UUID
zpool export rpool_$INST_UUID
#. Reboot::
reboot
Post installation
~~~~~~~~~~~~~~~~~
#. If you have other data pools, generate a list of datasets for `zfs-mount-generator
<https://manpages.ubuntu.com/manpages/focal/man8/zfs-mount-generator.8.html>`__ to mount them at boot::
DATA_POOL='tank0 tank1'
# tab-separated zfs properties
# see /etc/zfs/zed.d/history_event-zfs-list-cacher.sh
export \
PROPS="name,mountpoint,canmount,atime,relatime,devices,exec\
,readonly,setuid,nbmand,encroot,keylocation"
for i in $DATA_POOL; do
zfs list -H -t filesystem -o $PROPS -r $i > /etc/zfs/zfs-list.cache/$i
done
#. After reboot, consider adding a normal user::
# with root permissions
sudo -i
# store user name in a variable
myUser=UserName
# rename default `User` to new user name
zfs rename $(df --output=source /home | tail -n +2)/User $(df --output=source /home | tail -n +2)/${myUser}
# update entry in fstab
sed -i "s|/home/User|/home/${myUser}|g" /etc/fstab
# add user
useradd --no-create-home --user-group --home-dir /home/${myUser} --comment 'My Name' ${myUser}
# delegate snapshot and destroy permissions of the home dataset to
# new user
zfs allow -u ${myUser} mount,snapshot,destroy $(df --output=source /home | tail -n +2)/${myUser}
# fix permissions
chown --recursive ${myUser}:${myUser} /home/${myUser}
chmod 700 /home/${myUser}
# fix selinux context
restorecon /home/${myUser}
# set new password for user
passwd ${myUser}
Set up a cron job to snapshot the user home every day::
dnf install cronie
systemctl enable --now crond
crontab -eu ${myUser}
#@daily zfs snap $(df --output=source /home/${myUser} | tail -n +2)@$(dd if=/dev/urandom of=/dev/stdout bs=1 count=100 2>/dev/null |tr -dc 'a-z0-9' | cut -c-6)
zfs list -t snapshot -S creation $(df --output=source /home/${myUser} | tail -n +2)
Install package groups::
dnf group list --hidden -v # query package groups
dnf group install 'Virtualization Host'

View File

@@ -1,211 +0,0 @@
.. highlight:: sh
Recovery
======================
.. contents:: Table of Contents
:local:
GRUB Tips
-------------
Boot from GRUB rescue
~~~~~~~~~~~~~~~~~~~~~~~
If the bootloader file is damaged, it is still possible
to boot the computer with a GRUB rescue image.
This section also applies if you are at the
``grub rescue>`` prompt.
#. On another computer, generate a rescue image with::
pacman -S --needed mtools libisoburn grub
grub-install
grub-mkrescue -o grub-rescue.img
dd if=grub-rescue.img of=/dev/your-usb-stick
Boot the computer from the rescue media.
Both legacy and EFI modes are supported.
Or `download generated GRUB rescue image <https://gitlab.com/m_zhou/bieaz/uploads/e0847a8675cda4317ea7f48abb1d9f10/grub-rescue-2.06.img.7z>`__.
#. List available disks with ``ls`` command::
grub> ls (hd # press tab
Possible devices are:
hd0 hd1 hd2 hd3
#. List partitions by pressing the tab key:
.. code-block:: text
grub> ls (hd0 # press tab
Possible partitions are:
Device hd0: No known filesystem detected - Sector size 512B - Total size 20971520KiB
Partition hd0,gpt1: Filesystem type fat - Label `EFI', UUID 0DF5-3A76 - Partition start at 1024KiB - Total size 1048576KiB
Partition hd0,gpt2: No known filesystem detected - Partition start at 1049600KiB - Total size 4194304KiB
- If boot pool is encrypted:
Unlock it with ``cryptomount``::
grub> insmod luks
grub> cryptomount hd0,gpt2
Attempting to decrypt master key...
Enter passphrase for hd0,gpt2 (af5a240e13e24483acf02600d61e0f36):
Slot 1 opened
Unlocked LUKS container is ``(crypto0)``:
.. code-block:: text
grub> ls (crypto0)
Device crypto0: Filesystem type zfs - Label `bpool_ip3tdb' - Last modification
time 2021-05-03 12:14:08 Monday, UUID f14d7bdf89fe21fb - Sector size 512B -
Total size 4192256KiB
- If boot pool is not encrypted:
.. code-block:: text
grub> ls (hd0,gpt2)
Device hd0,gpt2: Filesystem type zfs - Label `bpool_ip3tdb' - Last modification
time 2021-05-03 12:14:08 Monday, UUID f14d7bdf89fe21fb - Sector size 512B -
Total size 4192256KiB
#. List boot environments nested inside ``bpool/$INST_ID/BOOT``::
grub> ls (crypto0)/sys/BOOT
@/ default/ be0/
#. Instruct GRUB to load configuration from ``be0`` boot environment::
grub> prefix=(crypto0)/sys/BOOT/be0/@/grub
grub> configfile $prefix/grub.cfg
#. The GRUB menu should now appear.
#. After entering the system, `reinstall GRUB <#grub-installation>`__.
Switch GRUB prefix when disk fails
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are using a LUKS-encrypted boot pool with multiple disks
and the primary disk fails, GRUB will fail to load its configuration.
If there is still enough redundancy for the boot pool, try fixing
GRUB with the following method:
#. Ensure ``Slot 1 opened`` message
is shown
.. code-block:: text
Welcome to GRUB!
error: no such cryptodisk found.
Attempting to decrypt master key...
Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a):
Slot 1 opened
error: disk `cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf' not found.
Entering rescue mode...
grub rescue>
If ``error: access denied.`` is shown,
try re-entering the password with::
grub rescue> cryptomount hd0,gpt2
#. Check prefix::
grub rescue> set
# prefix=(cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf)/sys/BOOT/be0@/grub
# root=cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf
#. Set correct ``prefix`` and ``root`` by replacing
``cryptouuid/UUID`` with ``crypto0``::
grub rescue> prefix=(crypto0)/sys/BOOT/default@/grub
grub rescue> root=crypto0
#. Boot GRUB::
grub rescue> insmod normal
grub rescue> normal
GRUB should then boot normally.
#. After entering the system, edit ``/etc/fstab`` to promote
one backup to ``/boot/efi``.
#. Make the change to ``prefix`` and ``root``
permanent by `reinstalling GRUB <#grub-installation>`__.
Access system in chroot
-----------------------
#. Go through `preparation <1-preparation.html>`__.
#. Import and unlock root and boot pool::
zpool import -NR /mnt rpool_$INST_UUID
zpool import -NR /mnt bpool_$INST_UUID
If using a password::
zfs load-key rpool_$INST_UUID/$INST_ID
If using a keyfile::
zfs load-key -L file:///path/to/keyfile rpool_$INST_UUID/$INST_ID
#. Find the current boot environment::
zfs list
BE=default
#. Mount root filesystem::
zfs mount rpool_$INST_UUID/$INST_ID/ROOT/$BE
#. chroot into the system::
arch-chroot /mnt /bin/bash --login
zfs mount -a
mount -a
#. Finish rescue. See `finish installation <#finish-installation>`__.
Back up and migrate an existing installation
--------------------------------------------
With the help of `zfs send
<https://openzfs.github.io/openzfs-docs/man/8/zfs-send.8.html>`__
it is relatively easy to perform a system backup and migration.
#. Create recursive snapshots of the root and boot file systems::
zfs snapshot -r rpool/$INST_ID@backup
zfs snapshot -r bpool/$INST_ID@backup
#. Save the snapshots to files, or pipe to SSH as shown below (``-R`` replicates the dataset tree with properties)::
zfs send -R rpool/$INST_ID@backup > /backup/$INST_ID-rpool
zfs send -R bpool/$INST_ID@backup > /backup/$INST_ID-bpool
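To pipe over SSH instead of saving to files, a sketch (the host and
target dataset are hypothetical; add ``-w`` to send encrypted
datasets raw)::
zfs send -R rpool/$INST_ID@backup | ssh root@backup-host zfs recv -d backuppool/$INST_ID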
#. Re-create partitions and the root/boot
pools on the target system.
#. Restore backup::
zfs recv rpool_new/$INST_ID < /backup/$INST_ID-rpool
zfs recv bpool_new/$INST_ID < /backup/$INST_ID-bpool
#. Chroot and reinstall bootloader.
#. Update pool name in ``/etc/fstab``, ``/boot/grub/grub.cfg``
and ``/etc/zfs/zfs-list.cache/*``.
#. Update device names, etc., in ``/etc/fstab`` and ``/etc/crypttab``.

View File

@@ -0,0 +1,11 @@
RHEL Root on ZFS
=======================================
`Start here <RHEL-based%20distro%20Root%20on%20ZFS/1-preparation.html>`__.
Contents
--------
.. toctree::
:maxdepth: 2
:glob:
RHEL-based distro Root on ZFS/*

View File

@@ -0,0 +1,78 @@
.. highlight:: sh
Preparation
======================
.. contents:: Table of Contents
:local:
#. Disable Secure Boot. ZFS modules cannot be loaded if Secure Boot is enabled.
#. Download a variant of `AlmaLinux Minimal Live ISO
<https://repo.almalinux.org/almalinux/9/live/x86_64/>`__ and boot from it.
#. Connect to the Internet.
#. Set a root password or populate ``/root/.ssh/authorized_keys``.
#. Start SSH server::
echo PermitRootLogin yes >> /etc/ssh/sshd_config
systemctl restart sshd
#. Connect from another computer::
ssh root@192.168.1.19
#. Target disk
List available disks with::
ls /dev/disk/by-id/*
If using virtio as disk bus, use ``/dev/disk/by-path/*``.
Declare disk array::
DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
For single disk installation, use::
DISK='/dev/disk/by-id/disk1'
#. Set partition sizes:
Set swap size. It's `recommended <https://chrisdown.name/2018/01/02/in-defence-of-swap.html>`__
to set up a swap partition. If you intend to use hibernation,
the minimum should be no less than RAM size. Skip if swap is not needed::
INST_PARTSIZE_SWAP=8
Set root pool size; all remaining disk space is used if not set::
INST_PARTSIZE_RPOOL=
#. Temporarily set SELinux to permissive in live environment::
setenforce 0
SELinux will be enabled on the installed system.
#. Add ZFS repo::
dnf install -y https://zfsonlinux.org/epel/zfs-release-el-2-1.noarch.rpm
#. Check available repos::
dnf repolist --all
#. Install ZFS packages::
dnf config-manager --disable zfs
dnf config-manager --enable zfs-kmod
dnf install -y zfs
# if gpg import fails, add --nogpgcheck
#. Load kernel modules::
modprobe zfs
#. Install partition tool::
dnf install -y gdisk dosfstools

View File

@@ -0,0 +1,146 @@
.. highlight:: sh
System Installation
======================
.. contents:: Table of Contents
:local:
#. Partition the disks::
for i in ${DISK}; do
sgdisk --zap-all $i
sgdisk -n1:1M:+1G -t1:EF00 $i
sgdisk -n2:0:+4G -t2:BE00 $i
test -z $INST_PARTSIZE_SWAP || sgdisk -n4:0:+${INST_PARTSIZE_SWAP}G -t4:8200 $i
if test -z $INST_PARTSIZE_RPOOL; then
sgdisk -n3:0:0 -t3:BF00 $i
else
sgdisk -n3:0:+${INST_PARTSIZE_RPOOL}G -t3:BF00 $i
fi
sgdisk -a1 -n5:24K:+1000K -t5:EF02 $i
done
#. Create boot pool::
zpool create \
-o compatibility=grub2 \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R /mnt \
bpool \
mirror \
$(for i in ${DISK}; do
printf "$i-part2 ";
done)
If not using a multi-disk setup, remove ``mirror``.
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features.
Features enabled with ``-o compatibility=grub2`` can be seen
`here <https://github.com/openzfs/zfs/blob/master/cmd/zpool/compatibility.d/grub2>`__.
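The property can be confirmed on the created pool with::
zpool get compatibility bpool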
#. Create root pool::
zpool create \
-o ashift=12 \
-o autotrim=on \
-R /mnt \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
rpool \
mirror \
$(for i in ${DISK}; do
printf "$i-part3 ";
done)
If not using a multi-disk setup, remove ``mirror``.
#. This section implements the dataset layout as described in the `overview <1-preparation.html>`__.
Create root system container:
- Unencrypted::
zfs create \
-o canmount=off \
-o mountpoint=none \
rpool/redhat
- Encrypted:
Pick a strong password. Once it is compromised, changing the password will not keep your
data safe. See ``zfs-change-key(8)`` for more info::
zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool/redhat
Create system datasets::
zfs create -o canmount=on -o mountpoint=/ rpool/redhat/root
zfs create -o canmount=on -o mountpoint=/home rpool/redhat/home
zfs create -o canmount=off -o mountpoint=/var rpool/redhat/var
zfs create -o canmount=on rpool/redhat/var/lib
zfs create -o canmount=on rpool/redhat/var/log
Create boot dataset::
zfs create -o canmount=on -o mountpoint=/boot bpool/redhat
#. Format and mount ESP::
for i in ${DISK}; do
mkfs.vfat -n EFI ${i}-part1
mkdir -p /mnt/boot/efis/${i##*/}-part1
mount -t vfat ${i}-part1 /mnt/boot/efis/${i##*/}-part1
done
mkdir -p /mnt/boot/efi
mount -t vfat $(echo $DISK | cut -f1 -d\ )-part1 /mnt/boot/efi
#. Install packages::
dnf --installroot=/mnt --releasever=$(source /etc/os-release ; echo $VERSION_ID) -y install \
@core grub2-efi-x64 grub2-pc-modules grub2-efi-x64-modules shim-x64 efibootmgr kernel
dnf --installroot=/mnt --releasever=$(source /etc/os-release ; echo $VERSION_ID) -y install \
https://zfsonlinux.org/epel/zfs-release-el-2-1.noarch.rpm
dnf config-manager --installroot=/mnt --disable zfs
dnf config-manager --installroot=/mnt --enable zfs-kmod
dnf --installroot=/mnt --releasever=$(source /etc/os-release ; echo $VERSION_ID) \
--nogpgcheck -y install zfs zfs-dracut

View File

@@ -0,0 +1,66 @@
.. highlight:: sh
System Configuration
======================
.. contents:: Table of Contents
:local:
#. Generate fstab::
mkdir -p /mnt/etc/
for i in ${DISK}; do
echo UUID=$(blkid -s UUID -o value ${i}-part1) /boot/efis/${i##*/}-part1 vfat \
umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab
done
echo $(echo $DISK | cut -f1 -d\ )-part1 /boot/efi vfat \
noauto,umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab
#. Configure dracut::
echo 'add_dracutmodules+=" zfs "' > /mnt/etc/dracut.conf.d/zfs.conf
#. Force load mpt3sas module if used::
if grep mpt3sas /proc/modules; then
echo 'forced_drivers+=" mpt3sas "' >> /mnt/etc/dracut.conf.d/zfs.conf
fi
#. Set locale, keymap, timezone, hostname and root password::
rm -f /mnt/etc/localtime
systemd-firstboot --root=/mnt --prompt --root-password=PASSWORD --force
#. Generate host id::
zgenhostid -f -o /mnt/etc/hostid
#. Install locale packages; for example, for the English locale::
dnf --installroot=/mnt install -y glibc-minimal-langpack glibc-langpack-en
#. Enable ZFS services::
systemctl enable zfs-import-scan.service zfs-mount zfs-import.target zfs-zed zfs.target --root=/mnt
#. By default the SSH server is enabled, allowing root login by password.
Disable the SSH server and enable the firewall::
systemctl disable sshd --root=/mnt
systemctl enable firewalld --root=/mnt
#. Chroot::
m='/dev /proc /sys'
for i in $m; do mount --rbind $i /mnt/$i; done
history -w /mnt/home/sys-install-pre-chroot.txt
chroot /mnt /usr/bin/env DISK=$DISK bash --login
#. For SELinux, relabel filesystem on reboot::
fixfiles -F onboot
#. Set root password::
passwd

View File

@@ -0,0 +1,129 @@
.. highlight:: sh
Bootloader
======================
.. contents:: Table of Contents
:local:
Apply workarounds
~~~~~~~~~~~~~~~~~~~~
Currently GRUB has multiple compatibility problems with ZFS,
especially with regard to newer ZFS features.
Workarounds must be applied.
#. grub2-probe fails to get canonical path
When persistent device names ``/dev/disk/by-id/*`` are used
with ZFS, GRUB will fail to resolve the path of the boot pool
device. Error::
# /usr/bin/grub2-probe: error: failed to get canonical path of `/dev/virtio-pci-0000:06:00.0-part3'.
Solution::
echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile.d/zpool_vdev_name_path.sh
source /etc/profile.d/zpool_vdev_name_path.sh
#. Pool name missing
See `this bug report <https://savannah.gnu.org/bugs/?59614>`__.
Root pool name is missing from ``root=ZFS=rpool_$INST_UUID/ROOT/default``
kernel cmdline in generated ``grub.cfg`` file.
A workaround is to replace the pool name detection with ``zdb``
command::
sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" /etc/grub.d/10_linux
Caution: this fix must be applied after every GRUB update and before generating the menu.
Install GRUB
~~~~~~~~~~~~~~~~~~~~
#. If using virtio disk, add driver to initrd::
echo 'filesystems+=" virtio_blk "' >> /etc/dracut.conf.d/fs.conf
#. Create empty cache file and generate initrd::
rm -f /etc/zfs/zpool.cache
touch /etc/zfs/zpool.cache
chmod a-w /etc/zfs/zpool.cache
chattr +i /etc/zfs/zpool.cache
for directory in /lib/modules/*; do
kernel_version=$(basename $directory)
dracut --force --kver $kernel_version
done
#. Disable BLS (boot loader specification) support, which is currently incompatible with root on ZFS::
echo 'GRUB_ENABLE_BLSCFG=false' >> /etc/default/grub
#. If using legacy booting, install GRUB to every disk::
for i in ${DISK}; do
grub2-install --target=i386-pc $i
done
#. If using EFI::
for i in ${DISK}; do
efibootmgr -cgp 1 -l "\EFI\almalinux\shimx64.efi" \
-L "almalinux-${i##*/}" -d ${i}
done
cp -r /usr/lib/grub/x86_64-efi/ /boot/efi/EFI/almalinux/
#. Generate GRUB menu::
grub2-mkconfig -o /boot/grub2/grub.cfg
cp /boot/grub2/grub.cfg /boot/efi/EFI/almalinux/
#. For both legacy and EFI booting: mirror ESP content::
ESP_MIRROR=$(mktemp -d)
unalias -a
cp -r /boot/efi/EFI $ESP_MIRROR
for i in /boot/efis/*; do
cp -r $ESP_MIRROR/EFI $i
done
rm -rf $ESP_MIRROR
#. Notes for GRUB on RHEL
As BLS is disabled, you will need to regenerate the GRUB menu after each kernel upgrade;
otherwise the new kernel will not be recognized and the system will boot the old kernel
on reboot.
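That is, after each kernel upgrade, rerun the menu generation from
the step above::
grub2-mkconfig -o /boot/grub2/grub.cfg
cp /boot/grub2/grub.cfg /boot/efi/EFI/almalinux/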
Finish Installation
~~~~~~~~~~~~~~~~~~~~
#. Exit chroot::
exit
#. Export pools::
umount -Rl /mnt
zpool export -a
#. Reboot::
reboot
#. On first reboot, the boot process will fail with messages such
as "You are in Emergency Mode... Press Ctrl-D to continue".
Wait for the computer to reboot automatically; the problem will then be resolved.
Post installation
~~~~~~~~~~~~~~~~~
#. Install package groups::
dnf group list --hidden -v # query package groups
dnf group install @gnome-desktop
#. Add a new user and configure swap; a sketch follows.
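A minimal sketch (the user name is an example; the swap entries
mirror those in the RHEL 8 guide above and assume ``DISK`` is set as
during installation)::
myUser=alice
useradd --create-home --user-group ${myUser}
passwd ${myUser}
for i in ${DISK}; do
echo ${i##*/}-part4-swap ${i}-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256,discard >> /etc/crypttab
echo /dev/mapper/${i##*/}-part4-swap none swap x-systemd.requires=cryptsetup.target,defaults 0 0 >> /etc/fstab
done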

View File

@@ -146,15 +146,15 @@ And for RHEL/CentOS 8 and newer::
Use *zfs-testing* for DKMS packages and *zfs-testing-kmod*
kABI-tracking kmod packages.
RHEL 8-based distro Root on ZFS
RHEL-based distro Root on ZFS
-------------------------------
`Start here <RHEL%208-based%20distro%20Root%20on%20ZFS/0-overview.html>`__.
`Start here <RHEL-based%20distro%20Root%20on%20ZFS/1-preparation.html>`__.
.. toctree::
:maxdepth: 1
:glob:
RHEL 8-based distro Root on ZFS/*
RHEL-based distro Root on ZFS/*
.. _kABI-tracking kmod: https://elrepoproject.blogspot.com/2016/02/kabi-tracking-kmod-packages.html
.. _DKMS: https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support