From 3d4931680de7eeea05fd0717f64c76577f4cfccd Mon Sep 17 00:00:00 2001
From: Maurice Zhou
Date: Fri, 12 Mar 2021 15:59:49 +0800
Subject: [PATCH] Arch Linux: Use bash array for multi-disk; Overview section
 for ZFS package; drop Artix Linux

Arch Linux: index.rst punctuation
Arch Linux: Boot pool encryption key must not be in child dataset
Arch Linux: delete backup after restoration
Remove trailing blanks
Move topology spec above pool creation
Arch Linux: Reintroduce INST_UUID
Arch Linux: secure permissions for key file

Signed-off-by: Maurice Zhou
---
 .../Arch Linux/Arch Linux Root on ZFS.rst     | 1244 +++++++----
 .../Arch Linux/Artix Linux Root on ZFS.rst    | 1075 --------------
 docs/Getting Started/Arch Linux/index.rst     |   97 +-
 3 files changed, 541 insertions(+), 1875 deletions(-)
 delete mode 100644 docs/Getting Started/Arch Linux/Artix Linux Root on ZFS.rst

diff --git a/docs/Getting Started/Arch Linux/Arch Linux Root on ZFS.rst b/docs/Getting Started/Arch Linux/Arch Linux Root on ZFS.rst
index aaa131b..d2cef61 100644
--- a/docs/Getting Started/Arch Linux/Arch Linux Root on ZFS.rst
+++ b/docs/Getting Started/Arch Linux/Arch Linux Root on ZFS.rst
@@ -73,24 +73,20 @@
 without the passphrase being entered at the console. Performance
 is good. As the encryption happens in ZFS, even if multiple disks
 (mirror or raidz topologies) are used, the data only has to be
 encrypted once.
 
-Boot pool can be optionally encrypted with LUKS, see `here <#encrypt-boot-pool-with-luks>`__.
+Boot pool can be optionally encrypted with LUKS; an encrypted boot pool protects the initrd from tampering.
 
-Preinstallation
+Preparations
 ----------------
 
-Download Arch Linux live image
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#. Choose a mirror
-
-   `Mirrorlist `__
+#. Choose a mirror from `mirrorlist `__.
 
 #. Download March 2021 build and signature.
    `File a new issue and mention @ne9z `__
   if it's no longer available.
 
-   - `ISO (US mirror) `__
-   - `Signature `__
+   - `ISO (US mirror) `__
+   - `Signature `__
 
 #. 
Check live image against signature:: @@ -111,9 +107,6 @@ Download Arch Linux live image #. Boot the target computer from the prepared live medium. -Prepare the Live Environment -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - #. Connect to the internet. If the target computer aquires IP address with DHCP, no further steps need to be taken. @@ -123,19 +116,19 @@ Prepare the Live Environment #. Start SSH server. - - Interactively set root password with:: + Interactively set root password with:: passwd - - Start SSH server:: + Start SSH server:: systemctl start sshd - - Find the IP address of the target computer:: + Find the IP address of the target computer:: ip -4 address show scope global - - On another computer, connect to the target computer with:: + On another computer, connect to the target computer with:: ssh root@192.168.1.10 @@ -155,23 +148,23 @@ Prepare the Live Environment [archzfs] Include = /etc/pacman.d/mirrorlist-archzfs EOF - + curl -L https://git.io/JtQp4 > /etc/pacman.d/mirrorlist-archzfs #. Select mirror: - - Kill ``reflector``:: + Kill ``reflector``:: killall -9 reflector - - Edit the following files:: + Edit the following files:: nano /etc/pacman.d/mirrorlist - Uncomment and move mirrors to - the beginning of the file. + Uncomment and move mirrors to + the beginning of the file. - - Update database:: + Update database:: pacman -Sy @@ -201,18 +194,13 @@ Prepare the Live Environment modprobe zfs -Installation Variables -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In this part, we will set some variables to configure the system. - #. Timezone - List the available timezones with:: + List available timezones with:: ls /usr/share/zoneinfo/ - Store the target timezone in a variable:: + Store target timezone in a variable:: INST_TZ=/usr/share/zoneinfo/Asia/Irkutsk @@ -236,84 +224,71 @@ In this part, we will set some variables to configure the system. INST_LINVAR='linux' +#. Unique pool suffix. 
ZFS expects pool names to be + unique, therefore it's recommended to create + pools with a unique suffix:: + + INST_UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6) + #. Target disk - List the available disks with:: + List available disks with:: - ls -d /dev/disk/by-id/* | grep -v part + ls -1d /dev/disk/by-id/* | grep -v part If the disk is not in the command output, use ``/dev/disk/by-path``. - Store the target disk in a variable:: + Declare disk array:: - DISK=/dev/disk/by-id/nvme-foo_NVMe_bar_512GB - - For multi-disk setups, repeat the formatting and - partitioning commands for other disks. + DISK=(/dev/disk/by-id/disk1 /dev/disk/by-id/disk2) System Installation ------------------- -Format and Partition the Target Disks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +#. Partition the disks:: -#. Clear the partition table:: + for i in ${DISK[@]}; do - sgdisk --zap-all $DISK + # clear partition table + sgdisk --zap-all $i -#. Create EFI system partition (for use now or in the future):: + # EFI system partition + sgdisk -n1:1M:+1G -t1:EF00 $i - sgdisk -n1:1M:+1G -t1:EF00 $DISK + # Boot pool partition + sgdisk -n2:0:+4G -t2:BE00 $i -#. Create BIOS boot partition:: + # with swap + sgdisk -n3:0:-8G -t3:BF00 $i + sgdisk -n4:0:0 -t4:8308 $i - sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK + # without swap (not recommended) + #sgdisk -n3:0:0 -t3:BF00 $i -#. Create boot pool partition:: + # with BIOS booting + sgdisk -a1 -n5:24K:+1000K -t5:EF02 $i - sgdisk -n2:0:+4G -t2:BE00 $DISK + done -#. Create root pool partition: + It's `recommended `__ + to create a swap partition. - - If you don't need a separate swap partition:: + Adjust the swap partition size to your needs. + If hibernation is needed, + swap size should be same or larger than RAM. + Check RAM size with ``free -h``. 
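The swap sizing rule above can be sketched in shell. This is a hedged illustration, not part of the patch: the extra 1 GiB of headroom is an assumption of this sketch, and it relies on Linux's ``/proc/meminfo`` being readable, as it is in the Arch live environment.

```shell
# Round installed RAM up to whole GiB, then add 1 GiB of headroom so a
# hibernation image always fits; 1048576 KiB = 1 GiB.
ram_kib=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
swap_gib=$(( (ram_kib + 1048575) / 1048576 + 1 ))
echo "suggested swap size: ${swap_gib}G"
```

On a machine with roughly 16 GiB of RAM this suggests ``17G``, satisfying the "same or larger than RAM" rule; the value can stand in for the ``8G`` in the ``sgdisk -n3:0:-8G`` invocation above.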
- sgdisk -n3:0:0 -t3:BF00 $DISK - - - If a separate swap partition is needed:: - - sgdisk -n3:0:-8G -t3:BF00 $DISK - sgdisk -n4:0:0 -t4:8308 $DISK - - Adjust the swap partition size to your needs. - If `hibernation <#hibernation>`__ is needed, - swap size should be same or larger than RAM. - Check RAM size with ``free -h``. - -#. Repeat the above steps for other target disks, if any. - -Create Root and Boot Pools -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -#. For multi-disk setup - - If you want to create a multi-disk pool, replace ``${DISK}-partX`` - with the topology and the disk path. - - For example, change:: +#. When creating pools, for single disk installation, omit topology specification + ``mirror``:: zpool create \ - ... \ - ${DISK}-part2 + ... + rpool_$INST_UUID \ + # mirror \ + ... - to:: - - zpool create \ - ... \ - mirror \ - /dev/disk/by-id/ata-disk1-part2 \ - /dev/disk/by-id/ata-disk2-part2 - - if needed, replace ``mirror`` with ``raidz1``, ``raidz2`` or ``raidz3``. +#. When creating pools, for multi-disk installation, you can also use other topologies + such as ``raidz1``, ``raidz2`` and ``raidz3``. #. Create boot pool:: @@ -340,8 +315,11 @@ Create Root and Boot Pools -O xattr=sa \ -O mountpoint=/boot \ -R /mnt \ - bpool \ - ${DISK}-part2 + bpool_$INST_UUID \ + mirror \ + $(for i in ${DISK[@]}; do + printf "$i-part2 "; + done) You should not need to customize any of the options for the boot pool. @@ -386,8 +364,11 @@ Create Root and Boot Pools -O relatime=on \ -O xattr=sa \ -O mountpoint=/ \ - rpool \ - ${DISK}-part3 + rpool_$INST_UUID \ + mirror \ + $(for i in ${DISK[@]}; do + printf "$i-part3 "; + done) **Notes:** @@ -443,14 +424,12 @@ Create Root and Boot Pools forget that, you are specifying the whole disk, which ZFS will then re-partition, and you will lose the bootloader partition(s). -Create Datasets -~~~~~~~~~~~~~~~~~~~~~~ #. Create system boot container:: zfs create \ -o canmount=off \ -o mountpoint=/boot \ - bpool/sys + bpool_$INST_UUID/sys #. 
Create system root container: @@ -462,129 +441,102 @@ Create Datasets zfs create \ -o canmount=off \ -o mountpoint=/ \ - rpool/sys + rpool_$INST_UUID/sys - Encrypted: - #. Choose a strong password. + Choose a strong password. + Due to the Copy-on-Write nature of ZFS, + `merely changing password is not enough `__ + once the password is compromised. + Dataset and pool must be destroyed, + disk wiped and system rebuilt from scratch to protect confidentiality. + Example: generate passphrase with `xkcdpass `_:: - Due to the Copy-on-Write nature of ZFS, - `merely changing password is not enough `__ - once the password is compromised. - Dataset and pool must be destroyed, - disk wiped and system rebuilt from scratch to protect confidentiality. + pacman -S --noconfirm xkcdpass + xkcdpass -Vn 10 -w /usr/lib/python*/site-packages/xkcdpass/static/eff-long - Example: generate passphrase with `xkcdpass `_:: + Root pool password can be supplied with SSH at boot time if boot pool is not encrypted, + see optional configurations section. - pacman -S --noconfirm xkcdpass - xkcdpass -Vn 10 -w /usr/lib/python*/site-packages/xkcdpass/static/eff-long + Encrypt boot pool. + For mobile devices, it is strongly recommended to + encrypt boot pool and enable Secure Boot, as described in + the optional configuration section. This will prevent attacks to + initrd. + However, GRUB as of 2.04 requires interactively entering password, + you must phsically type in the passwords at boot time, + or else the computer will not boot. - Root pool password can be supplied with SSH at boot time if boot pool is not encrypted, - see `Supply password with SSH <#supply-password-with-ssh>`__. + Create dataset:: - #. Encrypt boot pool. - - For mobile devices, it is strongly recommended to - `encrypt boot pool and enable Secure Boot <#encrypt-boot-pool-with-luks>`__ - immediately after reboot to prevent attacks to initramfs. 
To quote - `cryptsetup faq `__: - - An attacker that wants to compromise your system will just - compromise the initrd or the kernel itself. - - However, GRUB as of 2.04 requires interactively entering password, - you must phsically type in the passwords at boot time, - or else the computer will not boot. - - #. Create dataset:: - - zfs create \ - -o canmount=off \ - -o mountpoint=/ \ - -o encryption=on \ - -o keylocation=prompt \ - -o keyformat=passphrase \ - rpool/sys + zfs create \ + -o canmount=off \ + -o mountpoint=/ \ + -o encryption=on \ + -o keylocation=prompt \ + -o keyformat=passphrase \ + rpool_$INST_UUID/sys #. Create container datasets:: - zfs create -o canmount=off -o mountpoint=none bpool/sys/BOOT - zfs create -o canmount=off -o mountpoint=none rpool/sys/ROOT - zfs create -o canmount=off -o mountpoint=none rpool/sys/DATA + zfs create -o canmount=off -o mountpoint=none bpool_$INST_UUID/sys/BOOT + zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/sys/ROOT + zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/sys/DATA #. Create root and boot filesystem datasets:: - zfs create -o mountpoint=legacy -o canmount=noauto bpool/sys/BOOT/default - zfs create -o mountpoint=/ -o canmount=noauto rpool/sys/ROOT/default + zfs create -o mountpoint=legacy -o canmount=noauto bpool_$INST_UUID/sys/BOOT/default + zfs create -o mountpoint=/ -o canmount=off rpool_$INST_UUID/sys/DATA/default + zfs create -o mountpoint=/ -o canmount=noauto rpool_$INST_UUID/sys/ROOT/default #. Mount root and boot filesystem datasets:: - zfs mount rpool/sys/ROOT/default + zfs mount rpool_$INST_UUID/sys/ROOT/default mkdir /mnt/boot - mount -t zfs bpool/sys/BOOT/default /mnt/boot + mount -t zfs bpool_$INST_UUID/sys/BOOT/default /mnt/boot #. 
Create datasets to separate user data from root filesystem:: - zfs create -o mountpoint=/ -o canmount=off rpool/sys/DATA/default - + # create containers for i in {usr,var,var/lib}; do - zfs create -o canmount=off rpool/sys/DATA/default/$i + zfs create -o canmount=off rpool_$INST_UUID/sys/DATA/default/$i done for i in {home,root,srv,usr/local,var/log,var/spool,var/tmp}; do - zfs create -o canmount=on rpool/sys/DATA/default/$i + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/$i done chmod 750 /mnt/root chmod 1777 /mnt/var/tmp -#. Optional user data datasets: +#. Create optional user data datasets to omit data from rollback:: - If this system will have games installed:: + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/var/games + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/var/www + # for GNOME + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/var/lib/AccountsService + # for Docker + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/var/lib/docker + # for NFS + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/var/lib/nfs + # for LXC + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/var/lib/lxc + # for LibVirt + zfs create -o canmount=on rpool_$INST_UUID/sys/DATA/default/var/lib/libvirt - zfs create -o canmount=on rpool/sys/DATA/default/var/games +#. 
Format and mount EFI system partitions:: - If you use /var/www on this system:: + for i in ${DISK[@]}; do + mkfs.vfat -n EFI ${i}-part1 + mkdir -p /mnt/boot/efis/${i##*/} + mount -t vfat ${i}-part1 /mnt/boot/efis/${i##*/} + done - zfs create -o canmount=on rpool/sys/DATA/default/var/www - - If this system will use GNOME:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/AccountsService - - If this system will use Docker (which manages its own datasets & - snapshots):: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/docker - - If this system will use NFS (locking):: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/nfs - - If this system will use Linux Containers:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/lxc - - If this system will use libvirt:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/libvirt - -Format and Mount EFI System Partition -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -:: - - mkfs.vfat -n EFI ${DISK}-part1 - mkdir /mnt/boot/efi - mount -t vfat ${DISK}-part1 /mnt/boot/efi - -If you are using a multi-disk setup, this step will only install -bootloader to the first disk. Other disks will be handled later. - -Package Installation -~~~~~~~~~~~~~~~~~~~~ + mkdir -p /mnt/boot/efi + mount -t vfat ${DISK[0]}-part1 /mnt/boot/efi #. Install base packages:: @@ -639,20 +591,28 @@ System Configuration mkdir -p /mnt/etc/zfs/zfs-list.cache - zfs list -H -t filesystem -o $PROPS -r rpool > /mnt/etc/zfs/zfs-list.cache/rpool + zfs list -H -t filesystem -o $PROPS -r rpool_$INST_UUID > /mnt/etc/zfs/zfs-list.cache/rpool_$INST_UUID sed -Ei "s|/mnt/?|/|" /mnt/etc/zfs/zfs-list.cache/* #. 
Generate fstab:: - echo bpool/sys/BOOT/default /boot zfs rw,xattr,posixacl 0 0 >> /mnt/etc/fstab - echo UUID=$(blkid -s UUID -o value ${DISK}-part1) /boot/efi vfat \ + echo bpool_$INST_UUID/sys/BOOT/default /boot zfs rw,xattr,posixacl 0 0 >> /mnt/etc/fstab + + for i in ${DISK[@]}; do + echo UUID=$(blkid -s UUID -o value ${i}-part1) /boot/efis/${i##*/} vfat \ + x-systemd.idle-timeout=1min,x-systemd.automount,noauto,umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab + done + + echo UUID=$(blkid -s UUID -o value ${DISK[0]}-part1) /boot/efi vfat \ x-systemd.idle-timeout=1min,x-systemd.automount,noauto,umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab If a swap partition has been created:: - echo crypt-swap ${DISK}-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256,discard >> /mnt/etc/crypttab - echo /dev/mapper/crypt-swap none swap defaults 0 0 >> /mnt/etc/fstab + for i in ${DISK[@]}; do + echo swap-${i##*/} ${i}-part4 /dev/urandom swap,cipher=aes-cbc-essiv:sha256,size=256,discard >> /mnt/etc/crypttab + echo /dev/mapper/swap-${i##*/} none swap defaults 0 0 >> /mnt/etc/fstab + done #. Configure mkinitcpio:: @@ -704,7 +664,13 @@ System Configuration #. Chroot:: - arch-chroot /mnt /usr/bin/env DISK=$DISK bash --login + for i in ${DISK[@]}; do printf "$i "; done + # /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 + arch-chroot /mnt /usr/bin/env INST_UUID=$INST_UUID bash --login + + Declare target disks:: + + DISK=(/dev/disk/by-id/disk1 /dev/disk/by-id/disk2) #. Apply locales:: @@ -724,7 +690,7 @@ System Configuration [archzfs] Include = /etc/pacman.d/mirrorlist-archzfs EOF - + curl -L https://git.io/JtQp4 > /etc/pacman.d/mirrorlist-archzfs #. Enable networking:: @@ -737,133 +703,288 @@ System Configuration #. Generate zpool.cache - Pools are imported by initramfs with the information stored in ``/etc/zfs/zpool.cache``. - This cache file will be embedded in initramfs. + Pools are imported by initrd with the information stored in ``/etc/zfs/zpool.cache``. 
+ This cache file will be embedded in initrd. :: - zpool set cachefile=/etc/zfs/zpool.cache rpool - zpool set cachefile=/etc/zfs/zpool.cache bpool + zpool set cachefile=/etc/zfs/zpool.cache rpool_$INST_UUID + zpool set cachefile=/etc/zfs/zpool.cache bpool_$INST_UUID #. Set root password:: passwd -#. Generate initramfs:: +#. Generate initrd:: mkinitcpio -P -Bootloader Installation +Optional Configuration +~~~~~~~~~~~~~~~~~~~~~~~ +- Boot Environment Manager + + A boot environment is a dataset which contains a bootable + instance of an operating system. Within the context of this installation, + boot environments can be created on-the-fly to preserve root file system + states before pacman transactions. + + Install `rozb3-pac `__ + pacman hook and + `bieaz `__ + from AUR to create boot environments. + Prebuilt packages are also available. + +- Supply password with SSH + + #. Install mkinitcpio tools:: + + pacman -S mkinitcpio-netconf mkinitcpio-dropbear openssh + + #. Store public keys in ``/etc/dropbear/root_key``:: + + vi /etc/dropbear/root_key + + Note that dropbear only supports RSA keys. + + #. Edit mkinitcpio:: + + tee /etc/mkinitcpio.conf <<- 'EOF' + HOOKS=(base udev autodetect modconf block keyboard netconf dropbear zfsencryptssh zfs filesystems) + EOF + + #. Add ``ip=`` to kernel command line:: + + # example DHCP + echo 'GRUB_CMDLINE_LINUX="ip=::::::dhcp"' >> /etc/default/grub + + Details for ``ip=`` can be found at + `here `__. + + #. Generate host keys:: + + ssh-keygen -Am pem + + #. Regenerate initrd:: + + mkinitcpio -P + +- Encrypted boot pool. + + If encryption is enabled earlier, boot pool can be optionally encrypted. + + This step will reformat ``${DISK[@]}-part2`` as LUKS container and rebuild + boot pool with ``/dev/mapper/*`` as vdev. Password must + be entered interactively at GRUB and thus incompatible with + `Supply password with SSH <#supply-password-with-ssh>`__. 
+ + Encrypted boot pool protects initrd from + malicious modification and supports hibernation + and persistent encrypted swap. + + #. Create encryption keys:: + + mkdir /etc/cryptkey.d/ + chmod 700 /etc/cryptkey.d/ + dd bs=32 count=1 if=/dev/urandom of=/etc/cryptkey.d/lukskey-bpool_$INST_UUID + dd bs=32 count=1 if=/dev/urandom of=/etc/cryptkey.d/zfskey-rpool_$INST_UUID + + #. Backup boot pool:: + + zfs snapshot -r bpool_$INST_UUID/sys@pre-luks + zfs send -R bpool_$INST_UUID/sys@pre-luks > /root/bpool_$INST_UUID-pre-luks + + #. Unmount EFI partition:: + + umount /boot/efi + + for i in ${DISK[@]}; do + umount /boot/efis/${i##*/} + done + + #. Destroy boot pool:: + + zpool destroy bpool_$INST_UUID + + #. LUKS password:: + + LUKS_PWD=secure-passwd + + You will need to enter the same password for + each disk at boot. As root pool key is + protected by this password, the previous warning + about password strength still apply. + + #. Create LUKS containers:: + + for i in ${DISK[@]}; do + cryptsetup luksFormat -q --type luks1 --key-file /etc/cryptkey.d/lukskey-bpool_$INST_UUID $i-part2 + echo $LUKS_PWD | cryptsetup luksAddKey --key-file /etc/cryptkey.d/lukskey-bpool_$INST_UUID $i-part2 + cryptsetup open ${i}-part2 luks-bpool_$INST_UUID-${i##*/}-part2 --key-file /etc/cryptkey.d/lukskey-bpool_$INST_UUID + echo luks-bpool_$INST_UUID-${i##*/}-part2 ${i}-part2 /etc/cryptkey.d/lukskey-bpool_$INST_UUID discard >> /etc/crypttab + done + + #. Embed key file in initrd:: + + tee -a /etc/mkinitcpio.conf <> /etc/default/grub + + #. **Important**: Back up root dataset key ``/etc/cryptkey.d/zfskey-rpool_$INST_UUID`` + to a secure location. + + In the possible event of LUKS container corruption, + data on root set will only be available + with this key. + +Bootloader ---------------------------- +Workarounds +~~~~~~~~~~~~~~~~~~~~ Currently GRUB has multiple compatibility problems with ZFS, especially with regards to newer ZFS features. Workarounds have to be applied. 
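The pool-name workaround below pipes ``zdb -l`` output through ``grep`` and ``cut``; that extraction can be sanity-checked in isolation. A hedged sketch: the label line and the pool name ``rpool_ab12cd`` are made-up stand-ins, not output from a real pool.

```shell
# zdb -l prints a label block containing a line such as:
#     name: 'rpool_ab12cd'
# The workaround's grep/cut pair pulls the quoted pool name out of it.
label="    name: 'rpool_ab12cd'"   # hypothetical zdb -l output line
name=$(printf '%s\n' "$label" | grep -E '[[:blank:]]name' | cut -d\' -f2)
echo "$name"
```

The same pipeline, fed real ``zdb -l ${GRUB_DEVICE}`` output, is exactly what the ``sed`` command in the workaround embeds into ``/etc/grub.d/10_linux``.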
-grub-probe fails to get canonical path -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -When persistent device names ``/dev/disk/by-id/*`` are used -with ZFS, GRUB will fail to resolve the path of the boot pool -device. Error:: +#. grub-probe fails to get canonical path - # /usr/bin/grub-probe: error: failed to get canonical path of `/dev/virtio-pci-0000:06:00.0-part3'. + When persistent device names ``/dev/disk/by-id/*`` are used + with ZFS, GRUB will fail to resolve the path of the boot pool + device. Error:: -Solution:: + # /usr/bin/grub-probe: error: failed to get canonical path of `/dev/virtio-pci-0000:06:00.0-part3'. - echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile - source /etc/profile + Solution:: -Pool name missing -~~~~~~~~~~~~~~~~~ -See `this bug report `__. -Root pool name is missing from ``root=ZFS=rpool/ROOT/default`` -kernel cmdline in generated ``grub.cfg`` file. + echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile + source /etc/profile -A workaround is to replace the pool name detection with ``zdb`` -command:: +#. Pool name missing - sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" /etc/grub.d/10_linux + See `this bug report `__. + Root pool name is missing from ``root=ZFS=rpool_$INST_UUID/ROOT/default`` + kernel cmdline in generated ``grub.cfg`` file. -If you forgot to apply this workaround, or GRUB package has been upgraded, -initramfs will fail to find root filesystem on reboot, ending in kernel panic. -Don't panic! See `here <#find-root-pool-name-in-grub>`__. + A workaround is to replace the pool name detection with ``zdb`` + command:: -GRUB Installation + sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" /etc/grub.d/10_linux + + If you forgot to apply this workaround, or GRUB package has been upgraded, + initrd will fail to find root filesystem on reboot, ending in kernel panic. + +Installation ~~~~~~~~~~~~~~~~~ -- If you use EFI:: +#. 
Install GRUB: - grub-install + If you use EFI:: - This will only install boot loader to $DISK. - If you use multi-disk setup, other disks are - dealt with later. + grub-install && grub-install --removable - Some motherboards does not properly recognize GRUB - boot entry, to ensure that your computer will - boot, also install GRUB to fallback location with:: + If using multi-disk setup, mirror EFI system partitions:: - grub-install --removable + cp -r /boot/efi/EFI /tmp + umount /boot/efi + for i in ${DISK[@]}; do + cp -r /tmp/EFI /boot/efis/${i##*/} + efibootmgr -cgp 1 -l "\EFI\arch\grubx64.efi" \ + -L "arch-${i##*/}" -d ${i}-part1 + done + mount /boot/efi -- If you use BIOS booting:: + If you use BIOS booting:: - grub-install $DISK - - If this is a multi-disk setup, - install to other disks as well:: - - for i in {target_disk2,target_disk3}; do - grub-install /dev/disk/by-id/$i + for i in ${DISK[@]}; do + grub-install --target=i386-pc $i done -Generate GRUB Boot Menu -~~~~~~~~~~~~~~~~~~~~~~~ - -:: - - grub-mkconfig -o /boot/grub/grub.cfg - -Optional Configuration ----------------------- - -Supply password with SSH -~~~~~~~~~~~~~~~~~~~~~~~~ - -Optional: - -#. Install mkinitcpio tools:: - - pacman -S mkinitcpio-netconf mkinitcpio-dropbear openssh - -#. Store public keys in ``/etc/dropbear/root_key``:: - - vi /etc/dropbear/root_key - - Note that dropbear only supports RSA keys. - -#. Edit mkinitcpio:: - - tee /etc/mkinitcpio.conf <<- 'EOF' - HOOKS=(base udev autodetect modconf block keyboard netconf dropbear zfsencryptssh zfs filesystems) - EOF - -#. Add ``ip=`` to kernel command line:: - - # example DHCP - echo 'GRUB_CMDLINE_LINUX="ip=::::::dhcp"' >> /etc/default/grub - - Details for ``ip=`` can be found at - `here `__. - -#. Generate host keys:: - - ssh-keygen -Am pem - -#. Regenerate initramfs:: - - mkinitcpio -P - -#. Update GRUB menu:: +#. Generate GRUB Menu:: grub-mkconfig -o /boot/grub/grub.cfg @@ -876,533 +997,145 @@ Finish Installation #. 
Take a snapshot of the clean installation for future use:: - zfs snapshot -r rpool/sys/ROOT/default@install - zfs snapshot -r bpool/sys/BOOT/default@install + zfs snapshot -r rpool_$INST_UUID/sys@install + zfs snapshot -r bpool_$INST_UUID/sys@install #. Unmount EFI system partition:: umount /mnt/boot/efi + for i in ${DISK[@]}; do + umount /mnt/boot/efis/${i##*/} + done #. Export pools:: - zpool export bpool - zpool export rpool + zpool export bpool_$INST_UUID + zpool export rpool_$INST_UUID - They must be exported, or else they will fail to be imported on reboot. +#. Reboot:: -After Reboot ------------- -Mirror EFI System Partition -~~~~~~~~~~~~~~~~~~~~~~~~~~~ + reboot -#. Check disk name:: +GRUB Tips +------------- - ls -1 /dev/disk/by-id/ | grep -v '\-part[0-9]' +- Switch prefix -#. Mirror EFI ssystem partition:: + If GRUB has not been reinstalled after switching default boot environment, + GRUB might fail to load configuration files or modules. - for i in {target_disk2,target_disk3}; do - mkfs.vfat /dev/disk/by-id/$i-part1 - mkdir -p /boot/efis/$i - echo UUID=$(blkid -s UUID -o value /dev/disk/by-id/$i-part1) /boot/efis/$i vfat \ - x-systemd.idle-timeout=1min,x-systemd.automount,noauto,umask=0022,fmask=0022,dmask=0022 \ - 0 1 >> /etc/fstab - mount /boot/efis/$i - cp -r /boot/efi/EFI/ /boot/efis/$i - efibootmgr -cgp 1 -l "\EFI\arch\grubx64.efi" \ - -L "arch-$i" -d /dev/disk/by-id/$i-part1 - done + We need to point prefix to the new boot environment and instruct GRUB + to load configurations from there. -#. Create a service to monitor and sync EFI partitions:: + #. Press ``c`` at GRUB menu. Skip this if you are in GRUB rescue. - tee /etc/systemd/system/efis-sync.path << EOF - [Unit] - Description=Monitor changes in EFI system partition + #. Check existing prefix:: - [Path] - PathChanged=/boot/efi/EFI/arch/ - #PathChanged=/boot/efi/EFI/BOOT/ - [Install] - WantedBy=multi-user.target - EOF + grub > set + # ... 
+ # unencrypted bpool_$INST_UUID + # prefix=(hd0,gpt2)/sys/BOOT/default@/grub + # encrypted bpool_$INST_UUID + # prefix=(cryptouuid/UUID)/sys/BOOT/default@/grub - tee /etc/systemd/system/efis-sync.service << EOF - [Unit] - Description=Sync EFI system partition contents to backups + #. List available boot environments:: - [Service] - Type=oneshot - ExecStart=/usr/bin/bash -c 'for i in /boot/efis/*; do /usr/bin/cp -r /boot/efi/EFI/ $i/; done' - EOF + # unencrypted bpool_$INST_UUID + grub > ls (hd0,gpt2)/sys/BOOT + # encrypted bpool_$INST_UUID + grub > ls (crypto0)/sys/BOOT + @/ default/ pac-multm2/ - systemctl enable --now efis-sync.path + #. Set new prefix:: -#. If EFI system partition failed, promote one backup - to ``/boot/efi`` by editing ``/etc/fstab``. + # unencrypted bpool_$INST_UUID + grub > prefix=(hd0,gpt2)/sys/BOOT/pac-multm2@/grub + # encrypted bpool_$INST_UUID + grub > prefix=(crypto0)/sys/BOOT/pac-multm2@/grub -Mirror BIOS boot sector -~~~~~~~~~~~~~~~~~~~~~~~ + #. Load config from new prefix:: -This need to be manually applied when GRUB is updated. + grub > insmod normal + grub > normal -#. Check disk name:: + New entries are shown below the old ones. - ls -1 /dev/disk/by-id/ | grep -v '\-part[0-9]' +- Encrypted boot pool, if the password entered is wrong, GRUB + will drop to ``grub-rescue`` instead of retrying:: -#. Install GRUB to every disk:: + Attempting to decrypt master key... + Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a): + error: access denied. + error: no such cryptodisk found. + Entering rescue mode... + grub rescue> - for i in {target_disk2,target_disk3}; do - grub-install /dev/disk/by-id/$i - done + Try entering the password again with:: -Boot Environment Manager -~~~~~~~~~~~~~~~~~~~~~~~~ + grub rescue> cryptomount hd0,gpt2 + Attempting to decrypt master key... 
+ Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a): + Slot 1 opened + grub rescue> insmod normal + grub rescue> normal -Optional: install -`rozb3-pac `__ -pacman hook and -`bieaz `__ -from AUR to create boot environments. + GRUB should then boot normally. -Prebuilt packages are also available -in the links above. +- Encrypted boot pool, when prefix disk failed, GRUB might fail to boot. -Post installation -~~~~~~~~~~~~~~~~~ -For post installation recommendations, -see `ArchWiki `__. + .. code-block:: text -Remember to create separate datasets for individual users. + Welcome to GRUB! -Encrypt boot pool with LUKS ---------------------------- + error: no such cryptodisk found. + Attempting to decrypt master key... + Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a): + Slot 1 opened + error: disk `cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf' not found. + Entering rescue mode... + grub rescue> -If encryption is enabled earlier, boot pool can be optionally encrypted. + Ensure ``Slot 1 opened`` message + is shown. If ``error: access denied.`` is shown, + the password entered is wrong. -This step will rebuild boot pool -on a LUKS 1 container. Password must -be entered interactively at GRUB and thus incompatible with -`Supply password with SSH <#supply-password-with-ssh>`__. + Check prefix:: -Encrypted boot pool protects initramfs from -malicious modification and supports hibernation -to encrypted swap. + grub rescue > set + # prefix=(cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf)/sys/BOOT/default@/grub + # root=cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf -#. Create encryption keys:: + Replace ``cryptouuid/UUID`` with ``crypto0``:: - mkdir /etc/cryptkey.d/ - chmod 700 /etc/cryptkey.d/ - dd bs=32 count=1 if=/dev/urandom of=/etc/cryptkey.d/lukskey-bpool - dd bs=32 count=1 if=/dev/urandom of=/etc/cryptkey.d/zfskey-rpool + grub rescue> prefix=(crypto0)/sys/BOOT/default@/grub + grub rescue> root=crypto0 -#. 
Backup boot pool:: + Boot GRUB:: - zfs snapshot -r bpool/sys@pre-luks - zfs send -R bpool/sys@pre-luks > /root/bpool-pre-luks + grub rescue> insmod normal + grub rescue> normal -#. Check boot pool creation command:: - - zpool history bpool | head -n2 \ - | grep 'zpool create' > /root/bpool-cmd - - Note the vdev disks at the end of the command. - -#. Unmount EFI partition:: - - umount /boot/efi - umount /boot/efis/* # if backups exist - -#. Destroy boot pool:: - - zpool destroy bpool - -#. Enter LUKS password:: - - LUKS_PWD=rootpool - -#. Check disks:: - - cat /root/bpool-cmd - - Disks are the last arguments of ``zpool create`` command. - -#. Create LUKS containers:: - - for i in {disk1,disk2}; do - cryptsetup luksFormat -q --type luks1 /dev/disk/by-id/$i-part2 --key-file /etc/cryptkey.d/lukskey-bpool - echo $LUKS_PWD | cryptsetup luksAddKey /dev/disk/by-id/$i-part2 --key-file /etc/cryptkey.d/lukskey-bpool - cryptsetup open /dev/disk/by-id/$i-part2 luks-bpool-$i-part2 --key-file /etc/cryptkey.d/lukskey-bpool - echo luks-bpool-$i-part2 /dev/disk/by-id/$i-part2 /etc/cryptkey.d/lukskey-bpool discard >> /etc/crypttab - done - -#. Embed key file in initramfs:: - - tee -a /etc/mkinitcpio.conf <> /etc/default/grub - -#. Install GRUB. See `GRUB Installation <#grub-installation>`__. - -#. Generate GRUB menu:: - - grub-mkconfig -o /boot/grub/grub.cfg - -#. **Important**: Back up root dataset key ``/etc/cryptkey.d/zfskey-rpool`` - to a secure location. - - In the possible event of LUKS container corruption, - data on root set will only be available - with this key. - -Secure Boot -~~~~~~~~~~~ -Recommended: With Secure Boot + encrypted boot pool + encrypted root dataset, -a chain-of-trust can be established. - -#. Sign boot loader - - - Use boot loader signed by Microsoft - - Using a boot loader signed with Microsoft's key is the - simplest and most direct approach to booting with Secure Boot active; - however, it's also the most limiting approach. 
- - Use `shim-signed `__\ :sup:`AUR` - and sign ``grubx64.efi`` with machine owner key. - See `here `__. - - - Customized Secure Boot - - It's possible to replace Microsoft's keys with your own, - which enables you to gain the benefits of Secure Boot - without using Shim. This can be a - useful approach if you want the benefits of Secure Boot - but don't want to trust Microsoft or any of the others - who distribute binaries signed with Microsoft's keys. - See `here `__. - - Note that enrolling your own key is risky and - might brick UEFI firmware, such as - `this instance `__. - The original poster replaced the motherboard. - -#. Set up a service to monitor and sign ``grubx64.efi``, - as in `mirrored ESP <#mirror-efi-system-partition>`__. - -Hibernation -~~~~~~~~~~~ - -If a separate swap partition and -`encrypted boot pool <#encrypt-boot-pool-with-LUKS>`__ -have been configured, hibernation, -also known as suspend-to-disk, can be enabled. - -#. Unload swap:: - - swapoff /dev/mapper/crypt-swap - cryptsetup close crypt-swap - -#. Check partition name and remove crypttab entry:: - - grep crypt-swap /etc/crypttab | awk '{ print $2 }' - # ${DISK}-part4 - DISK=/dev/disk/by-id/nvme-foo # NO -part4 - sed -i 's|crypt-swap.*||' /etc/crypttab - - Swap will be handled by ``encrypt`` initramfs hook. - -#. Create LUKS container:: - - dd bs=32 count=1 if=/dev/urandom of=/etc/cryptkey.d/lukskey-crypt-swap - cryptsetup luksFormat -q --type luks2 ${DISK}-part4 --key-file /etc/cryptkey.d/lukskey-crypt-swap - cryptsetup luksOpen ${DISK}-part4 crypt-swap --key-file /etc/cryptkey.d/lukskey-crypt-swap --allow-discards - mkswap /dev/mapper/crypt-swap - swapon /dev/mapper/crypt-swap - -#. Configure mkinitcpio:: - - sed -i 's|FILES=(|FILES=(/etc/cryptkey.d/lukskey-crypt-swap |' /etc/mkinitcpio.conf - sed -i 's| zfs | encrypt resume zfs |' /etc/mkinitcpio.conf - -#. 
Add kernel command line:: - - echo "GRUB_CMDLINE_LINUX=\"cryptdevice=PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part4):crypt-swap:allow-discards \ - cryptkey=rootfs:/etc/cryptkey.d/lukskey-crypt-swap \ - resume=/dev/mapper/crypt-swap\"" >> /etc/default/grub - -#. Regenerate initramfs and GRUB menu:: - - mkinitcpio -P - grub-mkconfig -o /boot/grub/grub.cfg - -#. Test hibernation:: - - systemctl hibernate - - Close all program before testing, just in case. - - If hibernation works, your computer will shut down. - Power it on. Computer should return to the previous state - seamlessly. - -Enter LUKS password in GRUB rescue -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Using LUKS encryption for boot pool, -if the password entered is wrong, GRUB -will drop to ``grub-rescue``:: - - Attempting to decrypt master key... - Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a): - error: access denied. - error: no such cryptodisk found. - Entering rescue mode... - grub rescue> - -Try entering the password again with:: - - grub rescue> cryptomount hd0,gpt2 - Attempting to decrypt master key... - Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a): - Slot 1 opened - grub rescue> insmod normal - grub rescue> normal - -GRUB should then boot normally. - -Change GRUB prefix when disk fails -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Using encryption, when -disk failed, GRUB might fail to boot. - -.. code-block:: text - - Welcome to GRUB! - - error: no such cryptodisk found. - Attempting to decrypt master key... - Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a): - Slot 1 opened - error: disk `cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf' not found. - Entering rescue mode... - grub rescue> - -Ensure ``Slot 1 opened`` message -is shown. If ``error: access denied.`` is shown, -the password entered is wrong. - -#. 
Check prefix::
-
-    grub rescue > set
-    # prefix=(cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf)/sys/BOOT/default@/grub
-    # root=cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf
-
-#. Replace ``cryptouuid/UUID`` with ``crypto0``::
-
-    grub rescue> prefix=(crypto0)/sys/BOOT/default@/grub
-    grub rescue> root=crypto0
-
-#. Boot GRUB::
-
-    grub rescue> insmod normal
-    grub rescue> normal
-
-GRUB should then boot normally. After entering system,
-promote one backup to ``/boot/efi`` and reinstall GRUB with
-``grub-install``.
+   GRUB should then boot normally. After entering the system,
+   promote one backup to ``/boot/efi`` and reinstall GRUB with
+   ``grub-install``.

Recovery
--------

-Find root pool name in GRUB
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. At GRUB menu countdown, press ``c`` to enter commandline.
-
-#. Find current GRUB root::
-
-    grub > set
-    # unencrypted bpool
-    # root=hd0,gpt2
-    # encrypted bpool
-    # root=cryptouuid/UUID
-
-#. Find boot pool name::
-
-    # unencrypted bpool
-    grub > ls (hd0,gpt2)
-    # encrypted bpool
-    grub > ls (crypto0)
-    # Device hd0,gpt2: Filesystem type zfs - Label `bpool_$myUUID' ...
-
-#. Press Esc to go back to GRUB menu.
-
-#. With menu entry "Arch Linux" selected, press ``e``.
-
-#. Find ``linux`` line and add root pool name::
-
-    echo 'Loading Linux linux'
-    # broken
-    linux /sys/BOOT/default@/vmlinuz-linux root=ZFS=/sys/ROOT/default rw
-    # fixed
-    linux /sys/BOOT/default@/vmlinuz-linux root=ZFS=rpool_$myUUID/sys/ROOT/default rw
-
-#. Press Ctrl-x or F10 to boot. Apply the workaround afterwards.
-
-Load grub.cfg in GRUB command line
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. Press ``c`` at GRUB menu.
-
-#. Check prefix::
-
-    grub > set
-    # ...
-    # unencrypted bpool
-    # prefix=(hd0,gpt2)/sys/BOOT/default@/grub
-    # encrypted bpool
-    # prefix=(cryptouuid/UUID)/sys/BOOT/default@/grub
-
-#. 
List available boot environments:: - - # unencrypted bpool - grub > ls (hd0,gpt2)/sys/BOOT # press tab after 'T' - # encrypted bpool - grub > ls (crypto0)/sys/BOOT # press tab after 'T' - Possible files are: - - @/ default/ pac-multm2/ - -#. Set new prefix:: - - # unencrypted bpool - grub > prefix=(hd0,gpt2)/sys/BOOT/pac-multm2@/grub - # encrypted bpool - grub > prefix=(crypto0)/sys/BOOT/pac-multm2@/grub - -#. Load config from new prefix:: - - grub > normal - - New entries are shown below the old ones. - -Rescue in Live Environment -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -#. `Download Arch Linux live image <#download-arch-linux-live-image>`__. - -#. `Prepare the Live Environment <#prepare-the-live-environment>`__. +#. Go through `preparations <#preparations>`__. #. Import and unlock root and boot pool:: - zpool import -N -R /mnt rpool - zpool import -N -R /mnt bpool + zpool import -NR /mnt rpool_$INST_UUID + zpool import -NR /mnt bpool_$INST_UUID If using password:: - zfs load-key rpool/sys + zfs load-key rpool_$INST_UUID/sys If using keyfile:: - zfs load-key -L file:///path/to/keyfile rpool/sys + zfs load-key -L file:///path/to/keyfile rpool_$INST_UUID/sys #. Find the current boot environment:: @@ -1411,19 +1144,12 @@ Rescue in Live Environment #. Mount root filesystem:: - zfs mount rpool/sys/ROOT/$BE + zfs mount rpool_$INST_UUID/sys/ROOT/$BE #. chroot into the system:: arch-chroot /mnt /bin/bash --login - mount /boot - mount /boot/efi zfs mount -a + mount -a -#. Finish rescue:: - - exit - umount /mnt/boot/efi - zpool export bpool - zpool export rpool - reboot +#. Finish rescue. See `finish installation <#finish-installation>`__. diff --git a/docs/Getting Started/Arch Linux/Artix Linux Root on ZFS.rst b/docs/Getting Started/Arch Linux/Artix Linux Root on ZFS.rst deleted file mode 100644 index ea3c87f..0000000 --- a/docs/Getting Started/Arch Linux/Artix Linux Root on ZFS.rst +++ /dev/null @@ -1,1075 +0,0 @@ -.. 
highlight:: sh - -Artix Linux Root on ZFS -======================= - -.. contents:: Table of Contents - :local: - -Overview --------- - -`Artix Linux `__ is a systemd-free distribution based on Arch Linux. - -OpenRC, runit and s6 are supported init systems. - -Caution -~~~~~~~ - -- This guide uses entire physical disks. -- Multiple systems on one disk is not supported. -- Target disk will be wiped. Back up your data before continuing. -- The target system, virtual or physical, must have at least 4GB RAM, - or the DKMS module might fail to build. -- Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) - only works with UEFI booting. This not unique to ZFS. `GRUB does not and - will not work on 4Kn with legacy (BIOS) booting. - `__ - -Support -~~~~~~~ - -If you need help, reach out to the community using the :ref:`mailing_lists` or IRC at -`#zfsonlinux `__ on `freenode -`__. If you have a bug report or feature request -related to this HOWTO, please `file a new issue and mention @ne9z -`__. - -Contributing -~~~~~~~~~~~~ - -#. Fork and clone `this repo `__. - -#. Install the tools:: - - sudo pacman -S python-pip - - pip3 install -r docs/requirements.txt - - # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: - PATH=$HOME/.local/bin:$PATH - -#. Make your changes. - -#. Test:: - - cd docs - make html - sensible-browser _build/html/index.html - -#. ``git commit --signoff`` to a branch, ``git push``, and create a pull - request. Mention @rlaager. - -Encryption -~~~~~~~~~~ - -This guide supports optional ZFS native encryption on root pool. - -Unencrypted does not encrypt anything, of course. With no encryption -happening, this option naturally has the best performance. - -ZFS native encryption encrypts the data and most metadata in the root -pool. It does not encrypt dataset or snapshot names or properties. The -boot pool is not encrypted at all, but it only contains the bootloader, -kernel, and initrd. 
(Unless you put a password in ``/etc/fstab``, the -initrd is unlikely to contain sensitive data.) The system cannot boot -without the passphrase being entered at the console. Performance is -good. As the encryption happens in ZFS, even if multiple disks (mirror -or raidz topologies) are used, the data only has to be encrypted once. - -Preinstallation ----------------- -Download Artix Linux live image -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenRC is used throughout this guide. - -Other init systems, runit and s6, are also supported. -Change the service commands to the equivalent commands. - -#. Choose a mirror: - - `Mirrorlist `__ - -#. Download January 2021 build and signature. `File a new issue and mention @ne9z - `__ if it's - no longer available. - - - `ISO (US mirror) `__ - - `Signature `__ - -#. Check live image against signature:: - - gpg --auto-key-retrieve --verify artix-base-openrc-20210101-x86_64.iso.sig - - If the file is authentic, output should be the following:: - - gpg: Signature made Sun 03 Jan 2021 09:30:42 PM UTC - gpg: using RSA key A574A1915CEDE31A3BFF5A68606520ACB886B428 - gpg: Good signature from "Christos Nouskas " [unknown] - ... - Primary key fingerprint: A574 A191 5CED E31A 3BFF 5A68 6065 20AC B886 B428 - - Ensure ``Good signature`` and last 8 digits are ``B886 B428``, - as listed on `Artix Linux Download `__ page. - -#. Write the image to a USB drive or an optical disc. - -#. Boot the target computer from the prepared live medium. - -#. At GRUB menu, select "From ISO: artix x86_64". - -Prepare the Live Environment -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -#. Connect to the internet. - If the target computer aquires IP address with DHCP, - no further steps need to be taken. - Otherwise, refer to - `Network Configuration `__ - wiki page. - -#. Become root:: - - sudo -i - -#. Start SSH server. 
- - - Interactively set root password with:: - - passwd - - - Permit root login with password:: - - echo PermitRootLogin yes >> /etc/ssh/sshd_config - - - Start SSH server:: - - rc-service sshd start - - - Find the IP address of the target computer:: - - ip -4 address show scope global - - - On another computer, connect to the target computer with:: - - ssh root@192.168.1.10 - -#. Enter a bash shell:: - - bash - -#. Import keys of archzfs repository:: - - curl -L https://archzfs.com/archzfs.gpg | pacman-key -a - - curl -L https://git.io/JtQpl | xargs -i{} pacman-key --lsign-key {} - -#. Add archzfs repository:: - - tee -a /etc/pacman.conf <<- 'EOF' - - [archzfs] - Include = /etc/pacman.d/mirrorlist-archzfs - EOF - - curl -L https://git.io/JtQp4 > /etc/pacman.d/mirrorlist-archzfs - -#. Select mirror: - - - Edit the following files:: - - nano /etc/pacman.d/mirrorlist - nano /etc/pacman.d/mirrorlist-arch - - Uncomment and move mirrors to - the beginning of the file. - - - Update database:: - - pacman -Sy - -#. Install ZFS and tools in the live environment:: - - pacman -Sy --noconfirm --needed gdisk dosfstools zfs-dkms glibc - -#. Load kernel module:: - - modprobe zfs - -Installation Variables -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In this part, we will set some variables to configure the system. - -#. Timezone - - List the available timezones with:: - - ls /usr/share/zoneinfo/ - - Store the target timezone in a variable:: - - INST_TZ=/usr/share/zoneinfo/Asia/Irkutsk - -#. Host name - - Store the host name in a variable:: - - INST_HOST='artixonzfs' - -#. Kernel variant - - Store the kernel variant in a variable. - Available variants in official repo are: - - - linux - - linux-lts - - linux-zen - - :: - - INST_LINVAR='linux' - -#. Target disk - - List the available disks with:: - - ls -d /dev/disk/by-id/* | grep -v part - - If the disk is connected with VirtIO, use ``/dev/vd*``. 
- And replace ``${DISK}-part`` in this guide with ``${DISK}`` - - Store the target disk in a variable:: - - DISK=/dev/disk/by-id/nvme-foo_NVMe_bar_512GB - - For multi-disk setups, repeat the formatting and - partitioning commands for other disks. - -System Installation -------------------- - -Format and Partition the Target Disks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -#. Clear the partition table:: - - sgdisk --zap-all $DISK - -#. Create EFI system partition (for use now or in the future):: - - sgdisk -n1:1M:+1G -t1:EF00 $DISK - -#. Create BIOS boot partition:: - - sgdisk -a1 -n5:24K:+1000K -t5:EF02 $DISK - -#. Create boot pool partition:: - - sgdisk -n2:0:+4G -t2:BE00 $DISK - -#. Create root pool partition: - - - If you don't need a separate swap partition:: - - sgdisk -n3:0:0 -t3:BF00 $DISK - - - If a separate swap partition is needed:: - - sgdisk -n3:0:-8G -t3:BF00 $DISK - sgdisk -n4:0:0 -t4:8308 $DISK - - Adjust the swap partition size to your needs. - -#. Repeat the above steps for other target disks, if any. - -Create Root and Boot Pools -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -#. For multi-disk setup - - If you want to create a multi-disk pool, replace ``${DISK}-partX`` - with the topology and the disk path. - - For example, change:: - - zpool create \ - ... \ - ${DISK}-part2 - - to:: - - zpool create \ - ... \ - mirror \ - /dev/disk/by-id/ata-disk1-part2 \ - /dev/disk/by-id/ata-disk2-part2 - - if needed, replace ``mirror`` with ``raidz1``, ``raidz2`` or ``raidz3``. - -#. 
Create boot pool:: - - zpool create \ - -o ashift=12 \ - -o autotrim=on \ - -d -o feature@async_destroy=enabled \ - -o feature@bookmarks=enabled \ - -o feature@embedded_data=enabled \ - -o feature@empty_bpobj=enabled \ - -o feature@enabled_txg=enabled \ - -o feature@extensible_dataset=enabled \ - -o feature@filesystem_limits=enabled \ - -o feature@hole_birth=enabled \ - -o feature@large_blocks=enabled \ - -o feature@lz4_compress=enabled \ - -o feature@spacemap_histogram=enabled \ - -O acltype=posixacl \ - -O canmount=off \ - -O compression=lz4 \ - -O devices=off \ - -O normalization=formD \ - -O relatime=on \ - -O xattr=sa \ - -O mountpoint=/boot \ - -R /mnt \ - bpool \ - ${DISK}-part2 - - You should not need to customize any of the options for the boot pool. - - GRUB does not support all of the zpool features. See ``spa_feature_names`` - in `grub-core/fs/zfs/zfs.c - `__. - This step creates a separate boot pool for ``/boot`` with the features - limited to only those that GRUB supports, allowing the root pool to use - any/all features. Note that GRUB opens the pool read-only, so all - read-only compatible features are “supported” by GRUB. - - **Feature Notes:** - - - The ``allocation_classes`` feature should be safe to use. However, unless - one is using it (i.e. a ``special`` vdev), there is no point to enabling - it. It is extremely unlikely that someone would use this feature for a - boot pool. If one cares about speeding up the boot pool, it would make - more sense to put the whole pool on the faster disk rather than using it - as a ``special`` vdev. - - The ``project_quota`` feature has been tested and is safe to use. This - feature is extremely unlikely to matter for the boot pool. - - The ``resilver_defer`` should be safe but the boot pool is small enough - that it is unlikely to be necessary. - - The ``spacemap_v2`` feature has been tested and is safe to use. The boot - pool is small, so this does not matter in practice. 
- - As a read-only compatible feature, the ``userobj_accounting`` feature - should be compatible in theory, but in practice, GRUB can fail with an - “invalid dnode type” error. This feature does not matter for ``/boot`` - anyway. - -#. Create root pool:: - - zpool create \ - -o ashift=12 \ - -o autotrim=on \ - -R /mnt \ - -O acltype=posixacl \ - -O canmount=off \ - -O compression=zstd \ - -O dnodesize=auto \ - -O normalization=formD \ - -O relatime=on \ - -O xattr=sa \ - -O mountpoint=/ \ - rpool \ - ${DISK}-part3 - - **Notes:** - - - The use of ``ashift=12`` is recommended here because many drives - today have 4 KiB (or larger) physical sectors, even though they - present 512 B logical sectors. Also, a future replacement drive may - have 4 KiB physical sectors (in which case ``ashift=12`` is desirable) - or 4 KiB logical sectors (in which case ``ashift=12`` is required). - - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you - do not want this, remove that option, but later add - ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create`` - for ``/var/log``, as `journald requires ACLs - `__ - - Setting ``normalization=formD`` eliminates some corner cases relating - to UTF-8 filename normalization. It also implies ``utf8only=on``, - which means that only UTF-8 filenames are allowed. If you care to - support non-UTF-8 filenames, do not use this option. For a discussion - of why requiring UTF-8 filenames may be a bad idea, see `The problems - with enforced UTF-8 only filenames - `__. - - ``recordsize`` is unset (leaving it at the default of 128 KiB). If you - want to tune it (e.g. ``-o recordsize=1M``), see `these - `__ `various - `__ `blog - `__ - `posts - `__. - - Setting ``relatime=on`` is a middle ground between classic POSIX - ``atime`` behavior (with its significant performance impact) and - ``atime=off`` (which provides the best performance by completely - disabling atime updates). 
Since Linux 2.6.30, ``relatime`` has been - the default for other filesystems. See `RedHat’s documentation - `__ - for further information. - - Setting ``xattr=sa`` `vastly improves the performance of extended - attributes - `__. - Inside ZFS, extended attributes are used to implement POSIX ACLs. - Extended attributes can also be used by user-space applications. - `They are used by some desktop GUI applications. - `__ - `They can be used by Samba to store Windows ACLs and DOS attributes; - they are required for a Samba Active Directory domain controller. - `__ - Note that ``xattr=sa`` is `Linux-specific - `__. If you move your - ``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux, - extended attributes will not be readable (though your data will be). If - portability of extended attributes is important to you, omit the - ``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole - pool, it is probably fine to use it for ``/var/log``. - - Make sure to include the ``-part3`` portion of the drive path. If you - forget that, you are specifying the whole disk, which ZFS will then - re-partition, and you will lose the bootloader partition(s). - -Create Datasets -~~~~~~~~~~~~~~~~~~~~~~ -#. Create system boot container:: - - zfs create \ - -o canmount=off \ - -o mountpoint=/boot \ - bpool/sys - -#. Create system root container: - - Dataset encryption is set at creation and can not be altered later, - but encrypted dataset can be created inside an unencrypted parent dataset. - - - Unencrypted:: - - zfs create \ - -o canmount=off \ - -o mountpoint=/ \ - rpool/sys - - - Encrypted: - - #. Choose a strong password. - - Once the password is compromised, - dataset and pool must be destroyed, - disk wiped and system rebuilt from scratch to protect confidentiality. - `Merely changing password is not enough `__. 
- - Example: generate passphrase with `xkcdpass `_:: - - pacman -S --noconfirm xkcdpass - xkcdpass -Vn 10 -w /usr/lib/python*/site-packages/xkcdpass/static/eff-long - - Root pool password can be supplied with SSH at boot time if boot pool is not encrypted, - see `Supply password with SSH <#supply-password-with-ssh>`__. - - #. Encrypt boot pool. - - For mobile devices, it is strongly recommended to encrypt boot pool and enable Secure Boot - immediately after reboot to prevent attacks to initramfs. To quote - `cryptsetup faq `__: - - An attacker that wants to compromise your system will just - compromise the initrd or the kernel itself. - - This HOWTO has not been ported to Artix. - Refer to Arch guide for details. - - #. Create dataset:: - - zfs create \ - -o canmount=off \ - -o mountpoint=/ \ - -o encryption=on \ - -o keylocation=prompt \ - -o keyformat=passphrase \ - rpool/sys - -#. Create container datasets:: - - zfs create -o canmount=off -o mountpoint=none bpool/sys/BOOT - zfs create -o canmount=off -o mountpoint=none rpool/sys/ROOT - zfs create -o canmount=off -o mountpoint=none rpool/sys/DATA - -#. Create root and boot filesystem datasets:: - - zfs create -o mountpoint=legacy -o canmount=noauto bpool/sys/BOOT/default - zfs create -o mountpoint=/ -o canmount=noauto rpool/sys/ROOT/default - -#. Mount root and boot filesystem datasets:: - - zfs mount rpool/sys/ROOT/default - mkdir /mnt/boot - mount -t zfs bpool/sys/BOOT/default /mnt/boot - -#. Create datasets to separate user data from root filesystem:: - - zfs create -o mountpoint=/ -o canmount=off rpool/sys/DATA/default - - for i in {usr,var,var/lib}; - do - zfs create -o canmount=off rpool/sys/DATA/default/$i - done - - for i in {home,root,srv,usr/local,var/log,var/spool,var/tmp}; - do - zfs create -o canmount=on rpool/sys/DATA/default/$i - done - - chmod 750 /mnt/root - chmod 1777 /mnt/var/tmp - -#. 
Optional user data datasets: - - If this system will have games installed:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/games - - If you use /var/www on this system:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/www - - If this system will use GNOME:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/AccountsService - - If this system will use Docker (which manages its own datasets & - snapshots):: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/docker - - If this system will use NFS (locking):: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/nfs - - If this system will use Linux Containers:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/lxc - - If this system will use libvirt:: - - zfs create -o canmount=on rpool/sys/DATA/default/var/lib/libvirt - -Format and Mount EFI System Partition -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -:: - - mkfs.vfat -n EFI ${DISK}-part1 - mkdir /mnt/boot/efi - mount -t vfat ${DISK}-part1 /mnt/boot/efi - -If you are using a multi-disk setup, this step will only install -bootloader to the first disk. Other disks will be handled later. - -Package Installation -~~~~~~~~~~~~~~~~~~~~ - -#. Install base packages:: - - basestrap /mnt base vi mandoc grub connman connman-openrc openrc elogind-openrc - -#. 
Install kernel headers and zfs-dkms package: - - Check kernel version:: - - INST_LINVER=$(pacman -Syi ${INST_LINVAR} | grep Version | awk '{ print $3 }') - - Check zfs-dkms package version:: - - DKMS_VER=$(pacman -Si zfs-dkms \ - | grep 'Version' \ - | awk '{ print $3 }' \ - | sed 's|-.*||') - - Visit OpenZFS release page:: - - curl -L https://github.com/openzfs/zfs/raw/zfs-${DKMS_VER}/META \ - | grep Linux - # Linux-Maximum: 5.10 - # Linux-Minimum: 3.10 - # compare with the output of the following command - echo ${INST_LINVER%%-*} - # 5.10.17 # supported - - If the kernel is supported: - - - Install zfs-dkms:: - - basestrap /mnt zfs-dkms ${INST_LINVAR} ${INST_LINVAR}-headers - - If the kernel is not yet supported, install an older kernel: - - - Check build date:: - - DKMS_DATE=$(pacman -Syi zfs-dkms \ - | grep 'Build Date' \ - | sed 's/.*: //' \ - | LC_ALL=C xargs -i{} date -d {} -u +%Y/%m/%d) - - - Check kernel version:: - - INST_LINVER=$(curl https://archive.artixlinux.org/repos/${DKMS_DATE}/system/os/x86_64/ \ - | grep \"${INST_LINVAR}-'[0-9]' \ - | grep -v sig \ - | sed "s|.*$INST_LINVAR-||" \ - | sed "s|-x86_64.*||") - - - Install kernel and headers:: - - basestrap -U /mnt \ - https://archive.artixlinux.org/packages/l/${INST_LINVAR}/${INST_LINVAR}-${INST_LINVER}-x86_64.pkg.tar.zst \ - https://archive.artixlinux.org/packages/l/${INST_LINVAR}-headers/${INST_LINVAR}-headers-${INST_LINVER}-x86_64.pkg.tar.zst - - - Install zfs-dkms:: - - basestrap /mnt zfs-dkms - -#. Hold kernel package from updates:: - - sed -i 's/#IgnorePkg/IgnorePkg/' /mnt/etc/pacman.conf - sed -i "/^IgnorePkg/ s/$/ ${INST_LINVAR} ${INST_LINVAR}-headers/" /mnt/etc/pacman.conf - - Kernel must be manually updated, see kernel update section in Getting Started. - -#. Install firmware:: - - pacstrap /mnt linux-firmware intel-ucode amd-ucode - -#. If you boot your computer with EFI:: - - basestrap /mnt efibootmgr - -#. 
If a swap partition has been created:: - - basestrap /mnt cryptsetup - basestrap /mnt cryptsetup-openrc - -#. For other optional packages, - see `ArchWiki `__. - -System Configuration --------------------- - -#. Generate fstab:: - - echo bpool/sys/BOOT/default /boot zfs rw,xattr,posixacl 0 0 >> /mnt/etc/fstab - echo UUID=$(blkid -s UUID -o value ${DISK}-part1) /boot/efi vfat umask=0022,fmask=0022,dmask=0022 0 1 >> /mnt/etc/fstab - - ``tmpfs`` for ``/tmp`` is recommended:: - - echo "tmpfs /tmp tmpfs nodev,nosuid 0 0" >> /mnt/etc/fstab - - If a swap partition has been created:: - - echo /dev/mapper/crypt-swap none swap defaults 0 0 >> /mnt/etc/fstab - echo swap=crypt-swap >> /mnt/etc/conf.d/dmcrypt - echo source=\'${DISK}-part4\' >> /mnt/etc/conf.d/dmcrypt - -#. Configure mkinitcpio:: - - mv /mnt/etc/mkinitcpio.conf /mnt/etc/mkinitcpio.conf.original - - tee /mnt/etc/mkinitcpio.conf < /mnt/etc/hostname - -#. Timezone:: - - ln -sf $INST_TZ /mnt/etc/localtime - hwclock --systohc - -#. Locale:: - - echo "en_US.UTF-8 UTF-8" >> /mnt/etc/locale.gen - echo "LANG=en_US.UTF-8" >> /mnt/etc/locale.conf - - Other locales should be added after reboot. - -#. Chroot:: - - artix-chroot /mnt /usr/bin/env DISK=$DISK bash --login - -#. If a swap partition has been created, - enable cryptsetup services for crypt-swap:: - - rc-update add device-mapper boot - rc-update add dmcrypt boot - -#. Add and enable ZFS mount service:: - - tee /etc/init.d/zfs-mount << 'EOF' - #!/usr/bin/openrc-run - - start() { - /usr/bin/zfs mount -a - } - EOF - - chmod +x /etc/init.d/zfs-mount - - rc-update add zfs-mount boot - - Other ZFS services, such as ``zed`` - can be ported from ``/usr/lib/systemd/system/zfs*``. - -#. Apply locales:: - - locale-gen - -#. Import keys of archzfs repository:: - - curl -L https://archzfs.com/archzfs.gpg | pacman-key -a - - curl -L https://git.io/JtQpl | xargs -i{} pacman-key --lsign-key {} - -#. 
Add archzfs repository:: - - tee -a /etc/pacman.conf <<- 'EOF' - #[archzfs-testing] - #Include = /etc/pacman.d/mirrorlist-archzfs - [archzfs] - Include = /etc/pacman.d/mirrorlist-archzfs - EOF - - curl -L https://git.io/JtQp4 > /etc/pacman.d/mirrorlist-archzfs - -#. Enable networking:: - - rc-update add connmand default - -#. Generate zpool.cache - - Pools are imported by initramfs with the information stored in ``/etc/zfs/zpool.cache``. - This cache file will be embedded in initramfs. - - :: - - zpool set cachefile=/etc/zfs/zpool.cache rpool - zpool set cachefile=/etc/zfs/zpool.cache bpool - -#. Set root password:: - - passwd - -#. Generate initramfs:: - - mkinitcpio -P - -Bootloader Installation ----------------------------- - -Currently GRUB has multiple compatibility problems with ZFS, -especially with regards to newer ZFS features. -Workarounds have to be applied. - -grub-probe fails to get canonical path -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -When persistent device names ``/dev/disk/by-id/*`` are used -with ZFS, GRUB will fail to resolve the path of the boot pool -device. Error:: - - # /usr/bin/grub-probe: error: failed to get canonical path of `/dev/virtio-pci-0000:06:00.0-part3'. - -Solution:: - - echo 'export ZPOOL_VDEV_NAME_PATH=YES' >> /etc/profile - source /etc/profile - -Pool name missing -~~~~~~~~~~~~~~~~~ -See `this bug report `__. -Root pool name is missing from ``root=ZFS=rpool/ROOT/default`` -in generated ``grub.cfg`` file. - -A workaround is to replace the pool name detection with ``zdb`` -command:: - - sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" /etc/grub.d/10_linux - -If you forgot to apply this workaround, or GRUB package has been upgraded, -initramfs will fail to find root filesystem on reboot, ending in kernel panic. -Don't panic! See `here <#find-root-pool-name-in-grub>`__. 
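The sed workaround above can be tried on a scratch copy first, so its effect on ``/etc/grub.d/10_linux`` can be inspected before the real file is touched. A minimal sketch, assuming GNU sed; the sample ``rpool=`` line is a hypothetical stand-in for the detection line, and only the ``sed`` invocation is taken from this guide:

```shell
# Dry-run the pool-name workaround on a scratch file instead of the
# real /etc/grub.d/10_linux. The rpool= line below is a hypothetical
# stand-in for the pool name detection line.
tmp=$(mktemp)
echo 'rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label`' > "$tmp"
sed -i "s|rpool=.*|rpool=\`zdb -l \${GRUB_DEVICE} \| grep -E '[[:blank:]]name' \| cut -d\\\' -f 2\`|" "$tmp"
cat "$tmp"   # the detection line should now call zdb
```

If the printed line now derives the pool name via ``zdb``, the same ``sed`` can be applied to the real file.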
- -GRUB Installation -~~~~~~~~~~~~~~~~~ - -- If you use EFI:: - - grub-install - - This will only install boot loader to $DISK. - If you use multi-disk setup, other disks are - dealt with later. - - Some motherboards does not properly recognize GRUB - boot entry, to ensure that your computer will - boot, also install GRUB to fallback location with:: - - grub-install --removable - -- If you use BIOS booting:: - - grub-install $DISK - - If this is a multi-disk setup, - install to other disks as well:: - - for i in {target_disk2,target_disk3}; do - grub-install /dev/disk/by-id/$i - done - -Generate GRUB Boot Menu -~~~~~~~~~~~~~~~~~~~~~~~ - -:: - - grub-mkconfig -o /boot/grub/grub.cfg - -Optional Configuration ----------------------- - -Supply password with SSH -~~~~~~~~~~~~~~~~~~~~~~~~ - -Optional: - -#. Install mkinitcpio tools:: - - pacman -S mkinitcpio-netconf mkinitcpio-dropbear openssh - -#. Store authorized keys in ``/etc/dropbear/root_key``:: - - vi /etc/dropbear/root_key - - Note that dropbear only supports RSA keys. - -#. Edit mkinitcpio:: - - tee /etc/mkinitcpio.conf <<- 'EOF' - HOOKS=(base udev autodetect modconf block keyboard netconf dropbear zfsencryptssh zfs filesystems) - EOF - -#. Add ``ip=`` to kernel command line:: - - # example DHCP - echo 'GRUB_CMDLINE_LINUX="ip=::::::dhcp"' >> /etc/default/grub - - Details for ``ip=`` can be found at - `here `__. - -#. Generate host keys:: - - ssh-keygen -Am pem - -#. Regenerate initramfs:: - - mkinitcpio -P - -#. Update GRUB menu:: - - grub-mkconfig -o /boot/grub/grub.cfg - -Finish Installation -------------------- - -#. Exit chroot:: - - exit - -#. Take a snapshot of the clean installation for future use:: - - zfs snapshot -r rpool/sys/ROOT/default@install - zfs snapshot -r bpool/sys/BOOT/default@install - -#. Unmount EFI system partition:: - - umount /mnt/boot/efi - -#. Export pools:: - - zpool export bpool - zpool export rpool - - They must be exported, or else they will fail to be imported on reboot. 
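For multi-disk setups, the brace lists such as ``{target_disk2,target_disk3}`` repeated in the loops above can instead be kept in a single bash array, so the disk list is written once and reused. A sketch with placeholder disk names (bash is assumed; swap the ``echo`` for the real per-disk command, e.g. ``grub-install``):

```shell
# Keep the member disks in one bash array instead of retyping the
# brace list in every loop. Disk names here are placeholders.
DISKS=(ata-disk1 ata-disk2 ata-disk3)
for i in "${DISKS[@]}"; do
    # replace echo with the real per-disk command, e.g. grub-install
    echo "grub-install /dev/disk/by-id/$i"
done
```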
-
-After Reboot
--------------
-Mirror EFI System Partition
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. Check disk name::
-
-     ls -1 /dev/disk/by-id/ | grep -v '\-part[0-9]'
-
-#. Mirror EFI ssystem partition::
-
-     for i in {target_disk2,target_disk3}; do
-      mkfs.vfat /dev/disk/by-id/$i-part1
-      mkdir -p /boot/efis/$i
-      echo UUID=$(blkid -s UUID -o value /dev/disk/by-id/$i-part1) /boot/efis/$i vfat \
-      umask=0022,fmask=0022,dmask=0022 0 1 >> /etc/fstab
-      mount /boot/efis/$i
-      cp -r /boot/efi/EFI/ /boot/efis/$i
-      efibootmgr -cgp 1 -l "\EFI\artix\grubx64.efi" \
-      -L "artix-$i" -d /dev/disk/by-id/$i-part1
-     done
-
-#. Enable cron and set up cron job to sync EFI system partition contents::
-
-     rc-update add cronie default
-     crontab -u root -e
-     # @hourly /usr/bin/bash -c 'for i in /boot/efis/*; do /usr/bin/cp -r /boot/efi/EFI/ $i/; done'
-
-   Alternatively, monitor ``/boot/efi/EFI/artix`` with ``inotifywait``.
-
-#. If EFI system partition failed, promote one backup
-   to ``/boot/efi`` by editing ``/etc/fstab``.
-
-Mirror BIOS boot sector
-~~~~~~~~~~~~~~~~~~~~~~~
-
-This need to be manually applied when GRUB is updated.
-
-#. Check disk name::
-
-     ls -1 /dev/disk/by-id/ | grep -v '\-part[0-9]'
-
-#. Install GRUB to every disk::
-
-     for i in {target_disk2,target_disk3}; do
-      grub-install /dev/disk/by-id/$i
-     done
-
-Boot Environment Manager
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-Optional: install
-`rozb3-pac `__
-pacman hook and
-`bieaz `__
-from AUR to create boot environments.
-
-Prebuilt packages are also available
-in the links above.
-
-Post installation
-~~~~~~~~~~~~~~~~~
-For post installation recommendations,
-see `ArchWiki `__.
-
-Remember to create separate datasets for individual users.
-
-Recovery
---------
-
-Find root pool name in GRUB
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. At GRUB menu countdown, press ``c`` to enter commandline.
-
-#. Find current GRUB root::
-
-     grub > set
-     # unencrypted bpool
-     # root=hd0,gpt2
-     # encrypted bpool
-     # root=cryptouuid/UUID
-
-#. Find boot pool name::
-
-     # unencrypted bpool
-     grub > ls (hd0,gpt2)
-     # encrypted bpool
-     grub > ls (crypto0)
-     # Device hd0,gpt2: Filesystem type zfs - Label `bpool_$myUUID' ...
-
-#. Press Esc to go back to GRUB menu.
-
-#. With menu entry "Arch Linux" selected, press ``e``.
-
-#. Find ``linux`` line and add root pool name::
-
-     echo 'Loading Linux linux'
-     # broken
-     linux /sys/BOOT/default@/vmlinuz-linux root=ZFS=/sys/ROOT/default rw
-     # fixed
-     linux /sys/BOOT/default@/vmlinuz-linux root=ZFS=rpool_$myUUID/sys/ROOT/default rw
-
-#. Press Ctrl-x or F10 to boot. Apply the workaround afterwards.
-
-Load grub.cfg in GRUB command line
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. Press ``c`` at GRUB menu.
-
-#. List available disks::
-
-     grub > ls (hd # press tab after 'd'
-     Possible devices are:
-
-     hd0 hd1
-
-#. List available boot environments::
-
-     grub > ls (hd0,gpt2)/sys/BOOT # press tab after 'T'
-     Possible files are:
-
-     @/ default/ pac-multm2/
-
-#. Load grub.cfg::
-
-     grub > configfile (hd0,gpt2)/sys/BOOT/default@/grub/grub.cfg
-
-Rescue in Live Environment
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-#. `Download Artix Linux live image <#download-artix-linux-live-image>`__.
-
-#. `Prepare the Live Environment <#prepare-the-live-environment>`__.
-
-#. Import and unlock root and boot pool::
-
-     zpool import -N -R /mnt rpool
-     zpool import -N -R /mnt bpool
-
-   If using password::
-
-     zfs load-key rpool
-
-#. Find the current boot environment::
-
-     zfs list
-     BE=default
-
-#. Mount root filesystem::
-
-     zfs mount rpool/sys/ROOT/$BE
-
-#. chroot into the system::
-
-     arch-chroot /mnt /bin/bash --login
-     mount /boot
-     mount /boot/efi
-     zfs mount -a
-
-#. Finish rescue::
-
-     exit
-     umount /mnt/boot/efi
-     zpool export bpool
-     zpool export rpool
-     reboot
diff --git a/docs/Getting Started/Arch Linux/index.rst b/docs/Getting Started/Arch Linux/index.rst
index 341c325..90e3de5 100644
--- a/docs/Getting Started/Arch Linux/index.rst
+++ b/docs/Getting Started/Arch Linux/index.rst
@@ -15,6 +15,45 @@ If you need help, reach out to the community using the :ref:`mailing_lists` or I
 related to this HOWTO, please `file a new issue
 and mention @ne9z `__.
 
+Overview
+--------
+
+Due to license incompatibility,
+ZFS support is provided by out-of-tree kernel modules.
+
+Kernel modules are specific to each kernel package: for example,
+a ZFS kernel module built for ``linux-5.11.1.arch1-1`` is incompatible
+with the ``linux-5.11.2.arch1-1`` kernel.
+
+ZFS kernel modules can be obtained by:
+
+- installing ``zfs-linux*``, which contains prebuilt ZFS kernel modules;
+- installing ``zfs-dkms``, which builds ZFS kernel modules on the fly.
+
+``zfs-linux*`` packages are the easiest and
+lowest-risk way to obtain ZFS support.
+However, they hard-depend on a specific kernel
+and will block kernel updates if the corresponding
+``zfs-linux*`` package is not available.
+
+The ``zfs-dkms`` package is the more versatile choice.
+After installation, Dynamic Kernel Module Support
+automatically builds ZFS kernel modules for installed
+kernels and does not interfere with kernel updates.
+However, the modules are somewhat slow to build and,
+more importantly, there is little warning when the
+build fails. Also, as ``zfs-dkms`` does not check the
+kernel version, users must verify compatibility themselves before major kernel updates
+such as ``5.10 -> 5.11``.
+
+``zfs-linux*`` is recommended for users who run stock kernels
+from the official Arch Linux repo and can accept kernel update delays.
+Such delays should be no more than a few days.
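The kernel-version check that ``zfs-dkms`` leaves to the user can be scripted. The sketch below is an illustration, not part of the guide: the two version strings are hard-coded examples, and on a real system they would come from ``uname -r`` and the ``Linux-Maximum`` field of the OpenZFS ``META`` file (assumed here to live under ``/usr/src/zfs-*/`` for a dkms install).

```shell
# Sketch: decide whether a kernel is within OpenZFS's declared support range.
# Hard-coded example values; on a real system obtain them with:
#   kernel=$(uname -r | cut -d. -f1,2)
#   zfs_max=$(awk '/^Linux-Maximum:/{print $2}' /usr/src/zfs-*/META)
kernel=5.12
zfs_max=5.11
# sort -V orders version strings; if zfs_max is not the newest,
# the kernel exceeds the supported maximum.
newest=$(printf '%s\n%s\n' "$kernel" "$zfs_max" | sort -V | tail -n 1)
if [ "$kernel" = "$zfs_max" ] || [ "$newest" = "$zfs_max" ]; then
  echo "kernel $kernel is supported"
else
  echo "kernel $kernel exceeds supported maximum $zfs_max; hold the update"
fi
```

With the example values above the script reports that the ``5.12`` kernel exceeds the supported ``5.11`` maximum and the update should be held.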
+
+``zfs-dkms`` is required for experienced users who run custom kernels or
+want to follow the latest kernel updates. This package is also required for derivative
+distros such as `Artix Linux `__.
+
 Installation
 ------------
 
@@ -50,24 +89,15 @@ You can use it as follows.
 
    pacman -Sy
 
-testing repo
-^^^^^^^^^^^^
-Testing repo provides newer packages than stable repo,
-but may contain unknown bugs.
-Use at your own risk.
-
-To use testing repo, uncomment lines in
-``/etc/pacman.conf``.
-
-archzfs package
-~~~~~~~~~~~~~~~
+zfs-linux* package
+~~~~~~~~~~~~~~~~~~
 
 When using unmodified Arch Linux kernels,
-prebuilt ``archzfs`` packages are available.
-You can also switch between ``archzfs`` and ``zfs-dkms``
+prebuilt ``zfs-linux*`` packages are available.
+You can also switch between ``zfs-linux*`` and ``zfs-dkms``
 packages later.
 
-For other kernels or Arch-based distros, use `archzfs-dkms package`_.
+For other kernels or Arch-based distros, use the zfs-dkms package.
 
 #. Check kernel variant::
 
@@ -81,21 +111,25 @@ For other kernels or Arch-based distros, use `archzfs-dkms package`_.
 
     if [ ${INST_LINVER} == \
       $(pacman -Si ${INST_LINVAR} | grep Version | awk '{ print $3 }') ]; then
-     pacman -S --noconfirm ${INST_LINVAR}
+     pacman -S --noconfirm --needed ${INST_LINVAR}
     else
-     pacman -U --noconfirm \
+     pacman -U --noconfirm --needed \
      https://archive.archlinux.org/packages/l/${INST_LINVAR}/${INST_LINVAR}-${INST_LINVER}-x86_64.pkg.tar.zst
     fi
 
-#. Install archzfs::
+#. Install zfs-linux*::
 
    pacman -Sy zfs-${INST_LINVAR}
 
-archzfs-dkms package
-~~~~~~~~~~~~~~~~~~~~
+#. Hold kernel package from updates::
 
-This package will dynamically build ZFS modules for
-supported kernels.
+     sed -i 's/#IgnorePkg/IgnorePkg/' /etc/pacman.conf
+     sed -i "/^IgnorePkg/ s/$/ ${INST_LINVAR} ${INST_LINVAR}-headers/" /etc/pacman.conf
+
+   The kernel will be upgraded when an update for ``zfs-linux*`` becomes available.
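The two ``sed`` commands in the hold step can be previewed on a scratch file before touching the real ``/etc/pacman.conf``. This is only an illustrative sketch: it assumes the stock ``pacman.conf`` ships with a commented-out ``#IgnorePkg   =`` line, which is what the first substitution uncomments.

```shell
# Preview the IgnorePkg edit on a temporary copy instead of /etc/pacman.conf.
INST_LINVAR=linux                      # kernel variant, as set earlier in the guide
conf=$(mktemp)
printf '#IgnorePkg   =\n' > "$conf"    # assumed stock line, commented out
# Uncomment the IgnorePkg line, then append the kernel packages to it.
sed -i 's/#IgnorePkg/IgnorePkg/' "$conf"
sed -i "/^IgnorePkg/ s/$/ ${INST_LINVAR} ${INST_LINVAR}-headers/" "$conf"
cat "$conf"                            # IgnorePkg   = linux linux-headers
rm -f "$conf"
```

Once satisfied, run the same two ``sed -i`` commands against ``/etc/pacman.conf`` itself, as shown in the step above.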
+
+zfs-dkms package
+~~~~~~~~~~~~~~~~
 
 Check kernel compatibility
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -201,25 +235,6 @@ Kernel update
 
 Do not update if the kernel is not compatible with OpenZFS.
 
--git packages
-~~~~~~~~~~~~~
-
-Normal packages are built from
-`latest OpenZFS stable release `__
-which may not contain the newest features.
-
-``-git`` packages are directly built from
-`OpenZFS master branch `__,
-which may contain unknown bugs.
-
-To use ``-git`` packages, attach ``-git`` suffix to package names, example::
-
-  # zfs-dkms
-  zfs-dkms-git
-
-  # zfs-${INST_LINVAR}
-  zfs-${INST_LINVAR}-git
-
 Check Live Image Compatibility
 ------------------------------
 #. Choose a mirror::
@@ -240,7 +255,7 @@ Check Live Image Compatibility
 
    https://archive.artixlinux.org/repos/2021/01/01/system/os/x86_64
    # linux-5.10.3.arch1-1-x86_64.pkg.tar.zst
 
-#. Check latest archzfs package version::
+#. Check latest zfs-dkms package version::
 
    https://archzfs.com/archzfs/x86_64/
    # zfs-dkms-2.0.1-1-x86_64.pkg.tar.zst
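When comparing the archived kernel with the newest supported OpenZFS release, the versions have to be read out of the package file names. The snippet below is a sketch of that parsing using the example file names from this section; it relies only on POSIX parameter expansion, so it works in any ``sh``.

```shell
# Extract version strings from the example package file names above.
kernel_pkg=linux-5.10.3.arch1-1-x86_64.pkg.tar.zst
zfs_pkg=zfs-dkms-2.0.1-1-x86_64.pkg.tar.zst

kernel_ver=${kernel_pkg#linux-}        # drop the package-name prefix
kernel_ver=${kernel_ver%-x86_64*}      # drop the arch suffix and extension
zfs_ver=${zfs_pkg#zfs-dkms-}
zfs_ver=${zfs_ver%-x86_64*}

echo "$kernel_ver"   # 5.10.3.arch1-1
echo "$zfs_ver"      # 2.0.1-1
```

The extracted kernel version can then be checked against the kernel range stated in the matching OpenZFS release notes before booting the live image.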