Updates and cleanup for Arch, RHEL, NixOS and Fedora

Signed-off-by: Maurice Zhou <ja@apvc.uk>
This commit is contained in:
Maurice Zhou
2022-07-22 17:14:14 +02:00
committed by George Melikov
parent 5777295f0a
commit 2766cb7197
43 changed files with 937 additions and 4474 deletions

View File

@@ -1,142 +0,0 @@
.. highlight:: sh
Overview
======================
This document describes how to install NixOS with ZFS as root
file system.
Caution
~~~~~~~
- This guide wipes entire physical disks. Back up existing data.
- `GRUB does not and
will not work on 4Kn drives with legacy (BIOS) booting.
<http://savannah.gnu.org/bugs/?46700>`__
Partition layout
~~~~~~~~~~~~~~~~
GUID partition table (GPT) is used.
EFI system partition will be referred to as **ESP** in this document.
+---------------------+-------------+-----------------------+-----------+---------------------+-----------------------+-----------------+
| Name                | legacy boot | ESP                   | Boot pool | swap                | root pool             | remaining space |
+=====================+=============+=======================+===========+=====================+=======================+=================+
| File system         |             | vfat                  | ZFS       | swap                | ZFS                   |                 |
+---------------------+-------------+-----------------------+-----------+---------------------+-----------------------+-----------------+
| Size                | 1M          | 2G                    | 4G        | depends on RAM size |                       |                 |
+---------------------+-------------+-----------------------+-----------+---------------------+-----------------------+-----------------+
| Optional encryption |             | *Secure Boot*         |           | plain dm-crypt      | ZFS native encryption |                 |
+---------------------+-------------+-----------------------+-----------+---------------------+-----------------------+-----------------+
| Partition no.       | 5           | 1                     | 2         | 4                   | 3                     |                 |
+---------------------+-------------+-----------------------+-----------+---------------------+-----------------------+-----------------+
| Mount point         |             | /boot/efis/disk-part1 | /boot     |                     | /                     |                 |
+---------------------+-------------+-----------------------+-----------+---------------------+-----------------------+-----------------+
Dataset layout
~~~~~~~~~~~~~~
The dataset layout used in this guide follows standard
mutable file locations (``/var``, ``/etc``, ...), but can
still be converted to `an immutable root <https://grahamc.com/blog/erase-your-darlings>`__
after installation.
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| Dataset                     | canmount | mountpoint        | container                      | notes                                    |
+=============================+==========+===================+================================+==========================================+
| bpool                       | off      | /boot             | contains sys                   |                                          |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool                       | off      | /                 | contains sys                   |                                          |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| bpool/sys                   | off      | none              | contains BOOT                  |                                          |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool/sys                   | off      | none              | contains ROOT                  | sys is encryptionroot                    |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| bpool/sys/BOOT              | off      | none              | contains boot environments     |                                          |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool/sys/ROOT              | off      | none              | contains boot environments     |                                          |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool/sys/DATA              | off      | none              | contains placeholder "default" |                                          |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool/sys/DATA/default      | off      | /                 | contains user datasets         | child datasets inherit mountpoint        |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool/sys/DATA/local        | off      | /                 | contains /nix datasets         | child datasets inherit mountpoint        |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool/sys/DATA/default/home | on       | /home (inherited) | no                             | user datasets, also called "shared       |
|                             |          |                   |                                | datasets", "persistent datasets"; also   |
|                             |          |                   |                                | include /var/lib, /srv, ...              |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| bpool/sys/BOOT/default      | noauto   | /boot             | no                             | noauto is used to switch BE; because of  |
|                             |          |                   |                                | noauto, must use fstab to mount          |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
| rpool/sys/ROOT/default      | noauto   | /                 | no                             | mounted by initrd zfs hook               |
+-----------------------------+----------+-------------------+--------------------------------+------------------------------------------+
Encryption
~~~~~~~~~~
- Swap
Swap is always encrypted. By default, swap is encrypted
with plain dm-crypt, using a key generated from ``/dev/urandom``
at every boot, so swap content does not persist between reboots.
- Root pool
ZFS native encryption can optionally be enabled for ``rpool/sys``
and its child datasets.
Be aware that ZFS native encryption does not
encrypt some dataset metadata.
ZFS native encryption also does not change the master key when ``zfs change-key`` is invoked;
therefore, if the passphrase is compromised, you should wipe the disk to protect confidentiality.
See `zfs-load-key.8 <https://openzfs.github.io/openzfs-docs/man/8/zfs-load-key.8.html>`__
and `zfs-change-key.8 <https://openzfs.github.io/openzfs-docs/man/8/zfs-change-key.8.html>`__
for more information regarding ZFS native encryption; a short example follows this list.
Encryption is enabled at dataset creation and cannot be disabled later.
- Boot pool
The boot pool cannot be encrypted with ZFS native encryption.
LUKS encryption of the boot pool is covered as an optional step later in this guide.
- Bootloader
The bootloader cannot be encrypted.
However, with Secure Boot, the bootloader
can be verified by the motherboard firmware to be untampered with,
which should be sufficient for most purposes.
Secure Boot is not supported out of the box because of the out-of-tree ZFS kernel module.
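For example, after installation the state of ZFS native encryption can be
inspected, and the wrapping key rotated, roughly as follows (a sketch only;
dataset names follow the simplified layout above, and the master key is
**not** changed by the rotation)::
zfs get -r encryption,keystatus,keyformat rpool/sys  # inspect encryption state
zfs change-key -o keyformat=passphrase rpool/sys     # rotate the wrapping key only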
Booting with disk failure
~~~~~~~~~~~~~~~~~~~~~~~~~
This guide is written with disk failure in mind.
If disks used in the Root on ZFS pools fail, but
sufficient redundancy still exists for both the root pool and the boot pool,
the system will still boot normally.
The swap partition on the failed disk will fail to mount
after a 1m30s timeout.
This behaviour is useful for use cases such
as an unattended remote server; a recovery sketch follows the example below.
Example:
- System has ``n > 1`` disks
- Installed with mirrored setup
- Mirrored setup can tolerate up to ``n - 1`` disk failures
- Disconnect one or more disks, keeping at least
one disk connected
- System still boots, but fails to mount the swap and
EFI partitions on the missing disks
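As a recovery sketch (pool name and device paths below are placeholders), the
degraded pool can be inspected after booting and the failed disk replaced once
a new one is attached::
zpool status -x   # list pools that are not healthy
zpool replace rpool /dev/disk/by-id/old-disk-part3 /dev/disk/by-id/new-disk-part3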

View File

@@ -6,41 +6,26 @@ Preparation
.. contents:: Table of Contents
:local:
#. Download `Minimal ISO image
<https://channels.nixos.org/nixos-22.05/latest-nixos-minimal-x86_64-linux.iso>`__ and boot from it.
#. Disable Secure Boot. ZFS modules cannot be loaded if Secure Boot is enabled.
#. Download `NixOS Live Image
<https://channels.nixos.org/nixos-22.05/latest-nixos-gnome-x86_64-linux.iso>`__ and boot from it.
#. Connect to the Internet.
#. Set root password or ``/root/.ssh/authorized_keys``.
#. Start SSH server::
#. Connect to the network. See the `NixOS manual <https://nixos.org/manual/nixos/stable/index.html#sec-installation-booting>`__.
#. SSH server is enabled by default. To connect, set root password with::
sudo passwd
systemctl restart sshd
#. Connect from another computer::
ssh root@192.168.1.19
#. Unique pool suffix. ZFS expects pool names to be
unique; it is therefore recommended to create
pools with a unique suffix::
INST_UUID=$(dd if=/dev/urandom bs=1 count=100 2>/dev/null | tr -dc 'a-z0-9' | cut -c-6)
#. Identify this installation in ZFS filesystem path::
INST_ID=nixos
#. Root on ZFS configuration file name::
INST_CONFIG_FILE='zfs.nix'
#. Target disk
List available disks with::
ls /dev/disk/by-id/*
If using virtio as disk bus, use ``/dev/disk/by-path/*``.
Declare disk array::
@@ -50,57 +35,8 @@ Preparation
DISK='/dev/disk/by-id/disk1'
#. Choose a primary disk. This disk will be used
for the primary EFI partition, defaulting to the
first disk in the array::
INST_PRIMARY_DISK=$(echo $DISK | cut -f1 -d\ )
#. Set vdev topology, possible values are:
- (not set, single disk or striped; no redundancy)
- mirror
- raidz1
- raidz2
- raidz3
::
INST_VDEV=
This will create a single vdev with the topology of your choice.
It is also possible to manually create a pool with multiple vdevs, such as::
zpool create --options \
poolName \
mirror sda sdb \
raidz2 sdc ... \
raidz3 sde ... \
spare sdf ...
Notice the cost of parity when using RAID-Z (for example, a six-disk raidz2 vdev
spends two disks' worth of capacity on parity). See
`here <https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz>`__
and `here <https://docs.google.com/spreadsheets/d/1tf4qx1aMJp8Lo_R6gpT689wTjHv6CGVElrPqTA0w_ZY/>`__.
For the boot pool, which must be readable by GRUB, a mirrored vdev should always be used for maximum redundancy.
This guide will use a mirrored bpool for multi-disk setups.
Refer to `zpoolconcepts <https://openzfs.github.io/openzfs-docs/man/7/zpoolconcepts.7.html>`__
and `zpool-create <https://openzfs.github.io/openzfs-docs/man/8/zpool-create.8.html>`__
man pages for details.
#. Set partition size:
Set ESP size::
INST_PARTSIZE_ESP=2 # in GB
Set boot pool size. To avoid running out of space while using
boot environments, the minimum is 4GB. Adjust the size if you
intend to use multiple kernel/distros::
INST_PARTSIZE_BPOOL=4
Set swap size. It is `recommended <https://chrisdown.name/2018/01/02/in-defence-of-swap.html>`__
to set up a swap partition. If you intend to use hibernation,
the minimum should be no less than RAM size. Skip this step if swap is not needed; the value below is only an example::
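INST_PARTSIZE_SWAP=8 # example size in GB; use at least RAM size if hibernation is planned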

View File

@@ -1,362 +0,0 @@
.. highlight:: sh
System Configuration
======================
.. contents:: Table of Contents
:local:
#. Optional: wipe solid-state drives with the generic tool
`blkdiscard <https://utcc.utoronto.ca/~cks/space/blog/linux/ErasingSSDsWithBlkdiscard>`__,
to clean previous partition tables and improve performance.
All content will be irrevocably destroyed::
for i in ${DISK}; do
blkdiscard -f $i &
done
wait
This is a quick operation and should complete in under a
minute.
For other device-specific methods, see
`Memory cell clearing <https://wiki.archlinux.org/title/Solid_state_drive/Memory_cell_clearing>`__.
#. Partition the disks.
See `Overview <0-overview.html>`__ for details::
for i in ${DISK}; do
sgdisk --zap-all $i
sgdisk -n1:1M:+${INST_PARTSIZE_ESP}G -t1:EF00 $i
sgdisk -n2:0:+${INST_PARTSIZE_BPOOL}G -t2:BE00 $i
if [ "${INST_PARTSIZE_SWAP}" != "" ]; then
sgdisk -n4:0:+${INST_PARTSIZE_SWAP}G -t4:8200 $i
fi
if [ "${INST_PARTSIZE_RPOOL}" = "" ]; then
sgdisk -n3:0:0 -t3:BF00 $i
else
sgdisk -n3:0:+${INST_PARTSIZE_RPOOL}G -t3:BF00 $i
fi
sgdisk -a1 -n5:24K:+1000K -t5:EF02 $i
done
#. Create boot pool::
disk_num=0; for i in $DISK; do disk_num=$(( $disk_num + 1 )); done
if [ $disk_num -gt 1 ]; then INST_VDEV_BPOOL=mirror; fi
zpool create \
-o compatibility=grub2 \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R /mnt \
bpool_$INST_UUID \
$INST_VDEV_BPOOL \
$(for i in ${DISK}; do
printf "$i-part2 ";
done)
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features.
Features enabled with ``-o compatibility=grub2`` can be seen
`here <https://github.com/openzfs/zfs/blob/master/cmd/zpool/compatibility.d/grub2>`__.
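As an optional sanity check (a sketch, not required for installation), the
compatibility setting and the resulting feature flags of the new boot pool can
be listed with::
zpool get compatibility bpool_$INST_UUID         # should report grub2
zpool get all bpool_$INST_UUID | grep feature@   # show which feature flags are active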
#. Create root pool::
zpool create \
-o ashift=12 \
-o autotrim=on \
-R /mnt \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
rpool_$INST_UUID \
$INST_VDEV \
$(for i in ${DISK}; do
printf "$i-part3 ";
done)
**Notes:**
- The use of ``ashift=12`` is recommended here because many drives
today have 4 KiB (or larger) physical sectors, even though they
present 512 B logical sectors. Also, a future replacement drive may
have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
or 4 KiB logical sectors (in which case ``ashift=12`` is required). A sketch for checking sector sizes follows these notes.
- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
do not want this, remove that option, but later add
``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
for ``/var/log``, as `journald requires ACLs
<https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__
- Setting ``normalization=formD`` eliminates some corner cases relating
to UTF-8 filename normalization. It also implies ``utf8only=on``,
which means that only UTF-8 filenames are allowed. If you care to
support non-UTF-8 filenames, do not use this option. For a discussion
of why requiring UTF-8 filenames may be a bad idea, see `The problems
with enforced UTF-8 only filenames
<http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__.
- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you
want to tune it (e.g. ``-o recordsize=1M``), see `these
<https://jrs-s.net/2019/04/03/on-zfs-recordsize/>`__ `various
<http://blog.programster.org/zfs-record-size>`__ `blog
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFileRecordsizeGrowth>`__
`posts
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSRecordsizeAndCompression>`__.
- Setting ``relatime=on`` is a middle ground between classic POSIX
``atime`` behavior (with its significant performance impact) and
``atime=off`` (which provides the best performance by completely
disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
the default for other filesystems. See `Red Hat's documentation
<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__
for further information.
- Setting ``xattr=sa`` `vastly improves the performance of extended
attributes
<https://github.com/zfsonlinux/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355>`__.
Inside ZFS, extended attributes are used to implement POSIX ACLs.
Extended attributes can also be used by user-space applications.
`They are used by some desktop GUI applications.
<https://en.wikipedia.org/wiki/Extended_file_attributes#Linux>`__
`They can be used by Samba to store Windows ACLs and DOS attributes;
they are required for a Samba Active Directory domain controller.
<https://wiki.samba.org/index.php/Setting_up_a_Share_Using_Windows_ACLs>`__
Note that ``xattr=sa`` is `Linux-specific
<https://openzfs.org/wiki/Platform_code_differences>`__. If you move your
``xattr=sa`` pool to another OpenZFS implementation besides ZFS-on-Linux,
extended attributes will not be readable (though your data will be). If
portability of extended attributes is important to you, omit the
``-O xattr=sa`` above. Even if you do not want ``xattr=sa`` for the whole
pool, it is probably fine to use it for ``/var/log``.
- Make sure to include the ``-part3`` portion of the drive path. If you
forget that, you are specifying the whole disk, which ZFS will then
re-partition, and you will lose the bootloader partition(s).
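As referenced in the ``ashift`` note above, the logical and physical sector
sizes reported by the drives can be checked, for example, with::
lsblk -o NAME,PHY-SEC,LOG-SEC
A value of 4096 in the PHY-SEC column indicates a 4 KiB physical sector drive.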
#. This section implements dataset layout as described in `overview <0-overview.html>`__.
Create root system container:
- Unencrypted::
zfs create \
-o canmount=off \
-o mountpoint=none \
rpool_$INST_UUID/$INST_ID
- Encrypted:
Pick a strong password. Once it is compromised, changing the password will not keep your
data safe. See ``zfs-change-key(8)`` for more info::
zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool_$INST_UUID/$INST_ID
Create other system datasets::
zfs create -o canmount=off -o mountpoint=none bpool_$INST_UUID/$INST_ID
zfs create -o canmount=off -o mountpoint=none bpool_$INST_UUID/$INST_ID/BOOT
zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/$INST_ID/ROOT
zfs create -o canmount=off -o mountpoint=none rpool_$INST_UUID/$INST_ID/DATA
zfs create -o mountpoint=/boot -o canmount=noauto bpool_$INST_UUID/$INST_ID/BOOT/default
zfs create -o mountpoint=/ -o canmount=off rpool_$INST_UUID/$INST_ID/DATA/default
zfs create -o mountpoint=/ -o canmount=off rpool_$INST_UUID/$INST_ID/DATA/local
zfs create -o mountpoint=/ -o canmount=noauto rpool_$INST_UUID/$INST_ID/ROOT/default
zfs mount rpool_$INST_UUID/$INST_ID/ROOT/default
zfs mount bpool_$INST_UUID/$INST_ID/BOOT/default
for i in {usr,var,var/lib};
do
zfs create -o canmount=off rpool_$INST_UUID/$INST_ID/DATA/default/$i
done
for i in {home,root,srv,usr/local,var/log,var/spool};
do
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/$i
done
chmod 750 /mnt/root
for i in nix; do
zfs create -o canmount=on -o mountpoint=/$i rpool_$INST_UUID/$INST_ID/DATA/local/$i
done
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/state
for i in {/etc/nixos,/etc/cryptkey.d}; do
mkdir -p /mnt/state/$i /mnt/$i
mount -o bind /mnt/state/$i /mnt/$i
done
zfs create -o mountpoint=/ -o canmount=noauto rpool_$INST_UUID/$INST_ID/ROOT/empty
zfs snapshot rpool_$INST_UUID/$INST_ID/ROOT/empty@start
#. Format and mount ESP::
for i in ${DISK}; do
mkfs.vfat -n EFI ${i}-part1
mkdir -p /mnt/boot/efis/${i##*/}-part1
mount -t vfat ${i}-part1 /mnt/boot/efis/${i##*/}-part1
done
#. Create optional user data datasets to exclude their data from root file system rollbacks::
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/games
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/www
# for GNOME
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/AccountsService
# for Docker
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/docker
# for NFS
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/nfs
# for LXC
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/lxc
# for LibVirt
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/libvirt
##other application
# zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/$name
Add other datasets when needed, such as PostgreSQL.
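For example, a dataset for PostgreSQL (assuming the default data directory
under ``/var/lib``) could be created in the same way::
zfs create -o canmount=on rpool_$INST_UUID/$INST_ID/DATA/default/var/lib/postgresql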
#. Generate initial NixOS system configuration::
nixos-generate-config --root /mnt
This command will generate two files, ``configuration.nix``
and ``hardware-configuration.nix``, which will be the starting point
for configuring the system. The hardware configuration is renamed to
``hardware-configuration-zfs.nix`` in the next step so that it is not
overwritten by later runs of ``nixos-generate-config``.
#. Edit config file to import ZFS options::
sed -i "s|./hardware-configuration.nix|./hardware-configuration-zfs.nix ./${INST_CONFIG_FILE}|g" /mnt/etc/nixos/configuration.nix
# backup, prevent being overwritten by nixos-generate-config
mv /mnt/etc/nixos/hardware-configuration.nix /mnt/etc/nixos/hardware-configuration-zfs.nix
#. ZFS options::
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
{ config, pkgs, ... }:
{ boot.supportedFilesystems = [ "zfs" ];
networking.hostId = "$(head -c 8 /etc/machine-id)";
boot.zfs.devNodes = "${INST_PRIMARY_DISK%/*}";
EOF
ZFS datasets should be mounted with the ``-o zfsutil`` option::
sed -i 's|fsType = "zfs";|fsType = "zfs"; options = [ "zfsutil" "X-mount.mkdir" ];|g' \
/mnt/etc/nixos/hardware-configuration-zfs.nix
Allow EFI system partition mounting to fail at boot::
sed -i 's|fsType = "vfat";|fsType = "vfat"; options = [ "x-systemd.idle-timeout=1min" "x-systemd.automount" "noauto" ];|g' \
/mnt/etc/nixos/hardware-configuration-zfs.nix
Restrict kernel to versions supported by ZFS::
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;
EOF
Disable the cache; a stale cache file can prevent the system from booting::
mkdir -p /mnt/state/etc/zfs/
rm -f /mnt/state/etc/zfs/zpool.cache
touch /mnt/state/etc/zfs/zpool.cache
chmod a-w /mnt/state/etc/zfs/zpool.cache
chattr +i /mnt/state/etc/zfs/zpool.cache
#. If swap is enabled::
if [ "${INST_PARTSIZE_SWAP}" != "" ]; then
sed -i '/swapDevices/d' /mnt/etc/nixos/hardware-configuration-zfs.nix
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
swapDevices = [
EOF
for i in $DISK; do
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
{ device = "$i-part4"; randomEncryption.enable = true; }
EOF
done
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
];
EOF
fi
#. For immutable root file system, save machine-id and other files::
mkdir -p /mnt/state/etc/{ssh,zfs}
systemd-machine-id-setup --print > /mnt/state/etc/machine-id
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
systemd.services.zfs-mount.enable = false;
environment.etc."machine-id".source = "/state/etc/machine-id";
environment.etc."aliases".source = "/state/etc/aliases";
environment.etc."zfs/zpool.cache".source
= "/state/etc/zfs/zpool.cache";
boot.loader.efi.efiSysMountPoint = "/boot/efis/${INST_PRIMARY_DISK##*/}-part1";
EOF
touch /mnt/state/etc/aliases
#. Configure GRUB boot loader for both legacy boot and UEFI::
sed -i '/boot.loader/d' /mnt/etc/nixos/configuration.nix
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<-'EOF'
boot.loader.efi.canTouchEfiVariables = false;
##if UEFI firmware can detect entries
#boot.loader.efi.canTouchEfiVariables = true;
boot.loader = {
generationsDir.copyKernels = true;
##for problematic UEFI firmware
grub.efiInstallAsRemovable = true;
grub.enable = true;
grub.version = 2;
grub.copyKernels = true;
grub.efiSupport = true;
grub.zfsSupport = true;
# for systemd-autofs
grub.extraPrepareConfig = ''
mkdir -p /boot/efis
for i in /boot/efis/*; do mount $i ; done
'';
grub.extraInstallCommands = ''
export ESP_MIRROR=$(mktemp -d -p /tmp)
EOF
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
cp -r /boot/efis/${INST_PRIMARY_DISK##*/}-part1/EFI \$ESP_MIRROR
EOF
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<-'EOF'
for i in /boot/efis/*; do
cp -r $ESP_MIRROR/EFI $i
done
rm -rf $ESP_MIRROR
'';
grub.devices = [
EOF
for i in $DISK; do
printf " \"$i\"\n" >>/mnt/etc/nixos/${INST_CONFIG_FILE}
done
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
];
};
EOF

View File

@@ -0,0 +1,132 @@
.. highlight:: sh
System Installation
======================
.. contents:: Table of Contents
:local:
#. Partition the disks::
for i in ${DISK}; do
sgdisk --zap-all $i
sgdisk -n1:1M:+1G -t1:EF00 $i
sgdisk -n2:0:+4G -t2:BE00 $i
test -z $INST_PARTSIZE_SWAP || sgdisk -n4:0:+${INST_PARTSIZE_SWAP}G -t4:8200 $i
if test -z $INST_PARTSIZE_RPOOL; then
sgdisk -n3:0:0 -t3:BF00 $i
else
sgdisk -n3:0:+${INST_PARTSIZE_RPOOL}G -t3:BF00 $i
fi
sgdisk -a1 -n5:24K:+1000K -t5:EF02 $i
done
#. Create boot pool::
zpool create \
-o compatibility=grub2 \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R /mnt \
bpool \
mirror \
$(for i in ${DISK}; do
printf "$i-part2 ";
done)
If not using a multi-disk setup, remove ``mirror``.
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features.
Features enabled with ``-o compatibility=grub2`` can be seen
`here <https://github.com/openzfs/zfs/blob/master/cmd/zpool/compatibility.d/grub2>`__.
#. Create root pool::
zpool create \
-o ashift=12 \
-o autotrim=on \
-R /mnt \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
rpool \
mirror \
$(for i in ${DISK}; do
printf "$i-part3 ";
done)
If not using a multi-disk setup, remove ``mirror``.
#. This section implements dataset layout as described in `overview <1-preparation.html>`__.
Create root system container:
- Unencrypted::
zfs create \
-o canmount=off \
-o mountpoint=none \
rpool/nixos
- Encrypted:
Pick a strong password. Once it is compromised, changing the password will not keep your
data safe. See ``zfs-change-key(8)`` for more info::
zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool/nixos
Create system datasets::
zfs create -o canmount=on -o mountpoint=/ rpool/nixos/root
zfs create -o canmount=on -o mountpoint=/home rpool/nixos/home
zfs create -o canmount=off -o mountpoint=/var rpool/nixos/var
zfs create -o canmount=on rpool/nixos/var/lib
zfs create -o canmount=on rpool/nixos/var/log
Create boot dataset::
zfs create -o canmount=on -o mountpoint=/boot bpool/nixos
#. Format and mount ESP::
for i in ${DISK}; do
mkfs.vfat -n EFI ${i}-part1
mkdir -p /mnt/boot/efis/${i##*/}-part1
mount -t vfat ${i}-part1 /mnt/boot/efis/${i##*/}-part1
done
mkdir -p /mnt/boot/efi
mount -t vfat $(echo $DISK | cut -f1 -d\ )-part1 /mnt/boot/efi

View File

@@ -1,235 +0,0 @@
.. highlight:: sh
Optional Configuration
======================
.. contents:: Table of Contents
:local:
Skip to `System Installation <./4-system-installation.html>`__ section if
no optional configuration is needed.
Mail notification for ZFS status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For headless applications such as a NAS, it is useful to set up mail notifications
for hardware changes and to monitor scrub results.
#. Set up an alias for root account::
tee -a /state/etc/aliases <<EOF
root: user@example.com
EOF
#. Set up mail transfer agent, the program that sends email::
programs.msmtp = {
enable = true;
setSendmail = true;
defaults = {
aliases = "/state/etc/aliases";
port = 465;
tls_trust_file = "/etc/ssl/certs/ca-certificates.crt";
tls = "on";
auth = "plain";
tls_starttls = "off";
};
accounts = {
default = {
host = "mail.example.com";
# set secure permissions for password file
passwordeval = "cat /state/etc/emailpass.txt";
user = "user@example.com";
from = "user@example.com";
};
};
};
#. Enable mail notification for ZFS Event Daemon::
services.zfs.zed.settings = {
ZED_EMAIL_ADDR = [ "root" ];
ZED_EMAIL_PROG = "${pkgs.msmtp}/bin/msmtp";
ZED_EMAIL_OPTS = "@ADDRESS@";
ZED_NOTIFY_VERBOSE = true;
};
# this option does not work
services.zfs.zed.enableMail = false;
Supply password with SSH
~~~~~~~~~~~~~~~~~~~~~~~~
Note: if you choose to encrypt the boot pool, with decryption handled
by GRUB as described in the next section, the configuration performed
in this section will have no effect.
This example uses DHCP::
mkdir -p /mnt/state/etc/ssh/
ssh-keygen -t ed25519 -N "" -f /mnt/state/etc/ssh/ssh_host_initrd_ed25519_key
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
#networking.interfaces.enp1s0.useDHCP = true;
boot = {
initrd.network = {
enable = true;
ssh = {
enable = true;
hostKeys = [ /state/etc/ssh/ssh_host_initrd_ed25519_key ];
authorizedKeys = [ "$YOUR_PUBLIC_KEY" ];
};
postCommands = ''
echo "zfs load-key -a; killall zfs" >> /root/.profile
'';
};
};
EOF
Encrypt boot pool
~~~~~~~~~~~~~~~~~~~
Note: this will disable supplying the password over SSH. The passphrase previously set for the
root pool will be replaced by a key file embedded in the initrd.
#. Add package::
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
environment.systemPackages = [ pkgs.cryptsetup ];
EOF
#. LUKS password::
LUKS_PWD=secure-passwd
You will need to enter the same password for
each disk at boot. As the root pool key is
protected by this password, the previous warning
about password strength still applies.
Double-check the password here; a complete reinstallation is
needed if it is entered incorrectly.
#. Create encryption keys::
mkdir -p /mnt/etc/cryptkey.d/
chmod 700 /mnt/etc/cryptkey.d/
dd bs=32 count=1 if=/dev/urandom of=/mnt/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs
dd bs=32 count=1 if=/dev/urandom of=/mnt/etc/cryptkey.d/bpool_$INST_UUID-key-luks
chmod u=r,go= /mnt/etc/cryptkey.d/*
#. Backup boot pool::
zfs snapshot -r bpool_$INST_UUID/$INST_ID@pre-luks
zfs send -Rv bpool_$INST_UUID/$INST_ID@pre-luks > /mnt/root/bpool_$INST_UUID-${INST_ID}-pre-luks
#. Unmount EFI partition::
for i in ${DISK}; do
umount /mnt/boot/efis/${i##*/}-part1
done
#. Destroy boot pool::
zpool destroy bpool_$INST_UUID
#. Create LUKS containers::
for i in ${DISK}; do
cryptsetup luksFormat -q --type luks1 --key-file /mnt/etc/cryptkey.d/bpool_$INST_UUID-key-luks $i-part2
echo $LUKS_PWD | cryptsetup luksAddKey --key-file /mnt/etc/cryptkey.d/bpool_$INST_UUID-key-luks $i-part2
cryptsetup open ${i}-part2 ${i##*/}-part2-luks-bpool_$INST_UUID --key-file /mnt/etc/cryptkey.d/bpool_$INST_UUID-key-luks
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
boot.initrd.luks.devices = {
"${i##*/}-part2-luks-bpool_$INST_UUID" = {
device = "${i}-part2";
allowDiscards = true;
keyFile = "/etc/cryptkey.d/bpool_$INST_UUID-key-luks";
};
};
EOF
done
GRUB 2.06 still does not have complete support for LUKS2, so LUKS1
is used instead.
#. Embed key file in initrd::
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
boot.initrd.secrets = {
"/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs" = "/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs";
"/etc/cryptkey.d/bpool_$INST_UUID-key-luks" = "/etc/cryptkey.d/bpool_$INST_UUID-key-luks";
};
EOF
#. Recreate boot pool with mappers as vdev::
disk_num=0; for i in $DISK; do disk_num=$(( $disk_num + 1 )); done
if [ $disk_num -gt 1 ]; then INST_VDEV_BPOOL=mirror; fi
zpool create \
-d -o feature@async_destroy=enabled \
-o feature@bookmarks=enabled \
-o feature@embedded_data=enabled \
-o feature@empty_bpobj=enabled \
-o feature@enabled_txg=enabled \
-o feature@extensible_dataset=enabled \
-o feature@filesystem_limits=enabled \
-o feature@hole_birth=enabled \
-o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R /mnt \
bpool_$INST_UUID \
$INST_VDEV_BPOOL \
$(for i in ${DISK}; do
printf "/dev/mapper/${i##*/}-part2-luks-bpool_$INST_UUID ";
done)
#. Restore boot pool backup::
zfs recv bpool_${INST_UUID}/${INST_ID} < /mnt/root/bpool_$INST_UUID-${INST_ID}-pre-luks
rm /mnt/root/bpool_$INST_UUID-${INST_ID}-pre-luks
#. Mount boot dataset and EFI partitions::
zfs mount bpool_$INST_UUID/$INST_ID/BOOT/default
for i in ${DISK}; do
mount ${i}-part1 /mnt/boot/efis/${i##*/}-part1
done
#. As keys are stored in initrd,
set secure permissions for ``/boot``::
chmod 700 /mnt/boot
#. Change root pool password to key file::
mkdir -p /etc/cryptkey.d/
cp /mnt/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs /etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs
zfs change-key -l \
-o keylocation=file:///etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs \
-o keyformat=raw \
rpool_$INST_UUID/$INST_ID
#. Enable GRUB cryptodisk::
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
boot.loader.grub.enableCryptodisk = true;
EOF
#. **Important**: Back up root dataset key ``/etc/cryptkey.d/rpool_$INST_UUID-${INST_ID}-key-zfs``
to a secure location.
In the event of LUKS container corruption,
data on the root pool will only be accessible
with this key.

View File

@@ -0,0 +1,102 @@
.. highlight:: sh
System Configuration
======================
.. contents:: Table of Contents
:local:
#. Disable the cache; a stale cache file will prevent the system from booting::
mkdir -p /mnt/state/etc/zfs/
rm -f /mnt/state/etc/zfs/zpool.cache
touch /mnt/state/etc/zfs/zpool.cache
chmod a-w /mnt/state/etc/zfs/zpool.cache
chattr +i /mnt/state/etc/zfs/zpool.cache
#. Generate initial system configuration::
nixos-generate-config --root /mnt
#. Import ZFS-specific configuration::
sed -i "s|./hardware-configuration.nix|./hardware-configuration.nix ./zfs.nix|g" /mnt/etc/nixos/configuration.nix
#. Configure hostid::
tee -a /mnt/etc/nixos/zfs.nix <<EOF
{ config, pkgs, ... }:
{ boot.supportedFilesystems = [ "zfs" ];
networking.hostId = "$(head -c 8 /etc/machine-id)";
boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;
EOF
#. Configure the bootloader for both legacy (BIOS) and UEFI boot, and mirror the bootloader to every disk::
sed -i '/boot.loader/d' /mnt/etc/nixos/configuration.nix
sed -i '/services.xserver/d' /mnt/etc/nixos/configuration.nix
tee -a /mnt/etc/nixos/zfs.nix <<-'EOF'
boot.loader.efi.efiSysMountPoint = "/boot/efi";
boot.loader.efi.canTouchEfiVariables = false;
boot.loader.generationsDir.copyKernels = true;
boot.loader.grub.efiInstallAsRemovable = true;
boot.loader.grub.enable = true;
boot.loader.grub.version = 2;
boot.loader.grub.copyKernels = true;
boot.loader.grub.efiSupport = true;
boot.loader.grub.zfsSupport = true;
boot.loader.grub.extraPrepareConfig = ''
mkdir -p /boot/efis
for i in /boot/efis/*; do mount $i ; done
mkdir -p /boot/efi
mount /boot/efi
'';
boot.loader.grub.extraInstallCommands = ''
ESP_MIRROR=$(mktemp -d)
cp -r /boot/efi/EFI $ESP_MIRROR
for i in /boot/efis/*; do
cp -r $ESP_MIRROR/EFI $i
done
rm -rf $ESP_MIRROR
'';
boot.loader.grub.devices = [
EOF
for i in $DISK; do
printf " \"$i\"\n" >>/mnt/etc/nixos/zfs.nix
done
tee -a /mnt/etc/nixos/zfs.nix <<EOF
];
EOF
#. Mount datasets with zfsutil option::
sed -i 's|fsType = "zfs";|fsType = "zfs"; options = [ "zfsutil" "X-mount.mkdir" ];|g' \
/mnt/etc/nixos/hardware-configuration.nix
#. Set root password::
rootPwd=$(mkpasswd -m SHA-512 -s)
Declare password in configuration::
tee -a /mnt/etc/nixos/zfs.nix <<EOF
users.users.root.initialHashedPassword = "${rootPwd}";
}
EOF
#. Install system and apply configuration::
nixos-install -v --show-trace --no-root-passwd --root /mnt
#. Unmount filesystems::
umount -Rl /mnt
zpool export -a
#. Reboot::
reboot

View File

@@ -1,235 +0,0 @@
.. highlight:: sh
System Installation
======================
.. contents:: Table of Contents
:local:
Additional configuration
~~~~~~~~~~~~~~~~~~~~~~~~~
As NixOS configuration is declarative, post-installation tasks,
such as creating user accounts and selecting packages, can all be done by
specifying them in the configuration; a brief sketch follows. See the `NixOS manual <https://nixos.org/nixos/manual/>`__
for details.
For timezone, hostname, networking, keyboard layout, and so on,
see ``/mnt/etc/nixos/configuration.nix``.
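As an illustration (a sketch only; the user name, package list and timezone
below are placeholders), such settings can be declared in ``configuration.nix``::
users.users.alice = {
  isNormalUser = true;
  extraGroups = [ "wheel" ]; # allow sudo
};
environment.systemPackages = with pkgs; [ vim git ];
time.timeZone = "Europe/Berlin";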
Set root password
-----------------
This optional step is an example
of declaratively configuring the system.
#. Generate password hash::
INST_ROOT_PASSWD=$(mkpasswd -m SHA-512 -s)
#. Declare `initialHashedPassword
<https://nixos.org/manual/nixos/stable/options.html#opt-users.users._name_.initialHashedPassword>`__
for root user::
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
users.users.root.initialHashedPassword = "${INST_ROOT_PASSWD}";
EOF
System installation
~~~~~~~~~~~~~~~~~~~
#. Finalize the config file::
tee -a /mnt/etc/nixos/${INST_CONFIG_FILE} <<EOF
}
EOF
#. Take a snapshot of the clean installation, before any system state
is generated, for future use::
zfs snapshot -r rpool_$INST_UUID/$INST_ID@install_start
zfs snapshot -r bpool_$INST_UUID/$INST_ID@install_start
#. Back up the shell history of the installation so far::
history -w /mnt/home/sys-install-pre-chroot.txt
#. Apply configuration.
If the root password hash has not been set::
nixos-install -v --show-trace --root /mnt
You will be prompted for a new root password.
If password hash has been set::
nixos-install -v --show-trace --no-root-passwd --root /mnt
#. If boot pool encryption is used and installation fails with::
#mktemp: failed to create directory via template
#/mnt/tmp.coRUoqzl1P/initrd-secrets.XXXXXXXXXX: No such file or directory
#failed to create initrd secrets: No such file or directory
This is `a bug <https://github.com/NixOS/nixpkgs/issues/157989>`__.
Complete the installation by executing::
nixos-enter --root /mnt -- nixos-rebuild boot
Finish installation
~~~~~~~~~~~~~~~~~~~~
#. Take a snapshot of the clean installation for future use::
zfs snapshot -r rpool_$INST_UUID/$INST_ID@install
zfs snapshot -r bpool_$INST_UUID/$INST_ID@install
#. Unmount EFI system partition::
umount /mnt/boot/efis/*
#. Export pools::
zpool export bpool_$INST_UUID
zpool export rpool_$INST_UUID
#. Reboot::
reboot
Upgrade NixOS
~~~~~~~~~~~~~
Routine updates within the same major version
---------------------------------------------
Updates within the same major version, for example from one 21.11
point release to a newer one, can be done with one of the following commands::
# take immediate effect
nixos-rebuild --upgrade switch
# update upon reboot
nixos-rebuild --upgrade boot
Upgrade to a newer major version
--------------------------------
Upgrading to a newer major version involves switching software
distribution channel.
#. View existing channels, run as root::
nix-channel --list
#nixos https://nixos.org/channels/nixos-21.11
#this is the major version released around November 2021
#. View available channels::
w3m https://hydra.nixos.org/project/nixos
#. Switch to a newer channel (22.05)::
nix-channel --add https://nixos.org/channels/nixos-22.05 nixos
#. In ``/etc/nixos/configuration.nix``::
system.stateVersion = "22.05";
If using Home Manager, in ``~/.config/nixpkgs/home.nix``::
home.stateVersion = "22.05";
#. Then follow the procedure for routine updates within the same major version.
Immutable root file system
~~~~~~~~~~~~~~~~~~~~~~~~~~
This section is optional.
Often, programs generate mutable files in paths such as
``/etc`` and ``/var/lib``. The generated files can be considered a
part of the system state.
This generated state is not declaratively managed
by NixOS and cannot be reproduced from the NixOS configuration.
To ensure that the system state is fully managed by NixOS and reproducible,
we need to periodically purge the system state and let NixOS
regenerate the root file system from scratch.
Also see: `Erase your darlings:
immutable infrastructure for mutable systems <https://grahamc.com/blog/erase-your-darlings>`__.
Save mutable data to alternative path
-------------------------------------
Before enabling purging of the root dataset, we need to back up
essential mutable data first, such as host SSH keys and network connections.
Below are some tips.
- Some programs support specifying another
location for mutable data, such as
Wireguard::
networking.wireguard.interfaces.wg0.privateKeyFile = "/state/etc/wireguard/wg0";
- For programs without a configurable data path,
`environment.etc <https://nixos.org/manual/nixos/stable/options.html#opt-environment.etc>`__
may be used::
environment.etc = {
"ssh/ssh_host_rsa_key".source = "/state/etc/ssh/ssh_host_rsa_key";
};
- systemd's ``tmpfiles.d`` rules are also an option::
systemd.tmpfiles.rules = [
"L /var/lib/bluetooth - - - - /state/var/lib/bluetooth"
];
- Bind mount::
for i in {/etc/nixos,/etc/cryptkey.d}; do
mkdir -p /state/$i /$i
mount -o bind /state/$i /$i
done
nixos-generate-config --show-hardware-config
Boot from empty root file system
--------------------------------
After backing up mutable data, you can try switching to
an empty dataset as root file system.
#. Check current root file system::
ROOT_FS=$(df --output=source /|tail -n1)
# rpool/ROOT/default
#. Set empty file system as root::
sed -i "s,${ROOT_FS},${ROOT_FS%/*}/empty,g" /etc/nixos/hardware-configuration-zfs.nix
#. Apply changes and reboot::
nixos-rebuild boot
reboot
#. If everything went fine, add the output of the following command to configuration::
ROOT_FS=$(df --output=source /|tail -n1)
cat <<EOF
boot.initrd.postDeviceCommands = ''
zpool import -Nf ${ROOT_FS%%/*}
zfs rollback -r ${ROOT_FS%/*}/empty@start
'';
EOF
#. Apply and reboot::
nixos-rebuild boot
reboot

View File

@@ -1,145 +0,0 @@
.. highlight:: sh
Recovery
======================
.. contents:: Table of Contents
:local:
GRUB Tips
-------------
Boot from GRUB rescue
~~~~~~~~~~~~~~~~~~~~~~~
If the bootloader is damaged, it is still possible
to boot the computer with a GRUB rescue image.
This section also applies if you are at the
``grub rescue>`` prompt.
#. On another computer, generate rescue image with::
pacman -S --needed mtools libisoburn grub
grub-install
grub-mkrescue -o grub-rescue.img
dd if=grub-rescue.img of=/dev/your-usb-stick
Boot the computer from the rescue media.
Both legacy and EFI modes are supported.
Or `download generated GRUB rescue image <https://gitlab.com/m_zhou/bieaz/uploads/e0847a8675cda4317ea7f48abb1d9f10/grub-rescue-2.06.img.7z>`__.
#. List available disks with ``ls`` command::
grub> ls (hd # press tab
Possible devices are:
hd0 hd1 hd2 hd3
#. List partitions by pressing tab key:
.. code-block:: text
grub> ls (hd0 # press tab
Possible partitions are:
Device hd0: No known filesystem detected - Sector size 512B - Total size 20971520KiB
Partition hd0,gpt1: Filesystem type fat - Label `EFI', UUID 0DF5-3A76 - Partition start at 1024KiB - Total size 1048576KiB
Partition hd0,gpt2: No known filesystem detected - Partition start at 1049600KiB - Total size 4194304KiB
- If boot pool is encrypted:
Unlock it with ``cryptomount``::
grub> insmod luks
grub> cryptomount hd0,gpt2
Attempting to decrypt master key...
Enter passphrase for hd0,gpt2 (af5a240e13e24483acf02600d61e0f36):
Slot 1 opened
Unlocked LUKS container is ``(crypto0)``:
.. code-block:: text
grub> ls (crypto0)
Device crypto0: Filesystem type zfs - Label `bpool_ip3tdb' - Last modification
time 2021-05-03 12:14:08 Monday, UUID f14d7bdf89fe21fb - Sector size 512B -
Total size 4192256KiB
- If boot pool is not encrypted:
.. code-block:: text
grub> ls (hd0,gpt2)
Device hd0,gpt2: Filesystem type zfs - Label `bpool_ip3tdb' - Last modification
time 2021-05-03 12:14:08 Monday, UUID f14d7bdf89fe21fb - Sector size 512B -
Total size 4192256KiB
#. List boot environments nested inside ``bpool/$INST_ID/BOOT``::
grub> ls (crypto0)/sys/BOOT
@/ default/ be0/
#. Instruct GRUB to load configuration from ``be0`` boot environment::
grub> prefix=(crypto0)/sys/BOOT/be0/@/grub
grub> configfile $prefix/grub.cfg
#. GRUB menu should now appear.
#. After entering the system, reinstall GRUB.
Switch GRUB prefix when disk fails
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are using a LUKS-encrypted boot pool with multiple disks
and the primary disk fails, GRUB will fail to load its configuration.
If there is still enough redundancy for the boot pool, try to fix
GRUB with the following method:
#. Ensure the ``Slot 1 opened`` message is shown:
.. code-block:: text
Welcome to GRUB!
error: no such cryptodisk found.
Attempting to decrypt master key...
Enter passphrase for hd0,gpt2 (c0987ea1a51049e9b3056622804de62a):
Slot 1 opened
error: disk `cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf' not found.
Entering rescue mode...
grub rescue>
If ``error: access denied.`` is shown,
try re-entering the password with::
grub rescue> cryptomount hd0,gpt2
#. Check prefix::
grub rescue> set
# prefix=(cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf)/sys/BOOT/be0@/grub
# root=cryptouuid/47ed1b7eb0014bc9a70aede3d8714faf
#. Set correct ``prefix`` and ``root`` by replacing
``cryptouuid/UUID`` with ``crypto0``::
grub rescue> prefix=(crypto0)/sys/BOOT/default@/grub
grub rescue> root=crypto0
#. Boot GRUB::
grub rescue> insmod normal
grub rescue> normal
GRUB should then boot normally.
#. After entering the system, edit ``/etc/fstab`` to promote
one of the backup ESPs to ``/boot/efi``.
#. Make the change to ``prefix`` and ``root``
permanent by `reinstalling GRUB <#grub-installation>`__.
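On NixOS, one way to reinstall the bootloader defined by the configuration
(a sketch; it assumes the GRUB settings used earlier in this guide) is::
nixos-rebuild boot --install-bootloader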