Alpine, Arch Linux, Fedora, RHEL, NixOS Root on ZFS guide: add CI/CD tests

Remove unmaintained Arch Linux guides.

Signed-off-by: Maurice Zhou <yuchen@apvc.uk>
Author: Maurice Zhou <yuchen@apvc.uk>
Committed by: George Melikov
Date: 2023-04-05 12:46:27 +02:00
Parent: a67d02b8ac
Commit: 4fb5fb694f
43 changed files with 3655 additions and 2519 deletions


@@ -1,11 +1,503 @@
.. highlight:: sh
.. ifconfig:: zfs_root_test
# For the CI/CD test run of this guide, enable verbose logging of
# the bash shell and fail immediately when a command fails.
set -vxeuf
.. In this document, there are three types of code-block markups:
``::`` are commands intended for both the vm test and the users
``.. ifconfig:: zfs_root_test`` are commands intended only for vm test
``.. code-block:: sh`` are commands intended only for users
NixOS Root on ZFS
=======================================
**Note for arm64**:
Currently there is a bug with the grub installation script. See `here
<https://github.com/NixOS/nixpkgs/issues/222491>`__ for details.
**Note for Immutable Root**:
Immutable root can be enabled or disabled by setting the
``zfs-root.boot.immutable`` option inside the per-host configuration.
**Customization**
Unless stated otherwise, it is not recommended to customize the system
configuration before reboot.
Preparation
---------------------------
#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.
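To verify the current state before proceeding, a minimal check, assuming a UEFI system where systemd's ``bootctl`` is available:
.. code-block:: sh
# should report "Secure Boot: disabled"
bootctl status | grep -i "secure boot"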
#. Download `NixOS Live Image
<https://nixos.org/download.html#nixos-iso>`__ and boot from it.
.. code-block:: sh
sha256sum -c ./nixos-*.sha256
dd if=input-file of=output-file bs=1M
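For example, assuming the image file is named ``nixos-minimal-x86_64-linux.iso`` and the target USB stick is ``/dev/sdX`` (hypothetical names; double-check the device with ``lsblk`` first):
.. code-block:: sh
dd if=nixos-minimal-x86_64-linux.iso of=/dev/sdX bs=1M status=progress
sync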
#. Connect to the Internet.
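For wireless networks, a hypothetical example, assuming the graphical live image where NetworkManager is running (replace the SSID and password with your own):
.. code-block:: sh
nmcli device wifi list
nmcli device wifi connect "MySSID" password "MyPassword"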
#. Set root password or ``/root/.ssh/authorized_keys``.
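For example (the key below is a hypothetical placeholder):
.. code-block:: sh
# either set a password for the live session ...
passwd
# ... or install a public key for key-based login
mkdir -p /root/.ssh
echo "ssh-ed25519 AAAA...your-key... user@host" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys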
#. Start SSH server
.. code-block:: sh
systemctl restart sshd
#. Connect from another computer
.. code-block:: sh
ssh root@192.168.1.91
#. Target disk
List available disks with
.. code-block:: sh
find /dev/disk/by-id/
If virtio is used as the disk bus, power off the VM and set serial numbers for the disks.
For QEMU, use ``-drive format=raw,file=disk2.img,serial=AaBb``.
For libvirt, edit domain XML. See `this page
<https://bugzilla.redhat.com/show_bug.cgi?id=1245013>`__ for examples.
Declare disk array
.. code-block:: sh
DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
For single disk installation, use
.. code-block:: sh
DISK='/dev/disk/by-id/disk1'
.. ifconfig:: zfs_root_test
::
# for github test run, use chroot and loop devices
DISK="$(losetup --all| grep nixos | cut -f1 -d: | xargs -t -I '{}' printf '{} ')"
# if there is no loopdev, then we are using qemu virtualized test
# run, use sata disks instead
if test -z "${DISK}"; then
DISK=$(find /dev/disk/by-id -type l | grep -v DVD-ROM | grep -v -- -part | xargs -t -I '{}' printf '{} ')
fi
#. Set a mount point
::
MNT=$(mktemp -d)
#. Set partition size:
Set the swap size in GiB; set it to 1 if you don't want swap to
take up too much space
.. code-block:: sh
SWAPSIZE=4
.. ifconfig:: zfs_root_test
# For the test run, use 1GB swap space to avoid hitting CI/CD
# quota
SWAPSIZE=1
Set how much space should be left unallocated at the end of the disk, minimum 1 GiB
::
RESERVE=1
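As a rough sanity check before partitioning, you can list the size of each target disk and confirm it comfortably exceeds 5 GiB plus ``SWAPSIZE`` plus ``RESERVE``; a sketch, assuming ``blockdev`` from util-linux is available:
.. code-block:: sh
for i in ${DISK}; do
printf '%s: %s GiB\n' "${i}" "$(( $(blockdev --getsize64 "${i}") / 1024 / 1024 / 1024 ))"
done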
#. Enable Nix Flakes functionality
::
mkdir -p ~/.config/nix
echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
#. Install programs needed for system installation
::
if ! command -v git; then nix-env -f '<nixpkgs>' -iA git; fi
if ! command -v jq; then nix-env -f '<nixpkgs>' -iA jq; fi
if ! command -v partprobe; then nix-env -f '<nixpkgs>' -iA parted; fi
.. ifconfig:: zfs_root_test
::
# install missing packages in chroot
if (echo "${DISK}" | grep "/dev/loop"); then
nix-env -f '<nixpkgs>' -iA nixos-install-tools
fi
System Installation
---------------------------
#. Partition the disks.
Note: you must clear all existing partition tables and data structures from the disks,
especially from disks that carry existing ZFS pools or mdraid metadata and disks that have been used as live media.
Leftover data structures may interfere with the boot process.
For flash-based storage, this can be done by uncommenting the ``blkdiscard`` command below:
::
partition_disk () {
local disk="${1}"
#blkdiscard -f "${disk}"
parted --script --align=optimal "${disk}" -- \
mklabel gpt \
mkpart EFI 2MiB 1GiB \
mkpart bpool 1GiB 5GiB \
mkpart rpool 5GiB -$((SWAPSIZE + RESERVE))GiB \
mkpart swap -$((SWAPSIZE + RESERVE))GiB -"${RESERVE}"GiB \
mkpart BIOS 1MiB 2MiB \
set 1 esp on \
set 5 bios_grub on \
set 5 legacy_boot on
partprobe "${disk}"
udevadm settle
}
for i in ${DISK}; do
partition_disk "${i}"
done
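Optionally verify the resulting layout before continuing, for example:
.. code-block:: sh
for i in ${DISK}; do
parted --script "${i}" print
done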
.. ifconfig:: zfs_root_test
::
# When working with GitHub chroot runners, we are using loop
# devices as installation target. However, the alias support for
# loop device was just introduced in March 2023. See
# https://github.com/systemd/systemd/pull/26693
# For now, we will create the aliases manually as a workaround
looppart="1 2 3 4 5"
for i in ${DISK}; do
for j in ${looppart}; do
if test -e "${i}p${j}"; then
ln -s "${i}p${j}" "${i}-part${j}"
fi
done
done
#. Set up encrypted swap. This is useful if the available memory is
small::
for i in ${DISK}; do
cryptsetup open --type plain --key-file /dev/random "${i}"-part4 "${i##*/}"-part4
mkswap /dev/mapper/"${i##*/}"-part4
swapon /dev/mapper/"${i##*/}"-part4
done
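To confirm that the encrypted swap devices are active:
.. code-block:: sh
swapon --show
free -h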
#. Create boot pool
::
# shellcheck disable=SC2046
zpool create \
-o compatibility=grub2 \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R "${MNT}" \
bpool \
mirror \
$(for i in ${DISK}; do
printf '%s ' "${i}-part2";
done)
If not using a multi-disk setup, remove ``mirror``.
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features.
Features enabled with ``-o compatibility=grub2`` can be seen
`here <https://github.com/openzfs/zfs/blob/master/cmd/zpool/compatibility.d/grub2>`__.
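After creating the pool, you can optionally confirm the compatibility setting and pool health:
.. code-block:: sh
zpool get compatibility bpool
zpool status bpool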
#. Create root pool
::
# shellcheck disable=SC2046
zpool create \
-o ashift=12 \
-o autotrim=on \
-R "${MNT}" \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
rpool \
mirror \
$(for i in ${DISK}; do
printf '%s ' "${i}-part3";
done)
If not using a multi-disk setup, remove ``mirror``.
#. Create root system container:
- Unencrypted
::
zfs create \
-o canmount=off \
-o mountpoint=none \
rpool/nixos
- Encrypted:
Pick a strong password. Once compromised, changing the password will not keep your
data safe. See ``zfs-change-key(8)`` for more info
.. code-block:: sh
zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool/nixos
You can automate this step (insecure) with: ``echo POOLPASS | zfs create ...``.
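A sketch of that automation, using the same options as above (insecure: the passphrase is visible in shell history and the process list):
.. code-block:: sh
echo 'POOLPASS' | zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool/nixos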
Create system datasets and let NixOS declaratively manage the
mountpoints with ``mountpoint=legacy``
::
zfs create -o mountpoint=legacy rpool/nixos/root
mount -t zfs rpool/nixos/root "${MNT}"/
zfs create -o mountpoint=legacy rpool/nixos/home
mkdir "${MNT}"/home
mount -t zfs rpool/nixos/home "${MNT}"/home
zfs create -o mountpoint=legacy rpool/nixos/var
zfs create -o mountpoint=legacy rpool/nixos/var/lib
zfs create -o mountpoint=legacy rpool/nixos/var/log
zfs create -o mountpoint=none bpool/nixos
zfs create -o mountpoint=legacy bpool/nixos/root
mkdir "${MNT}"/boot
mount -t zfs bpool/nixos/root "${MNT}"/boot
mkdir -p "${MNT}"/var/log
mkdir -p "${MNT}"/var/lib
mount -t zfs rpool/nixos/var/lib "${MNT}"/var/lib
mount -t zfs rpool/nixos/var/log "${MNT}"/var/log
zfs create -o mountpoint=legacy rpool/nixos/empty
zfs snapshot rpool/nixos/empty@start
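You can review the resulting dataset layout at this point:
.. code-block:: sh
zfs list -o name,canmount,mountpoint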
#. Format and mount ESP
::
for i in ${DISK}; do
mkfs.vfat -n EFI "${i}"-part1
mkdir -p "${MNT}"/boot/efis/"${i##*/}"-part1
mount -t vfat -o iocharset=iso8859-1 "${i}"-part1 "${MNT}"/boot/efis/"${i##*/}"-part1
done
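To confirm that each ESP is mounted:
.. code-block:: sh
for i in ${DISK}; do
findmnt "${MNT}"/boot/efis/"${i##*/}"-part1
done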
System Configuration
---------------------------
#. Clone template flake configuration
.. code-block:: sh
mkdir -p "${MNT}"/etc
git clone --depth 1 --branch openzfs-guide \
https://github.com/ne9z/dotfiles-flake.git "${MNT}"/etc/nixos
.. ifconfig:: zfs_root_test
::
# Use vm branch of the template config for test run
mkdir -p "${MNT}"/etc
git clone --depth 1 --branch openzfs-guide-testvm \
https://github.com/ne9z/dotfiles-flake.git "${MNT}"/etc/nixos
# for debugging: show template revision
git -C "${MNT}"/etc/nixos log -n1
#. From now on, the complete configuration of the system will be
tracked by git. Set a user name and email address to continue
::
rm -rf "${MNT}"/etc/nixos/.git
git -C "${MNT}"/etc/nixos/ init -b main
git -C "${MNT}"/etc/nixos/ add "${MNT}"/etc/nixos/
git -C "${MNT}"/etc/nixos config user.email "you@example.com"
git -C "${MNT}"/etc/nixos config user.name "Alice Q. Nixer"
git -C "${MNT}"/etc/nixos commit -asm 'initial commit'
#. Customize configuration to your hardware
::
for i in ${DISK}; do
sed -i \
"s|/dev/disk/by-id/|${i%/*}/|" \
"${MNT}"/etc/nixos/hosts/exampleHost/default.nix
break
done
diskNames=""
for i in ${DISK}; do
diskNames="${diskNames} \"${i##*/}\""
done
sed -i "s|\"bootDevices_placeholder\"|${diskNames}|g" \
"${MNT}"/etc/nixos/hosts/exampleHost/default.nix
sed -i "s|\"abcd1234\"|\"$(head -c4 /dev/urandom | od -A none -t x4| sed 's| ||g' || true)\"|g" \
"${MNT}"/etc/nixos/hosts/exampleHost/default.nix
sed -i "s|\"x86_64-linux\"|\"$(uname -m || true)-linux\"|g" \
"${MNT}"/etc/nixos/flake.nix
cp "$(command -v nixos-generate-config || true)" ./nixos-generate-config
chmod a+rw ./nixos-generate-config
# shellcheck disable=SC2016
echo 'print STDOUT $initrdAvailableKernelModules' >> ./nixos-generate-config
kernelModules="$(./nixos-generate-config --show-hardware-config --no-filesystems | tail -n1 || true)"
sed -i "s|\"kernelModules_placeholder\"|${kernelModules}|g" \
"${MNT}"/etc/nixos/hosts/exampleHost/default.nix
.. ifconfig:: zfs_root_test
::
# show generated config
cat "${MNT}"/etc/nixos/hosts/exampleHost/default.nix
#. Set root password
.. code-block:: sh
rootPwd=$(mkpasswd -m SHA-512)
.. ifconfig:: zfs_root_test
::
# Use "test" for root password in test run
rootPwd=$(echo yourpassword | mkpasswd -m SHA-512 -)
Declare password in configuration
::
sed -i \
"s|rootHash_placeholder|${rootPwd}|" \
"${MNT}"/etc/nixos/configuration.nix
#. You can enable NetworkManager for wireless networks and GNOME
desktop environment in ``configuration.nix``.
#. Commit changes to local repo
::
git -C "${MNT}"/etc/nixos commit -asm 'initial installation'
#. Update flake lock file to track latest system version
::
nix flake update --commit-lock-file \
"git+file://${MNT}/etc/nixos"
#. Install system and apply configuration
.. code-block:: sh
nixos-install \
--root "${MNT}" \
--no-root-passwd \
--flake "git+file://${MNT}/etc/nixos#exampleHost"
.. ifconfig:: zfs_root_test
::
if (echo "${DISK}" | grep "/dev/loop"); then
# nixos-install command might fail in a chroot environment
# due to
# https://github.com/NixOS/nixpkgs/issues/220211
# it should be sufficient to test if the configuration builds
nix build "git+file://${MNT}/etc/nixos/#nixosConfigurations.exampleHost.config.system.build.toplevel"
nixos-install \
--root "${MNT}" \
--no-root-passwd \
--flake "git+file://${MNT}/etc/nixos#exampleHost" || true
else
# but with qemu test installation must be fully working
nixos-install \
--root "${MNT}" \
--no-root-passwd \
--flake "git+file://${MNT}/etc/nixos#exampleHost"
fi
.. ifconfig:: zfs_root_test
::
# list contents of boot dir to confirm
# that the mirroring succeeded
find "${MNT}"/boot/efis/ -type d
#. Unmount filesystems
::
umount -Rl "${MNT}"
zpool export -a
#. Reboot
.. code-block:: sh
reboot
.. ifconfig:: zfs_root_test
::
# For qemu test run, power off instead.
# Test run is successful if the vm powers off
if ! (echo "${DISK}" | grep "/dev/loop"); then
poweroff
fi
#. For instructions on maintenance tasks, see `Root on ZFS maintenance
page <../zfs_root_maintenance.html>`__.


@@ -1,60 +0,0 @@
.. highlight:: sh
Preparation
======================
.. contents:: Table of Contents
:local:
**Note for arm64**
Currently there is a bug with the grub installation script. See `here
<https://github.com/NixOS/nixpkgs/issues/222491>`__ for details.
**Note for Immutable Root**
Immutable root can be enabled or disabled by setting
``zfs-root.boot.immutable`` option inside per-host configuration.
#. Disable Secure Boot. ZFS modules can not be loaded if Secure Boot is enabled.
#. Download `NixOS Live Image
<https://nixos.org/download.html#download-nixos>`__ and boot from it.
#. Connect to the Internet.
#. Set root password or ``/root/.ssh/authorized_keys``.
#. Start SSH server::
systemctl restart sshd
#. Connect from another computer::
ssh root@192.168.1.91
#. Target disk
List available disks with::
find /dev/disk/by-id/
If using virtio as disk bus, use ``/dev/disk/by-path/``.
Declare disk array::
DISK='/dev/disk/by-id/ata-FOO /dev/disk/by-id/nvme-BAR'
For single disk installation, use::
DISK='/dev/disk/by-id/disk1'
#. Set partition size:
Set swap size, set to 1 if you don't want swap to
take up too much space::
INST_PARTSIZE_SWAP=4
It is recommended to set this value higher if your computer has
less than 8 GB of memory; otherwise ZFS might fail to build.
Root pool size, use all remaining disk space if not set::
INST_PARTSIZE_RPOOL=


@@ -1,150 +0,0 @@
.. highlight:: sh
System Installation
======================
.. contents:: Table of Contents
:local:
#. Partition the disks::
for i in ${DISK}; do
# wipe flash-based storage device to improve
# performance.
# ALL DATA WILL BE LOST
# blkdiscard -f $i
sgdisk --zap-all $i
sgdisk -n1:1M:+1G -t1:EF00 $i
sgdisk -n2:0:+4G -t2:BE00 $i
sgdisk -n4:0:+${INST_PARTSIZE_SWAP}G -t4:8200 $i
if test -z $INST_PARTSIZE_RPOOL; then
sgdisk -n3:0:0 -t3:BF00 $i
else
sgdisk -n3:0:+${INST_PARTSIZE_RPOOL}G -t3:BF00 $i
fi
sgdisk -a1 -n5:24K:+1000K -t5:EF02 $i
sync && udevadm settle && sleep 3
cryptsetup open --type plain --key-file /dev/random $i-part4 ${i##*/}-part4
mkswap /dev/mapper/${i##*/}-part4
swapon /dev/mapper/${i##*/}-part4
done
#. Create boot pool::
zpool create \
-o compatibility=grub2 \
-o ashift=12 \
-o autotrim=on \
-O acltype=posixacl \
-O canmount=off \
-O compression=lz4 \
-O devices=off \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/boot \
-R /mnt \
bpool \
mirror \
$(for i in ${DISK}; do
printf "$i-part2 ";
done)
If not using a multi-disk setup, remove ``mirror``.
You should not need to customize any of the options for the boot pool.
GRUB does not support all of the zpool features. See ``spa_feature_names``
in `grub-core/fs/zfs/zfs.c
<http://git.savannah.gnu.org/cgit/grub.git/tree/grub-core/fs/zfs/zfs.c#n276>`__.
This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use
any/all features.
Features enabled with ``-o compatibility=grub2`` can be seen
`here <https://github.com/openzfs/zfs/blob/master/cmd/zpool/compatibility.d/grub2>`__.
#. Create root pool::
zpool create \
-o ashift=12 \
-o autotrim=on \
-R /mnt \
-O acltype=posixacl \
-O canmount=off \
-O compression=zstd \
-O dnodesize=auto \
-O normalization=formD \
-O relatime=on \
-O xattr=sa \
-O mountpoint=/ \
rpool \
mirror \
$(for i in ${DISK}; do
printf "$i-part3 ";
done)
If not using a multi-disk setup, remove ``mirror``.
#. Create root system container:
- Unencrypted::
zfs create \
-o canmount=off \
-o mountpoint=none \
rpool/nixos
- Encrypted:
Pick a strong password. Once compromised, changing the password will not keep your
data safe. See ``zfs-change-key(8)`` for more info::
zfs create \
-o canmount=off \
-o mountpoint=none \
-o encryption=on \
-o keylocation=prompt \
-o keyformat=passphrase \
rpool/nixos
You can automate this step (insecure) with: ``echo POOLPASS | zfs create ...``.
Create system datasets, let NixOS declaratively
manage mountpoints with ``mountpoint=legacy``::
zfs create -o mountpoint=legacy rpool/nixos/root
mount -t zfs rpool/nixos/root /mnt/
zfs create -o mountpoint=legacy rpool/nixos/home
mkdir /mnt/home
mount -t zfs rpool/nixos/home /mnt/home
zfs create -o mountpoint=legacy rpool/nixos/var
zfs create -o mountpoint=legacy rpool/nixos/var/lib
zfs create -o mountpoint=legacy rpool/nixos/var/log
zfs create -o mountpoint=none bpool/nixos
zfs create -o mountpoint=legacy bpool/nixos/root
mkdir /mnt/boot
mount -t zfs bpool/nixos/root /mnt/boot
mkdir -p /mnt/var/log
mkdir -p /mnt/var/lib
mount -t zfs rpool/nixos/var/lib /mnt/var/lib
mount -t zfs rpool/nixos/var/log /mnt/var/log
zfs create -o mountpoint=legacy rpool/nixos/empty
zfs snapshot rpool/nixos/empty@start
#. Format and mount ESP::
for i in ${DISK}; do
mkfs.vfat -n EFI ${i}-part1
mkdir -p /mnt/boot/efis/${i##*/}-part1
mount -t vfat ${i}-part1 /mnt/boot/efis/${i##*/}-part1
done


@@ -1,189 +0,0 @@
.. highlight:: sh
System Configuration
======================
.. contents:: Table of Contents
:local:
#. Enter ephemeral nix-shell with git support::
mkdir -p /mnt/etc/
echo DISK=\"$DISK\" > ~/disk
nix-shell -p git
#. Clone template flake configuration::
source ~/disk
git clone https://github.com/ne9z/dotfiles-flake.git /mnt/etc/nixos
git -C /mnt/etc/nixos checkout openzfs-guide
#. Customize configuration to your hardware::
for i in $DISK; do
sed -i \
"s|/dev/disk/by-id/|${i%/*}/|" \
/mnt/etc/nixos/hosts/exampleHost/default.nix
break
done
diskNames=""
for i in $DISK; do
diskNames="$diskNames \"${i##*/}\""
done
sed -i "s|\"bootDevices_placeholder\"|$diskNames|g" \
/mnt/etc/nixos/hosts/exampleHost/default.nix
sed -i "s|\"abcd1234\"|\"$(head -c4 /dev/urandom | od -A none -t x4| sed 's| ||g')\"|g" \
/mnt/etc/nixos/hosts/exampleHost/default.nix
sed -i "s|\"x86_64-linux\"|\"$(uname -m)-linux\"|g" \
/mnt/etc/nixos/flake.nix
#. Set root password::
rootPwd=$(mkpasswd -m SHA-512 -s)
Declare password in configuration::
sed -i \
"s|rootHash_placeholder|${rootPwd}|" \
/mnt/etc/nixos/configuration.nix
#. You can enable NetworkManager for wireless networks and GNOME
desktop environment in ``configuration.nix``.
#. From now on, the complete configuration of the system will be
tracked by git, set a user name and email address to continue::
git -C /mnt/etc/nixos config user.email "you@example.com"
git -C /mnt/etc/nixos config user.name "Alice Q. Nixer"
#. Commit changes to local repo::
git -C /mnt/etc/nixos commit -asm 'initial installation'
#. Update flake lock file to track latest system version::
nix \
--extra-experimental-features 'nix-command flakes' \
flake update --commit-lock-file \
"git+file:///mnt/etc/nixos"
#. Install system and apply configuration::
nixos-install --no-root-passwd --flake "git+file:///mnt/etc/nixos#exampleHost"
#. Exit ephemeral nix shell with git::
exit
#. Unmount filesystems::
umount -Rl /mnt
zpool export -a
#. Reboot::
reboot
Replace a failed disk
=====================
When a disk fails in a mirrored setup, the disk can be
replaced with the following procedure.
#. Shutdown the computer.
#. Replace the failed disk with another disk. The
replacement should be at least the same size or
larger than the failed disk.
#. Boot the computer. When a disk fails, the system will still boot, albeit
several minutes slower than normal. This is because the initrd and
systemd are designed to import a pool in a degraded state only after a
90-second timeout. The swap partition on that disk will also fail.
#. Launch an ephemeral nix shell with gptfdisk::
nix-shell -p gptfdisk
#. Identify the bad disk and a working old disk::
ZPOOL_VDEV_NAME_PATH=1 zpool status
pool: bpool
status: DEGRADED
action: Replace the device using 'zpool replace'.
...
config: bpool
mirror-0
2387489723748 UNAVAIL 0 0 0 was /dev/disk/by-id/ata-BAD-part2
/dev/disk/by-id/ata-OLD-part2 ONLINE 0 0 0
#. Store the bad disk and a working old disk in a variable, omit the partition number ``-partN``::
BAD=/dev/disk/by-id/ata-BAD
OLD=/dev/disk/by-id/ata-OLD
#. Identify the new disk::
find /dev/disk/by-id/
/dev/disk/by-id/ata-OLD-part1
/dev/disk/by-id/ata-OLD-part2
...
/dev/disk/by-id/ata-OLD-part5
/dev/disk/by-id/ata-NEW <-- new disk w/o partition table
#. Store the new disk in a variable::
NEW=/dev/disk/by-id/ata-NEW
#. Replicate partition table on the new disk::
sgdisk -Z $NEW
sgdisk --backup=backup $OLD
sgdisk --load-backup=backup $NEW
sgdisk --randomize-guids $NEW
#. If the new disk is larger than the old disk, expand root pool partition size::
sgdisk --delete=3 $NEW
# expand to all remaining disk space
sgdisk -n3:0:0 -t3:BF00 $NEW
Note that this space will only become available once all disks in the mirrored pool are
replaced with larger disks.
#. Format and mount EFI system partition::
mkfs.vfat -n EFI ${NEW}-part1
mkdir -p /boot/efis/${NEW##*/}-part1
mount -t vfat ${NEW}-part1 /boot/efis/${NEW##*/}-part1
#. Replace failed disk in pool::
zpool offline bpool ${BAD}-part2
zpool offline rpool ${BAD}-part3
zpool replace bpool ${BAD}-part2 ${NEW}-part2
zpool replace rpool ${BAD}-part3 ${NEW}-part3
zpool online bpool ${NEW}-part2
zpool online rpool ${NEW}-part3
Let the new disk resilver. Check status with ``zpool status``.
#. Update NixOS system configuration and commit changes to git repo::
sed -i "s|${BAD##*/}|${NEW##*/}|" /etc/nixos/hosts/exampleHost/default.nix
git -C /etc/nixos commit
#. Apply the updated NixOS system configuration, reinstall bootloader, then reboot::
nixos-rebuild boot --install-bootloader
reboot


@@ -45,8 +45,10 @@ to modprobe until you make these changes and reboot.
tee -a /etc/nixos/zfs.nix <<EOF
{ config, pkgs, ... }:
{ boot.supportedFilesystems = [ "zfs" ];
networking.hostId = (builtins.substring 0 8 (builtins.readFile "/etc/machine-id"));
{
boot.supportedFilesystems = [ "zfs" ];
networking.hostId = "$(head -c4 /dev/urandom | od -A none -t x4 | sed 's| ||g')";
boot.zfs.forceImportRoot = false;
}
EOF
@@ -56,40 +58,8 @@ to modprobe until you make these changes and reboot.
Root on ZFS
-----------
ZFS can be used as root file system for NixOS.
An installation guide is available.
Start from "Preparation".
.. toctree::
:maxdepth: 1
:glob:
Root on ZFS/*
Contribute
----------
#. Fork and clone `this repo <https://github.com/openzfs/openzfs-docs>`__.
#. Launch an ephemeral nix-shell with the following packages::
nix-shell -p python39 python39Packages.pip gnumake \
python39Packages.setuptools
#. Create python virtual environment and install packages::
cd openzfs-docs
python -m venv .venv
source .venv/bin/activate
pip install -r docs/requirements.txt
#. Make your changes.
#. Test::
make html
sensible-browser _build/html/index.html
#. ``git commit --signoff`` to a branch, ``git push``, and create a pull
request. Mention @ne9z.
*