From ae6aa9cb4dfb36d730a055100e2124c9672ae519 Mon Sep 17 00:00:00 2001 From: Richard Laager Date: Fri, 22 May 2020 18:10:51 -0500 Subject: [PATCH] Debian/Ubuntu: Cleanup formatting This cleans up a bunch of formatting from the rst conversion. I didn't make the manual fixes to the 16.04 or Stretch versions, since they aren't really being maintained these days and are just for reference for existing installations. Signed-off-by: Richard Laager --- .../Debian/Debian Buster Root on ZFS.rst | 1079 +++++++---------- .../Debian/Debian Stretch Root on ZFS.rst | 774 ++++++------ .../Ubuntu/Ubuntu 16.04 Root on ZFS.rst | 586 +++++---- .../Ubuntu/Ubuntu 18.04 Root on ZFS.rst | 1030 +++++++--------- 4 files changed, 1593 insertions(+), 1876 deletions(-) diff --git a/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst b/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst index c4cba85..bafcd29 100644 --- a/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst +++ b/docs/Getting Started/Debian/Debian Buster Root on ZFS.rst @@ -1,8 +1,10 @@ +.. highlight:: sh + Debian Buster Root on ZFS ========================= .. contents:: Table of Contents - :local: + :local: Overview -------- @@ -10,21 +12,21 @@ Overview Caution ~~~~~~~ -- This HOWTO uses a whole physical disk. -- Do not use these instructions for dual-booting. -- Backup your data. Any existing data will be lost. +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. System Requirements ~~~~~~~~~~~~~~~~~~~ -- `64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. gnome - iso) `__ -- `A 64-bit kernel is strongly - encouraged. `__ -- Installing on a drive which presents 4KiB logical sectors (a “4Kn” - drive) only works with UEFI booting. This not unique to ZFS. `GRUB - does not and will not work on 4Kn with legacy (BIOS) - booting. `__ +- `64-bit Debian GNU/Linux Buster Live CD w/ GUI (e.g. 
gnome + iso) `__ +- `A 64-bit kernel is strongly + encouraged. `__ +- Installing on a drive which presents 4KiB logical sectors (a “4Kn” + drive) only works with UEFI booting. This not unique to ZFS. `GRUB + does not and will not work on 4Kn with legacy (BIOS) + booting. `__ Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory is recommended for normal performance in basic workloads. If you @@ -45,32 +47,29 @@ mention @rlaager. Contributing ~~~~~~~~~~~~ -1) Fork and clone: https://github.com/openzfs/openzfs-docs +1. Fork and clone: https://github.com/openzfs/openzfs-docs -2) Install the tools: +2. Install the tools:: -:: + # On Debian 11 / Ubuntu 20.04 or later: + sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme - # On Debian 11 / Ubuntu 20.04 or later: - sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme - # On earlier releases: - sudo apt install pip3 - pip3 install -r requirements.txt - # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: - PATH=$HOME/.local/bin:$PATH + # On earlier releases: + sudo apt install pip3 + pip3 install -r requirements.txt + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH -3) Make your changes. +3. Make your changes. -4) Test: +4. Test:: -:: + cd docs + make html + sensible-browser _build/html/index.html - cd docs - make html - sensible-browser _build/html/index.html - -5) ``git commit --signoff`` to a branch, ``git push``, and create a pull request. - Mention @rlaager. +5. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. Encryption ~~~~~~~~~~ @@ -109,55 +108,45 @@ Internet as appropriate (e.g. join your WiFi network). environment: If you have a second system, using SSH to access the target system can -be convenient. 
+be convenient::

-::
-
-   sudo apt update
-   sudo apt install --yes openssh-server
-   sudo systemctl restart ssh
+   sudo apt update
+   sudo apt install --yes openssh-server
+   sudo systemctl restart ssh

 **Hint:** You can find your IP address with
 ``ip addr show scope global | grep inet``. Then, from your main machine,
 connect with ``ssh user@IP``.

-1.3 Become root:
+1.3 Become root::

-::
+   sudo -i

-   sudo -i
+1.4 Set up and update the repositories::

-1.4 Setup and update the repositories:
+   echo deb http://deb.debian.org/debian buster contrib >> /etc/apt/sources.list
+   echo deb http://deb.debian.org/debian buster-backports main contrib >> /etc/apt/sources.list
+   apt update

-::
+1.5 Install ZFS in the Live CD environment::

-   echo deb http://deb.debian.org/debian buster contrib >> /etc/apt/sources.list
-   echo deb http://deb.debian.org/debian buster-backports main contrib >> /etc/apt/sources.list
-   apt update
+   apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
+   apt install --yes -t buster-backports --no-install-recommends zfs-dkms
+   modprobe zfs
+   apt install --yes -t buster-backports zfsutils-linux

-1.5 Install ZFS in the Live CD environment:
-
-::
-
-   apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r)
-   apt install --yes -t buster-backports --no-install-recommends zfs-dkms
-   modprobe zfs
-   apt install --yes -t buster-backports zfsutils-linux
-
-- The dkms dependency is installed manually just so it comes from
-  buster and not buster-backports. This is not critical.
-- We need to get the module built and loaded before installing
-  zfsutils-linux or `zfs-mount.service will fail to
-  start `__.
+- The dkms dependency is installed manually just so it comes from
+  buster and not buster-backports. This is not critical.
+- We need to get the module built and loaded before installing
+  zfsutils-linux or `zfs-mount.service will fail to
+  start `__.
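The bullet about getting the module loaded before ``zfsutils-linux`` can be
sanity-checked. The helper below is illustrative and not part of the HOWTO:
it runs the check against inlined sample data instead of the live
``/proc/modules``, so the paths here are stand-ins.

```shell
# Hypothetical helper (not from the HOWTO): confirm a kernel module
# appears in a /proc/modules-style listing before proceeding.
module_loaded() {
    # $1 = module name, $2 = path to a /proc/modules-style listing
    grep -q "^$1 " "$2"
}

# Sample data standing in for /proc/modules
printf 'zfs 4558848 6 - Live 0x0000000000000000\n' > /tmp/demo-modules

if module_loaded zfs /tmp/demo-modules; then
    echo ok > /tmp/demo-modules-result
else
    echo 'missing; run: modprobe zfs' > /tmp/demo-modules-result
fi
cat /tmp/demo-modules-result
```

On the live CD the real check would simply be
``module_loaded zfs /proc/modules`` after the ``modprobe zfs`` step.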
Step 2: Disk Formatting ----------------------- -2.1 Set a variable with the disk name: +2.1 Set a variable with the disk name:: -:: - - DISK=/dev/disk/by-id/scsi-SATA_disk1 + DISK=/dev/disk/by-id/scsi-SATA_disk1 Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the ``/dev/sd*`` device nodes directly can cause sporadic import failures, @@ -165,89 +154,73 @@ especially on systems that have more than one storage pool. **Hints:** -- ``ls -la /dev/disk/by-id`` will list the aliases. -- Are you doing this in a virtual machine? If your virtual disk is - missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using - KVM with virtio; otherwise, read the - `troubleshooting <#troubleshooting>`__ section. +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting <#troubleshooting>`__ section. 
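The hint shows how to list the aliases; going the other way (you know the
kernel node and want its stable alias) can be scripted by comparing resolved
symlink targets. This sketch is not from the HOWTO and runs against a mock
directory, so it is safe anywhere; the names are illustrative.

```shell
# Hypothetical helper: print the by-id alias that points at a given
# device node, by comparing resolved symlink targets.
find_alias() {
    # $1 = device node, $2 = a /dev/disk/by-id-style directory
    for link in "$2"/*; do
        if [ "$(readlink -f "$link")" = "$(readlink -f "$1")" ]; then
            echo "$link"
        fi
    done
}

# Mock tree standing in for /dev/disk/by-id
mkdir -p /tmp/demo-by-id
: > /tmp/demo-sda
ln -sf /tmp/demo-sda /tmp/demo-by-id/scsi-SATA_disk1
find_alias /tmp/demo-sda /tmp/demo-by-id > /tmp/demo-alias
cat /tmp/demo-alias
```

On a real system you would call ``find_alias /dev/sda /dev/disk/by-id``.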
2.2 If you are re-using a disk, clear it as necessary: -If the disk was previously used in an MD array, zero the superblock: +If the disk was previously used in an MD array, zero the superblock:: -:: + apt install --yes mdadm + mdadm --zero-superblock --force $DISK - apt install --yes mdadm - mdadm --zero-superblock --force $DISK +Clear the partition table:: -Clear the partition table: - -:: - - sgdisk --zap-all $DISK + sgdisk --zap-all $DISK 2.3 Partition your disk(s): -Run this if you need legacy (BIOS) booting: +Run this if you need legacy (BIOS) booting:: -:: + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK - sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK +Run this for UEFI booting (for use now or in the future):: -Run this for UEFI booting (for use now or in the future): + sgdisk -n2:1M:+512M -t2:EF00 $DISK -:: +Run this for the boot pool:: - sgdisk -n2:1M:+512M -t2:EF00 $DISK - -Run this for the boot pool: - -:: - - sgdisk -n3:0:+1G -t3:BF01 $DISK + sgdisk -n3:0:+1G -t3:BF01 $DISK Choose one of the following options: -2.3a Unencrypted or ZFS native encryption: +2.3a Unencrypted or ZFS native encryption:: -:: + sgdisk -n4:0:0 -t4:BF01 $DISK - sgdisk -n4:0:0 -t4:BF01 $DISK +2.3b LUKS:: -2.3b LUKS: - -:: - - sgdisk -n4:0:0 -t4:8300 $DISK + sgdisk -n4:0:0 -t4:8300 $DISK If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool. 
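The "repeat the partitioning commands for all the disks" advice naturally
becomes a loop. In this sketch ``sgdisk`` is shadowed by an echo stub so the
plan can be previewed (and tested) without touching hardware; drop the stub
for a real run, and keep only the partitions your chosen boot and
encryption options actually need.

```shell
# Illustrative preview of partitioning every member of a mirror.
# sgdisk is shadowed by an echo stub here; remove it to run for real.
sgdisk() { echo "sgdisk $*"; }

partition_disk() {
    d=$1
    sgdisk --zap-all "$d"
    sgdisk -a1 -n1:24K:+1000K -t1:EF02 "$d"   # legacy (BIOS) boot
    sgdisk -n2:1M:+512M -t2:EF00 "$d"         # UEFI system partition
    sgdisk -n3:0:+1G -t3:BF01 "$d"            # boot pool
    sgdisk -n4:0:0 -t4:BF01 "$d"              # root pool (option 2.3a)
}

for d in /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2; do
    partition_disk "$d"
done > /tmp/demo-partition-plan
cat /tmp/demo-partition-plan
```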
-2.4 Create the boot pool: +2.4 Create the boot pool:: -:: - - zpool create -o ashift=12 -d \ - -o feature@async_destroy=enabled \ - -o feature@bookmarks=enabled \ - -o feature@embedded_data=enabled \ - -o feature@empty_bpobj=enabled \ - -o feature@enabled_txg=enabled \ - -o feature@extensible_dataset=enabled \ - -o feature@filesystem_limits=enabled \ - -o feature@hole_birth=enabled \ - -o feature@large_blocks=enabled \ - -o feature@lz4_compress=enabled \ - -o feature@spacemap_histogram=enabled \ - -o feature@userobj_accounting=enabled \ - -o feature@zpool_checkpoint=enabled \ - -o feature@spacemap_v2=enabled \ - -o feature@project_quota=enabled \ - -o feature@resilver_defer=enabled \ - -o feature@allocation_classes=enabled \ - -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ - -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt bpool ${DISK}-part3 + zpool create -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@userobj_accounting=enabled \ + -o feature@zpool_checkpoint=enabled \ + -o feature@spacemap_v2=enabled \ + -o feature@project_quota=enabled \ + -o feature@resilver_defer=enabled \ + -o feature@allocation_classes=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ + -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt bpool ${DISK}-part3 You should not need to customize any of the options for the boot pool. @@ -261,128 +234,120 @@ read-only compatible features are "supported" by GRUB. 
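Since only some read-only compatible features are safe for GRUB, it can be
reassuring to see which flags actually ended up active. The filter below is
an illustrative sketch, not a HOWTO step; it parses inlined sample output in
the format of ``zpool get all`` so it runs without a live pool.

```shell
# Illustrative: list the feature flags reported as "active" from
# "zpool get all <pool>"-style output (pool, property, value, source).
list_active_features() {
    awk '$2 ~ /^feature@/ && $3 == "active" { sub(/^feature@/, "", $2); print $2 }'
}

# Inlined sample output standing in for: zpool get all bpool
printf '%s\n' \
    'bpool  feature@async_destroy  enabled  local' \
    'bpool  feature@lz4_compress   active   local' \
    'bpool  feature@hole_birth     active   local' |
    list_active_features > /tmp/demo-features
cat /tmp/demo-features
```

On the real system: ``zpool get all bpool | list_active_features``.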
**Hints:** -- If you are creating a mirror or raidz topology, create the pool using - ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` - (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and - list the partitions from additional disks). -- The pool name is arbitrary. If changed, the new name must be used - consistently. The ``bpool`` convention originated in this HOWTO. +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. 2.5 Create the root pool: Choose one of the following options: -2.5a Unencrypted: +2.5a Unencrypted:: -:: + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool ${DISK}-part4 - zpool create -o ashift=12 \ - -O acltype=posixacl -O canmount=off -O compression=lz4 \ - -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt rpool ${DISK}-part4 +2.5b LUKS:: -2.5b LUKS: + apt install --yes cryptsetup + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 -:: +2.5c ZFS native encryption:: - apt install --yes cryptsetup - cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 - cryptsetup luksOpen ${DISK}-part4 luks1 - zpool create -o ashift=12 
\ - -O acltype=posixacl -O canmount=off -O compression=lz4 \ - -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \ + -O mountpoint=/ -R /mnt rpool ${DISK}-part4 -2.5c ZFS native encryption: - -:: - - zpool create -o ashift=12 \ - -O acltype=posixacl -O canmount=off -O compression=lz4 \ - -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ - -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \ - -O mountpoint=/ -R /mnt rpool ${DISK}-part4 - -- The use of ``ashift=12`` is recommended here because many drives - today have 4KiB (or larger) physical sectors, even though they - present 512B logical sectors. Also, a future replacement drive may - have 4KiB physical sectors (in which case ``ashift=12`` is desirable) - or 4KiB logical sectors (in which case ``ashift=12`` is required). -- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you - do not want this, remove that option, but later add - ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` - for ``/var/log``, as `journald requires - ACLs `__ -- Setting ``normalization=formD`` eliminates some corner cases relating - to UTF-8 filename normalization. It also implies ``utf8only=on``, - which means that only UTF-8 filenames are allowed. If you care to - support non-UTF-8 filenames, do not use this option. For a discussion - of why requiring UTF-8 filenames may be a bad idea, see `The problems - with enforced UTF-8 only - filenames `__. 
-- Setting ``relatime=on`` is a middle ground between classic POSIX - ``atime`` behavior (with its significant performance impact) and - ``atime=off`` (which provides the best performance by completely - disabling atime updates). Since Linux 2.6.30, ``relatime`` has been - the default for other filesystems. See `RedHat's - documentation `__ - for further information. -- Setting ``xattr=sa`` `vastly improves the performance of extended - attributes `__. - Inside ZFS, extended attributes are used to implement POSIX ACLs. - Extended attributes can also be used by user-space applications. - `They are used by some desktop GUI - applications. `__ - `They can be used by Samba to store Windows ACLs and DOS attributes; - they are required for a Samba Active Directory domain - controller. `__ - Note that ```xattr=sa`` is - Linux-specific. `__ - If you move your ``xattr=sa`` pool to another OpenZFS implementation - besides ZFS-on-Linux, extended attributes will not be readable - (though your data will be). If portability of extended attributes is - important to you, omit the ``-O xattr=sa`` above. Even if you do not - want ``xattr=sa`` for the whole pool, it is probably fine to use it - for ``/var/log``. -- Make sure to include the ``-part4`` portion of the drive path. If you - forget that, you are specifying the whole disk, which ZFS will then - re-partition, and you will lose the bootloader partition(s). -- For LUKS, the key size chosen is 512 bits. However, XTS mode requires - two keys, so the LUKS key is split in half. Thus, ``-s 512`` means - AES-256. -- ZFS native encryption uses ``aes-256-ccm`` by default. `AES-GCM seems - to be generally preferred over - AES-CCM `__, - `is faster - now `__, - and `will be even faster in the - future `__. -- Your passphrase will likely be the weakest link. Choose wisely. See - `section 5 of the cryptsetup - FAQ `__ - for guidance. 
+- The use of ``ashift=12`` is recommended here because many drives + today have 4KiB (or larger) physical sectors, even though they + present 512B logical sectors. Also, a future replacement drive may + have 4KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` + for ``/var/log``, as `journald requires + ACLs `__ +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat's + documentation `__ + for further information. +- Setting ``xattr=sa`` `vastly improves the performance of extended + attributes `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI + applications. `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain + controller. `__ + Note that ``xattr=sa`` is + `Linux-specific `__. + If you move your ``xattr=sa`` pool to another OpenZFS implementation + besides ZFS-on-Linux, extended attributes will not be readable + (though your data will be). 
If portability of extended attributes is + important to you, omit the ``-O xattr=sa`` above. Even if you do not + want ``xattr=sa`` for the whole pool, it is probably fine to use it + for ``/var/log``. +- Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 512 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 512`` means + AES-256. +- ZFS native encryption uses ``aes-256-ccm`` by default. `AES-GCM seems + to be generally preferred over + AES-CCM `__, + `is faster + now `__, + and `will be even faster in the + future `__. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. **Hints:** -- If you are creating a mirror or raidz topology, create the pool using - ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` - (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and - list the partitions from additional disks). For LUKS, use - ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will - have to create using ``cryptsetup``. -- The pool name is arbitrary. If changed, the new name must be used - consistently. On systems that can automatically install to ZFS, the - root pool is named ``rpool`` by default. +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). For LUKS, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will + have to create using ``cryptsetup``. +- The pool name is arbitrary. 
If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the + root pool is named ``rpool`` by default. Step 3: System Installation --------------------------- -3.1 Create filesystem datasets to act as containers: +3.1 Create filesystem datasets to act as containers:: -:: - - zfs create -o canmount=off -o mountpoint=none rpool/ROOT - zfs create -o canmount=off -o mountpoint=none bpool/BOOT + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through ``pkg image-update`` or @@ -390,111 +355,83 @@ incremented for major system changes through ``pkg image-update`` or unimplemented. Even without such a tool, it can still be used for manually created clones. -3.2 Create filesystem datasets for the root and boot filesystems: +3.2 Create filesystem datasets for the root and boot filesystems:: -:: + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + zfs mount rpool/ROOT/debian - zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian - zfs mount rpool/ROOT/debian - - zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian - zfs mount bpool/BOOT/debian + zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian + zfs mount bpool/BOOT/debian With ZFS, it is not normally necessary to use a mount command (either ``mount`` or ``zfs mount``). This situation is an exception because of ``canmount=noauto``. 
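The ``canmount=noauto`` exception can be made visible: any dataset listed
with ``noauto`` is skipped by automatic mounting and needs an explicit
``zfs mount``. The helper below is illustrative (not a HOWTO command) and
parses inlined sample output in the shape of
``zfs list -H -o name,canmount``.

```shell
# Illustrative: print datasets that require an explicit "zfs mount"
# because canmount=noauto, from "zfs list -H -o name,canmount" output.
needs_explicit_mount() {
    awk '$2 == "noauto" { print $1 }'
}

# Inlined sample output
printf '%s\n' \
    'rpool/ROOT/debian  noauto' \
    'bpool/BOOT/debian  noauto' \
    'rpool/home         on' |
    needs_explicit_mount > /tmp/demo-noauto
cat /tmp/demo-noauto
```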
-3.3 Create datasets: +3.3 Create datasets:: -:: - - zfs create rpool/home - zfs create -o mountpoint=/root rpool/home/root - zfs create -o canmount=off rpool/var - zfs create -o canmount=off rpool/var/lib - zfs create rpool/var/log - zfs create rpool/var/spool + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool The datasets below are optional, depending on your preferences and/or software choices. -If you wish to exclude these from snapshots: +If you wish to exclude these from snapshots:: -:: + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp - zfs create -o com.sun:auto-snapshot=false rpool/var/cache - zfs create -o com.sun:auto-snapshot=false rpool/var/tmp - chmod 1777 /mnt/var/tmp +If you use /opt on this system:: -If you use /opt on this system: + zfs create rpool/opt -:: +If you use /srv on this system:: - zfs create rpool/opt + zfs create rpool/srv -If you use /srv on this system: +If you use /usr/local on this system:: -:: + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local - zfs create rpool/srv +If this system will have games installed:: -If you use /usr/local on this system: + zfs create rpool/var/games -:: +If this system will store local email in /var/mail:: - zfs create -o canmount=off rpool/usr - zfs create rpool/usr/local + zfs create rpool/var/mail -If this system will have games installed: +If this system will use Snap packages:: -:: + zfs create rpool/var/snap - zfs create rpool/var/games +If you use /var/www on this system:: -If this system will store local email in /var/mail: + zfs create rpool/var/www -:: +If this system will use GNOME:: - zfs create rpool/var/mail - -If this system will use Snap packages: - -:: - - zfs create rpool/var/snap - -If you use /var/www on this system: 
- -:: - - zfs create rpool/var/www - -If this system will use GNOME: - -:: - - zfs create rpool/var/lib/AccountsService + zfs create rpool/var/lib/AccountsService If this system will use Docker (which manages its own datasets & -snapshots): +snapshots):: -:: + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker - zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker +If this system will use NFS (locking):: -If this system will use NFS (locking): - -:: - - zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs A tmpfs is recommended later, but if you want a separate dataset for -/tmp: +/tmp:: -:: - - zfs create -o com.sun:auto-snapshot=false rpool/tmp - chmod 1777 /mnt/tmp + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling @@ -510,12 +447,10 @@ of your root filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want to limit the maximum space used. Otherwise, you can use a tmpfs (RAM filesystem) later. -3.4 Install the minimal system: +3.4 Install the minimal system:: -:: - - debootstrap buster /mnt - zfs set devices=off rpool + debootstrap buster /mnt + zfs set devices=off rpool The ``debootstrap`` command leaves the new system in an unconfigured state. An alternative to using ``debootstrap`` is to copy the entirety @@ -525,35 +460,34 @@ Step 4: System Configuration ---------------------------- 4.1 Configure the hostname (change ``HOSTNAME`` to the desired -hostname). +hostname):: -:: + echo HOSTNAME > /mnt/etc/hostname + vi /mnt/etc/hosts - echo HOSTNAME > /mnt/etc/hostname +.. 
code-block:: text - vi /mnt/etc/hosts - Add a line: - 127.0.1.1 HOSTNAME - or if the system has a real name in DNS: - 127.0.1.1 FQDN HOSTNAME + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME **Hint:** Use ``nano`` if you find ``vi`` confusing. 4.2 Configure the network interface: -Find the interface name: +Find the interface name:: -:: + ip addr show - ip addr show +Adjust NAME below to match your interface name:: -Adjust NAME below to match your interface name: + vi /mnt/etc/network/interfaces.d/NAME -:: +.. code-block:: text - vi /mnt/etc/network/interfaces.d/NAME - auto NAME - iface NAME inet dhcp + auto NAME + iface NAME inet dhcp Customize this file if the system is not a DHCP client. @@ -561,67 +495,70 @@ Customize this file if the system is not a DHCP client. :: - vi /mnt/etc/apt/sources.list - deb http://deb.debian.org/debian buster main contrib - deb-src http://deb.debian.org/debian buster main contrib + vi /mnt/etc/apt/sources.list - vi /mnt/etc/apt/sources.list.d/buster-backports.list - deb http://deb.debian.org/debian buster-backports main contrib - deb-src http://deb.debian.org/debian buster-backports main contrib +.. code-block:: sourceslist - vi /mnt/etc/apt/preferences.d/90_zfs - Package: libnvpair1linux libuutil1linux libzfs2linux libzfslinux-dev libzpool2linux python3-pyzfs pyzfs-doc spl spl-dkms zfs-dkms zfs-dracut zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed - Pin: release n=buster-backports - Pin-Priority: 990 - -4.4 Bind the virtual filesystems from the LiveCD environment to the new -system and ``chroot`` into it: + deb http://deb.debian.org/debian buster main contrib + deb-src http://deb.debian.org/debian buster main contrib :: - mount --rbind /dev /mnt/dev - mount --rbind /proc /mnt/proc - mount --rbind /sys /mnt/sys - chroot /mnt /usr/bin/env DISK=$DISK bash --login + vi /mnt/etc/apt/sources.list.d/buster-backports.list + +.. 
code-block:: sourceslist + + deb http://deb.debian.org/debian buster-backports main contrib + deb-src http://deb.debian.org/debian buster-backports main contrib + +:: + + vi /mnt/etc/apt/preferences.d/90_zfs + +.. code-block:: control + + Package: libnvpair1linux libuutil1linux libzfs2linux libzfslinux-dev libzpool2linux python3-pyzfs pyzfs-doc spl spl-dkms zfs-dkms zfs-dracut zfs-initramfs zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed + Pin: release n=buster-backports + Pin-Priority: 990 + +4.4 Bind the virtual filesystems from the LiveCD environment to the new +system and ``chroot`` into it:: + + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login **Note:** This is using ``--rbind``, not ``--bind``. -4.5 Configure a basic system environment: +4.5 Configure a basic system environment:: -:: + ln -s /proc/self/mounts /etc/mtab + apt update - ln -s /proc/self/mounts /etc/mtab - apt update - - apt install --yes locales - dpkg-reconfigure locales + apt install --yes locales + dpkg-reconfigure locales Even if you prefer a non-English system language, always ensure that -``en_US.UTF-8`` is available. 
+``en_US.UTF-8`` is available::

-::
+   dpkg-reconfigure tzdata

-   dpkg-reconfigure tzdata
+4.6 Install ZFS in the chroot environment for the new system::

-4.6 Install ZFS in the chroot environment for the new system:
+   apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
+   apt install --yes zfs-initramfs

-::
+4.7 For LUKS installs only, set up crypttab::

-   apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64
-   apt install --yes zfs-initramfs
+   apt install --yes cryptsetup

-4.7 For LUKS installs only, setup crypttab:
+   echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
+       luks,discard,initramfs > /etc/crypttab

-::
-
-   apt install --yes cryptsetup
-
-   echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
-       luks,discard,initramfs > /etc/crypttab
-
-- The use of ``initramfs`` is a work-around for `cryptsetup does not
-  support
-  ZFS `__.
+- The use of ``initramfs`` is a work-around for `cryptsetup does not
+  support
+  ZFS `__.

 **Hint:** If you are creating a mirror or raidz topology, repeat the
 ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk.

@@ -630,40 +567,34 @@ Even if you prefer a non-English system language,

 Choose one of the following options:

-4.8a Install GRUB for legacy (BIOS) booting
+4.8a Install GRUB for legacy (BIOS) booting::

-::
-
-   apt install --yes grub-pc
+   apt install --yes grub-pc

 Install GRUB to the disk(s), not the partition(s).
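The "disk(s), not the partition(s)" rule also matters later, when
``grub-install`` runs once per member disk of a mirror or raidz (step 5.6a).
A safe preview of that loop, with a hypothetical echo wrapper standing in
for the real ``grub-install``:

```shell
# Illustrative stub: on the real system this would be grub-install "$1".
install_grub() { echo "grub-install $1"; }

for d in /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2; do
    install_grub "$d"    # whole disk: no -partN suffix
done > /tmp/demo-grub-plan
cat /tmp/demo-grub-plan
```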
-4.8b Install GRUB for UEFI booting +4.8b Install GRUB for UEFI booting:: -:: + apt install dosfstools + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64 shim-signed - apt install dosfstools - mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 - mkdir /boot/efi - echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \ - /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab - mount /boot/efi - apt install --yes grub-efi-amd64 shim-signed - -- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which - present 4 KiB logical sectors (“4Kn” drives) to meet the minimum - cluster size (given the partition size of 512 MiB) for FAT32. It also - works fine on drives which present 512 B sectors. +- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum + cluster size (given the partition size of 512 MiB) for FAT32. It also + works fine on drives which present 512 B sectors. **Note:** If you are creating a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled later. -4.9 Set a root password +4.9 Set a root password:: -:: - - passwd + passwd 4.10 Enable importing bpool @@ -673,23 +604,26 @@ or whether ``zfs-import-scan.service`` is enabled. :: - vi /etc/systemd/system/zfs-import-bpool.service - [Unit] - DefaultDependencies=no - Before=zfs-import-scan.service - Before=zfs-import-cache.service + vi /etc/systemd/system/zfs-import-bpool.service - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=/sbin/zpool import -N -o cachefile=none bpool +.. 
code-block:: ini - [Install] - WantedBy=zfs-import.target + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + + [Install] + WantedBy=zfs-import.target :: - systemctl enable zfs-import-bpool.service + systemctl enable zfs-import-bpool.service 4.11 Optional (but recommended): Mount a tmpfs to /tmp @@ -699,8 +633,8 @@ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. :: - cp /usr/share/systemd/tmp.mount /etc/systemd/system/ - systemctl enable tmp.mount + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount 4.12 Optional (but kindly requested): Install popcon @@ -710,85 +644,69 @@ long-term attention from the distro. :: - apt install --yes popularity-contest + apt install --yes popularity-contest Choose Yes at the prompt. Step 5: GRUB Installation ------------------------- -5.1 Verify that the ZFS boot filesystem is recognized: +5.1 Verify that the ZFS boot filesystem is recognized:: -:: + grub-probe /boot - grub-probe /boot +5.2 Refresh the initrd files:: -5.2 Refresh the initrd files: - -:: - - update-initramfs -u -k all + update-initramfs -u -k all **Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because `cryptsetup does not support ZFS `__. -5.3 Workaround GRUB's missing zpool-features support: +5.3 Workaround GRUB's missing zpool-features support:: -:: + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" - vi /etc/default/grub - Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" +5.4 Optional (but highly recommended): Make debugging GRUB easier:: -5.4 Optional (but highly recommended): Make debugging GRUB easier: - -:: - - vi /etc/default/grub - Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT - Uncomment: GRUB_TERMINAL=console - Save and quit. 
+ vi /etc/default/grub + # Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired. -5.5 Update the boot configuration: +5.5 Update the boot configuration:: -:: - - update-grub + update-grub **Note:** Ignore errors from ``osprober``, if present. 5.6 Install the boot loader -5.6a For legacy (BIOS) booting, install GRUB to the MBR: +5.6a For legacy (BIOS) booting, install GRUB to the MBR:: -:: - - grub-install $DISK + grub-install $DISK Note that you are installing GRUB to the whole disk, not a partition. If you are creating a mirror or raidz topology, repeat the ``grub-install`` command for each disk in the pool. -5.6b For UEFI booting, install GRUB: +5.6b For UEFI booting, install GRUB:: -:: - - grub-install --target=x86_64-efi --efi-directory=/boot/efi \ - --bootloader-id=debian --recheck --no-floppy + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy It is not necessary to specify the disk here. If you are creating a mirror or raidz topology, the additional disks will be handled later. -5.7 Verify that the ZFS module is installed: +5.7 Verify that the ZFS module is installed:: -:: - - ls /boot/grub/*/zfs.mod + ls /boot/grub/*/zfs.mod 5.8 Fix filesystem mount ordering @@ -804,131 +722,101 @@ the separate mountpoints, which is important for things like ``PrivateTmp`` feature of systemd automatically use ``After=var-tmp.mount``. 
-For UEFI booting, unmount /boot/efi first: +For UEFI booting, unmount /boot/efi first:: -:: + umount /boot/efi - umount /boot/efi +Everything else applies to both BIOS and UEFI booting:: -Everything else applies to both BIOS and UEFI booting: + zfs set mountpoint=legacy bpool/BOOT/debian + echo bpool/BOOT/debian /boot zfs \ + nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab -:: + mkdir /etc/zfs/zfs-list.cache + touch /etc/zfs/zfs-list.cache/rpool + ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d + zed -F & - zfs set mountpoint=legacy bpool/BOOT/debian - echo bpool/BOOT/debian /boot zfs \ - nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab +Verify that zed updated the cache by making sure this is not empty:: - mkdir /etc/zfs/zfs-list.cache - touch /etc/zfs/zfs-list.cache/rpool - ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d - zed -F & + cat /etc/zfs/zfs-list.cache/rpool -Verify that zed updated the cache by making sure this is not empty: +If it is empty, force a cache update and check again:: -:: + zfs set canmount=noauto rpool/ROOT/debian - cat /etc/zfs/zfs-list.cache/rpool +Stop zed:: -If it is empty, force a cache update and check again: + fg + Press Ctrl-C. -:: +Fix the paths to eliminate /mnt:: - zfs set canmount=noauto rpool/ROOT/debian - -Stop zed: - -:: - - fg - Press Ctrl-C. 
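The ``sed -Ei "s|/mnt/?|/|"`` rewrite applied to ``/etc/zfs/zfs-list.cache/rpool`` in this step can be previewed safely first. A sketch on a throwaway copy (the temp file and dataset names are only examples, not the real cache):

```shell
# Build a fake zfs-list.cache file (tab-separated: dataset, mountpoint, ...),
# standing in for /etc/zfs/zfs-list.cache/rpool:
cache=$(mktemp)
printf 'rpool/ROOT/debian\t/mnt\ton\n' > "$cache"
printf 'rpool/home\t/mnt/home\ton\n' >> "$cache"

# Strip the /mnt prefix, exactly as in the HOWTO:
sed -Ei "s|/mnt/?|/|" "$cache"

# The mountpoints are now / and /home:
cat "$cache"
```

The pattern replaces only the first ``/mnt`` (with optional trailing slash) on each line, which is the mountpoint column; dataset names are untouched.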
- -Fix the paths to eliminate /mnt: - -:: - - sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/rpool + sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/rpool Step 6: First Boot ------------------ -6.1 Snapshot the initial installation: +6.1 Snapshot the initial installation:: -:: - - zfs snapshot bpool/BOOT/debian@install - zfs snapshot rpool/ROOT/debian@install + zfs snapshot bpool/BOOT/debian@install + zfs snapshot rpool/ROOT/debian@install In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space. -6.2 Exit from the ``chroot`` environment back to the LiveCD environment: +6.2 Exit from the ``chroot`` environment back to the LiveCD environment:: -:: - - exit + exit 6.3 Run these commands in the LiveCD environment to unmount all -filesystems: +filesystems:: -:: + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a - mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - zpool export -a +6.4 Reboot:: -6.4 Reboot: - -:: - - reboot + reboot 6.5 Wait for the newly installed system to boot normally. Login as root. -6.6 Create a user account: +6.6 Create a user account:: -:: - - zfs create rpool/home/YOURUSERNAME - adduser YOURUSERNAME - cp -a /etc/skel/. /home/YOURUSERNAME - chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME + zfs create rpool/home/YOURUSERNAME + adduser YOURUSERNAME + cp -a /etc/skel/. 
/home/YOURUSERNAME + chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME 6.7 Add your user account to the default set of groups for an -administrator: +administrator:: -:: - - usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME + usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME 6.8 Mirror GRUB If you installed to multiple disks, install GRUB on the additional disks: -6.8a For legacy (BIOS) booting: +6.8a For legacy (BIOS) booting:: -:: + dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. - dpkg-reconfigure grub-pc - Hit enter until you get to the device selection screen. - Select (using the space bar) all of the disks (not partitions) in your pool. +6.8b For UEFI booting:: -6.8b UEFI + umount /boot/efi -:: +For the second and subsequent disks (increment debian-2 to -3, etc.):: - umount /boot/efi + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' -For the second and subsequent disks (increment debian-2 to -3, etc.): - -:: - - dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ - of=/dev/disk/by-id/scsi-SATA_disk2-part2 - efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ - -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' - - mount /boot/efi + mount /boot/efi Step 7: (Optional) Configure Swap --------------------------------- @@ -938,14 +826,12 @@ zvol for swap can result in lockup, regardless of how much swap is still available. 
This issue is currently being investigated in: `https://github.com/zfsonlinux/zfs/issues/7734 `__ -7.1 Create a volume dataset (zvol) for use as a swap device: +7.1 Create a volume dataset (zvol) for use as a swap device:: -:: - - zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ - -o logbias=throughput -o sync=always \ - -o primarycache=metadata -o secondarycache=none \ - -o com.sun:auto-snapshot=false rpool/swap + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap You can adjust the size (the ``4G`` part) to your needs. @@ -963,9 +849,9 @@ files. Never use a short ``/dev/zdX`` device name. :: - mkswap -f /dev/zvol/rpool/swap - echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab - echo RESUME=none > /etc/initramfs-tools/conf.d/resume + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume The ``RESUME=none`` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not @@ -973,26 +859,20 @@ yet been imported) at the time the resume script runs. If it is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear. -7.3 Enable the swap device: +7.3 Enable the swap device:: -:: - - swapon -av + swapon -av Step 8: Full Software Installation ---------------------------------- -8.1 Upgrade the minimal system: +8.1 Upgrade the minimal system:: -:: + apt dist-upgrade --yes - apt dist-upgrade --yes +8.2 Install a regular set of software:: -8.2 Install a regular set of software: - -:: - - tasksel + tasksel 8.3 Optional: Disable log compression: @@ -1002,21 +882,17 @@ Also, if you are making snapshots of ``/var/log``, logrotate’s compression will actually waste space, as the uncompressed data will live on in the snapshot. 
You can edit the files in ``/etc/logrotate.d`` by hand to comment out ``compress``, or use this loop (copy-and-paste -highly recommended): +highly recommended):: -:: + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done - for file in /etc/logrotate.d/* ; do - if grep -Eq "(^|[^#y])compress" "$file" ; then - sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" - fi - done +8.4 Reboot:: -8.4 Reboot: - -:: - - reboot + reboot Step 9: Final Cleanup --------------------- @@ -1024,18 +900,14 @@ Step 9: Final Cleanup 9.1 Wait for the system to boot normally. Login using the account you created. Ensure the system (including networking) works normally. -9.2 Optional: Delete the snapshots of the initial installation: +9.2 Optional: Delete the snapshots of the initial installation:: -:: + sudo zfs destroy bpool/BOOT/debian@install + sudo zfs destroy rpool/ROOT/debian@install - sudo zfs destroy bpool/BOOT/debian@install - sudo zfs destroy rpool/ROOT/debian@install +9.3 Optional: Disable the root password:: -9.3 Optional: Disable the root password - -:: - - sudo usermod -p '*' root + sudo usermod -p '*' root 9.4 Optional: Re-enable the graphical boot process: @@ -1044,21 +916,19 @@ you are using LUKS, it makes the prompt look nicer. :: - sudo vi /etc/default/grub - Add quiet to GRUB_CMDLINE_LINUX_DEFAULT - Comment out GRUB_TERMINAL=console - Save and quit. + sudo vi /etc/default/grub + # Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + # Comment out GRUB_TERMINAL=console + # Save and quit. - sudo update-grub + sudo update-grub **Note:** Ignore errors from ``osprober``, if present. 
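The ``/etc/default/grub`` edits in steps 5.3, 5.4, and 9.4 can also be scripted instead of done in ``vi``. A ``sed`` sketch against a scratch copy (the stock file contents below are a plausible example, not guaranteed to match your file):

```shell
# Scratch copy standing in for /etc/default/grub:
grubdef=$(mktemp)
cat > "$grubdef" <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
#GRUB_TERMINAL=console
EOF

# Step 5.3: point the kernel at the root dataset.
sed -i 's|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"|' "$grubdef"

# Step 5.4: drop "quiet" and uncomment GRUB_TERMINAL.
sed -i -e 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="\)quiet/\1/' \
       -e 's/^#GRUB_TERMINAL=console/GRUB_TERMINAL=console/' "$grubdef"

cat "$grubdef"
```

On a real system you would run these against ``/etc/default/grub`` and then run ``update-grub``; step 9.4 is simply the reverse of the second ``sed``.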
-9.5 Optional: For LUKS installs only, backup the LUKS header: +9.5 Optional: For LUKS installs only, backup the LUKS header:: -:: - - sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ - --header-backup-file luks1-header.dat + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption. @@ -1075,46 +945,38 @@ Rescuing using a Live CD Go through `Step 1: Prepare The Install Environment <#step-1-prepare-the-install-environment>`__. -For LUKS, first unlock the disk(s): +For LUKS, first unlock the disk(s):: -:: + apt install --yes cryptsetup + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. - apt install --yes cryptsetup - cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 - Repeat for additional disks, if this is a mirror or raidz topology. +Mount everything correctly:: -Mount everything correctly: + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs load-key -a + zfs mount rpool/ROOT/debian + zfs mount -a -:: +If needed, you can chroot into your installed environment:: - zpool export -a - zpool import -N -R /mnt rpool - zpool import -N -R /mnt bpool - zfs load-key -a - zfs mount rpool/ROOT/debian - zfs mount -a - -If needed, you can chroot into your installed environment: - -:: - - mount --rbind /dev /mnt/dev - mount --rbind /proc /mnt/proc - mount --rbind /sys /mnt/sys - chroot /mnt /bin/bash --login - mount /boot - mount -a + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot + mount -a Do whatever you need to do to fix your system. 
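The ``mount | grep -v zfs | tac | awk ...`` pipeline used for cleanup here (and in step 6.3) unmounts the deepest non-ZFS mountpoints first. Its behaviour can be previewed on canned ``mount`` output without touching a live system:

```shell
# Canned "mount" output: two non-ZFS mounts under /mnt plus the ZFS root.
targets=$(printf '%s\n' \
    'proc on /mnt/proc type proc (rw)' \
    'sysfs on /mnt/sys type sysfs (rw)' \
    'rpool/ROOT/debian on /mnt type zfs (rw)' \
  | grep -v zfs | tac | awk '/\/mnt/ {print $3}')

# ZFS filesystems are skipped (zpool export handles them); the remaining
# mounts come out in reverse order, ready to feed to umount -lf:
echo "$targets"
```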
-When done, cleanup: +When done, cleanup:: -:: - - exit - mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - zpool export -a - reboot + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a + reboot MPT2SAS ~~~~~~~ @@ -1148,9 +1010,9 @@ this error message. VMware ~~~~~~ -- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere - configuration. Doing this ensures that ``/dev/disk`` aliases are - created in the guest. +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. QEMU/KVM/XEN ~~~~~~~~~~~~ @@ -1159,19 +1021,22 @@ Set a unique serial number on each virtual disk using libvirt or qemu (e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). To be able to use UEFI in guests (instead of only BIOS booting), run -this on the host: +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. 
code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" + ] :: - sudo apt install ovmf - - sudo vi /etc/libvirt/qemu.conf - Uncomment these lines: - nvram = [ - "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", - "/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd", - "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd", - "/usr/share/AAVMF/AAVMF32_CODE.fd:/usr/share/AAVMF/AAVMF32_VARS.fd" - ] - - sudo systemctl restart libvirtd.service + sudo systemctl restart libvirtd.service diff --git a/docs/Getting Started/Debian/Debian Stretch Root on ZFS.rst b/docs/Getting Started/Debian/Debian Stretch Root on ZFS.rst index 111ffd2..c1c7d95 100644 --- a/docs/Getting Started/Debian/Debian Stretch Root on ZFS.rst +++ b/docs/Getting Started/Debian/Debian Stretch Root on ZFS.rst @@ -2,7 +2,7 @@ Debian Stretch Root on ZFS ========================== .. contents:: Table of Contents - :local: + :local: Overview -------- @@ -10,26 +10,27 @@ Overview Newer release available ~~~~~~~~~~~~~~~~~~~~~~~ -- See :doc:`Debian Buster Root on ZFS <./Debian Buster Root on ZFS>` for new installs. +- See :doc:`Debian Buster Root on ZFS <./Debian Buster Root on ZFS>` for new + installs. Caution ~~~~~~~ -- This HOWTO uses a whole physical disk. -- Do not use these instructions for dual-booting. -- Backup your data. Any existing data will be lost. +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. System Requirements ~~~~~~~~~~~~~~~~~~~ -- `64-bit Debian GNU/Linux Stretch Live - CD `__ -- `A 64-bit kernel is strongly - encouraged. 
`__
+- Installing on a drive which presents 4KiB logical sectors (a “4Kn”
+  drive) only works with UEFI booting. This is not unique to ZFS. `GRUB
+  does not and will not work on 4Kn with legacy (BIOS)
+  booting. `__
 
 Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of
 memory is recommended for normal performance in basic workloads. If you
@@ -50,32 +51,29 @@ mention @rlaager.
 Contributing
 ~~~~~~~~~~~~
 
-1) Fork and clone: https://github.com/openzfs/openzfs-docs
+1. Fork and clone: https://github.com/openzfs/openzfs-docs
 
-2) Install the tools:
+2. Install the tools::
 
-::
+   # On Debian 11 / Ubuntu 20.04 or later:
+   sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme
 
-   # On Debian 11 / Ubuntu 20.04 or later:
-   sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme
-   # On earlier releases:
-   sudo apt install pip3
-   pip3 install -r requirements.txt
-   # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
-   PATH=$HOME/.local/bin:$PATH
+   # On earlier releases:
+   sudo apt install python3-pip
+   pip3 install -r requirements.txt
+   # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
+   PATH=$HOME/.local/bin:$PATH
 
-3) Make your changes.
+3. Make your changes.
 
-4) Test:
+4. Test::
 
-::
+   cd docs
+   make html
+   sensible-browser _build/html/index.html
 
-   cd docs
-   make html
-   sensible-browser _build/html/index.html
-
-5) ``git commit --signoff`` to a branch, ``git push``, and create a pull request.
-   Mention @rlaager.
+5. ``git commit --signoff`` to a branch, ``git push``, and create a pull
+   request. Mention @rlaager.
 
 Encryption
 ~~~~~~~~~~
@@ -109,9 +107,9 @@ be convenient.
:: - $ sudo apt update - $ sudo apt install --yes openssh-server - $ sudo systemctl restart ssh + $ sudo apt update + $ sudo apt install --yes openssh-server + $ sudo systemctl restart ssh **Hint:** You can find your IP address with ``ip addr show scope global | grep inet``. Then, from your main machine, @@ -121,26 +119,26 @@ connect with ``ssh user@IP``. :: - $ sudo -i + $ sudo -i 1.4 Setup and update the repositories: :: - # echo deb http://deb.debian.org/debian stretch contrib >> /etc/apt/sources.list - # echo deb http://deb.debian.org/debian stretch-backports main contrib >> /etc/apt/sources.list - # apt update + # echo deb http://deb.debian.org/debian stretch contrib >> /etc/apt/sources.list + # echo deb http://deb.debian.org/debian stretch-backports main contrib >> /etc/apt/sources.list + # apt update 1.5 Install ZFS in the Live CD environment: :: - # apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r) - # apt install --yes -t stretch-backports zfs-dkms - # modprobe zfs + # apt install --yes debootstrap gdisk dkms dpkg-dev linux-headers-$(uname -r) + # apt install --yes -t stretch-backports zfs-dkms + # modprobe zfs -- The dkms dependency is installed manually just so it comes from - stretch and not stretch-backports. This is not critical. +- The dkms dependency is installed manually just so it comes from + stretch and not stretch-backports. This is not critical. 
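The two ``echo ... >> /etc/apt/sources.list`` lines from step 1.4 can be sanity-checked against a scratch file before touching the real one (the temp file below stands in for ``/etc/apt/sources.list``):

```shell
# Scratch stand-in for /etc/apt/sources.list:
sources=$(mktemp)
echo deb http://deb.debian.org/debian stretch contrib >> "$sources"
echo deb http://deb.debian.org/debian stretch-backports main contrib >> "$sources"
cat "$sources"
```

Note that ``>>`` appends, so running the commands twice duplicates the entries; check the file before re-running.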
Step 2: Disk Formatting ----------------------- @@ -149,25 +147,25 @@ Step 2: Disk Formatting :: - If the disk was previously used in an MD array, zero the superblock: - # apt install --yes mdadm - # mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1 + If the disk was previously used in an MD array, zero the superblock: + # apt install --yes mdadm + # mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1 - Clear the partition table: - # sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1 + Clear the partition table: + # sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1 2.2 Partition your disk(s): :: - Run this if you need legacy (BIOS) booting: - # sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1 + Run this if you need legacy (BIOS) booting: + # sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/disk/by-id/scsi-SATA_disk1 - Run this for UEFI booting (for use now or in the future): - # sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1 + Run this for UEFI booting (for use now or in the future): + # sgdisk -n2:1M:+512M -t2:EF00 /dev/disk/by-id/scsi-SATA_disk1 - Run this for the boot pool: - # sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1 + Run this for the boot pool: + # sgdisk -n3:0:+1G -t3:BF01 /dev/disk/by-id/scsi-SATA_disk1 Choose one of the following options: @@ -175,13 +173,13 @@ Choose one of the following options: :: - # sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1 + # sgdisk -n4:0:0 -t4:BF01 /dev/disk/by-id/scsi-SATA_disk1 2.2b LUKS: :: - # sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1 + # sgdisk -n4:0:0 -t4:8300 /dev/disk/by-id/scsi-SATA_disk1 Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the ``/dev/sd*`` device nodes directly can cause sporadic import failures, @@ -189,36 +187,36 @@ especially on systems that have more than one storage pool. **Hints:** -- ``ls -la /dev/disk/by-id`` will list the aliases. -- Are you doing this in a virtual machine? 
If your virtual disk is - missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using - KVM with virtio; otherwise, read the - `troubleshooting <#troubleshooting>`__ section. -- If you are creating a mirror or raidz topology, repeat the - partitioning commands for all the disks which will be part of the - pool. +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting <#troubleshooting>`__ section. +- If you are creating a mirror or raidz topology, repeat the + partitioning commands for all the disks which will be part of the + pool. 2.3 Create the boot pool: :: - # zpool create -o ashift=12 -d \ - -o feature@async_destroy=enabled \ - -o feature@bookmarks=enabled \ - -o feature@embedded_data=enabled \ - -o feature@empty_bpobj=enabled \ - -o feature@enabled_txg=enabled \ - -o feature@extensible_dataset=enabled \ - -o feature@filesystem_limits=enabled \ - -o feature@hole_birth=enabled \ - -o feature@large_blocks=enabled \ - -o feature@lz4_compress=enabled \ - -o feature@spacemap_histogram=enabled \ - -o feature@userobj_accounting=enabled \ - -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ - -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt \ - bpool /dev/disk/by-id/scsi-SATA_disk1-part3 + # zpool create -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@userobj_accounting=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 -O 
devices=off \ + -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + bpool /dev/disk/by-id/scsi-SATA_disk1-part3 You should not need to customize any of the options for the boot pool. @@ -232,12 +230,12 @@ read-only compatible features are "supported" by GRUB. **Hints:** -- If you are creating a mirror or raidz topology, create the pool using - ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` - (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and - list the partitions from additional disks). -- The pool name is arbitrary. If changed, the new name must be used - consistently. The ``bpool`` convention originated in this HOWTO. +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. 
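If you script the boot-pool creation (for example, across several mirror members), the long ``feature@`` list can be generated from a plain word list rather than typed out. A sketch that only builds the argument string, since the final ``zpool create`` still needs real disks:

```shell
# GRUB-compatible read-only features enabled for bpool in this HOWTO:
features="async_destroy bookmarks embedded_data empty_bpobj enabled_txg
extensible_dataset filesystem_limits hole_birth large_blocks lz4_compress
spacemap_histogram userobj_accounting"

opts=""
for f in $features ; do
    opts="$opts -o feature@$f=enabled"
done

# Would be used as: zpool create -o ashift=12 -d $opts -O ... bpool <device>
echo "$opts"
```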
2.4 Create the root pool: @@ -247,89 +245,89 @@ Choose one of the following options: :: - # zpool create -o ashift=12 \ - -O acltype=posixacl -O canmount=off -O compression=lz4 \ - -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt \ - rpool /dev/disk/by-id/scsi-SATA_disk1-part4 + # zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + rpool /dev/disk/by-id/scsi-SATA_disk1-part4 2.4b LUKS: :: - # apt install --yes cryptsetup - # cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \ - /dev/disk/by-id/scsi-SATA_disk1-part4 - # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 - # zpool create -o ashift=12 \ - -O acltype=posixacl -O canmount=off -O compression=lz4 \ - -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt \ - rpool /dev/mapper/luks1 + # apt install --yes cryptsetup + # cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 \ + /dev/disk/by-id/scsi-SATA_disk1-part4 + # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 -- The use of ``ashift=12`` is recommended here because many drives - today have 4KiB (or larger) physical sectors, even though they - present 512B logical sectors. Also, a future replacement drive may - have 4KiB physical sectors (in which case ``ashift=12`` is desirable) - or 4KiB logical sectors (in which case ``ashift=12`` is required). -- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. 
If you - do not want this, remove that option, but later add - ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` - for ``/var/log``, as `journald requires - ACLs `__ -- Setting ``normalization=formD`` eliminates some corner cases relating - to UTF-8 filename normalization. It also implies ``utf8only=on``, - which means that only UTF-8 filenames are allowed. If you care to - support non-UTF-8 filenames, do not use this option. For a discussion - of why requiring UTF-8 filenames may be a bad idea, see `The problems - with enforced UTF-8 only - filenames `__. -- Setting ``relatime=on`` is a middle ground between classic POSIX - ``atime`` behavior (with its significant performance impact) and - ``atime=off`` (which provides the best performance by completely - disabling atime updates). Since Linux 2.6.30, ``relatime`` has been - the default for other filesystems. See `RedHat's - documentation `__ - for further information. -- Setting ``xattr=sa`` `vastly improves the performance of extended - attributes `__. - Inside ZFS, extended attributes are used to implement POSIX ACLs. - Extended attributes can also be used by user-space applications. - `They are used by some desktop GUI - applications. `__ - `They can be used by Samba to store Windows ACLs and DOS attributes; - they are required for a Samba Active Directory domain - controller. `__ - Note that ```xattr=sa`` is - Linux-specific. `__ - If you move your ``xattr=sa`` pool to another OpenZFS implementation - besides ZFS-on-Linux, extended attributes will not be readable - (though your data will be). If portability of extended attributes is - important to you, omit the ``-O xattr=sa`` above. Even if you do not - want ``xattr=sa`` for the whole pool, it is probably fine to use it - for ``/var/log``. -- Make sure to include the ``-part4`` portion of the drive path. If you - forget that, you are specifying the whole disk, which ZFS will then - re-partition, and you will lose the bootloader partition(s). 
-- For LUKS, the key size chosen is 512 bits. However, XTS mode requires - two keys, so the LUKS key is split in half. Thus, ``-s 512`` means - AES-256. -- Your passphrase will likely be the weakest link. Choose wisely. See - `section 5 of the cryptsetup - FAQ `__ - for guidance. +- The use of ``ashift=12`` is recommended here because many drives + today have 4KiB (or larger) physical sectors, even though they + present 512B logical sectors. Also, a future replacement drive may + have 4KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` + for ``/var/log``, as `journald requires + ACLs `__ +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat's + documentation `__ + for further information. +- Setting ``xattr=sa`` `vastly improves the performance of extended + attributes `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. + Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI + applications. 
`__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain + controller. `__ + Note that ```xattr=sa`` is + Linux-specific. `__ + If you move your ``xattr=sa`` pool to another OpenZFS implementation + besides ZFS-on-Linux, extended attributes will not be readable + (though your data will be). If portability of extended attributes is + important to you, omit the ``-O xattr=sa`` above. Even if you do not + want ``xattr=sa`` for the whole pool, it is probably fine to use it + for ``/var/log``. +- Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 512 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 512`` means + AES-256. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. **Hints:** -- If you are creating a mirror or raidz topology, create the pool using - ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` - (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and - list the partitions from additional disks). For LUKS, use - ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will - have to create using ``cryptsetup``. -- The pool name is arbitrary. If changed, the new name must be used - consistently. On systems that can automatically install to ZFS, the - root pool is named ``rpool`` by default. +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). 
For LUKS, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will + have to create using ``cryptsetup``. +- The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the + root pool is named ``rpool`` by default. Step 3: System Installation --------------------------- @@ -338,8 +336,8 @@ Step 3: System Installation :: - # zfs create -o canmount=off -o mountpoint=none rpool/ROOT - # zfs create -o canmount=off -o mountpoint=none bpool/BOOT + # zfs create -o canmount=off -o mountpoint=none rpool/ROOT + # zfs create -o canmount=off -o mountpoint=none bpool/BOOT On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through ``pkg image-update`` or @@ -351,11 +349,11 @@ manually created clones. :: - # zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian - # zfs mount rpool/ROOT/debian + # zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian + # zfs mount rpool/ROOT/debian - # zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian - # zfs mount bpool/BOOT/debian + # zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/debian + # zfs mount bpool/BOOT/debian With ZFS, it is not normally necessary to use a mount command (either ``mount`` or ``zfs mount``). 
This situation is an exception because of @@ -365,55 +363,55 @@ With ZFS, it is not normally necessary to use a mount command (either :: - # zfs create rpool/home - # zfs create -o mountpoint=/root rpool/home/root - # zfs create -o canmount=off rpool/var - # zfs create -o canmount=off rpool/var/lib - # zfs create rpool/var/log - # zfs create rpool/var/spool + # zfs create rpool/home + # zfs create -o mountpoint=/root rpool/home/root + # zfs create -o canmount=off rpool/var + # zfs create -o canmount=off rpool/var/lib + # zfs create rpool/var/log + # zfs create rpool/var/spool - The datasets below are optional, depending on your preferences and/or - software choices: + The datasets below are optional, depending on your preferences and/or + software choices: - If you wish to exclude these from snapshots: - # zfs create -o com.sun:auto-snapshot=false rpool/var/cache - # zfs create -o com.sun:auto-snapshot=false rpool/var/tmp - # chmod 1777 /mnt/var/tmp + If you wish to exclude these from snapshots: + # zfs create -o com.sun:auto-snapshot=false rpool/var/cache + # zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + # chmod 1777 /mnt/var/tmp - If you use /opt on this system: - # zfs create rpool/opt + If you use /opt on this system: + # zfs create rpool/opt - If you use /srv on this system: - # zfs create rpool/srv + If you use /srv on this system: + # zfs create rpool/srv - If you use /usr/local on this system: - # zfs create -o canmount=off rpool/usr - # zfs create rpool/usr/local + If you use /usr/local on this system: + # zfs create -o canmount=off rpool/usr + # zfs create rpool/usr/local - If this system will have games installed: - # zfs create rpool/var/games + If this system will have games installed: + # zfs create rpool/var/games - If this system will store local email in /var/mail: - # zfs create rpool/var/mail + If this system will store local email in /var/mail: + # zfs create rpool/var/mail - If this system will use Snap packages: - # zfs create 
rpool/var/snap + If this system will use Snap packages: + # zfs create rpool/var/snap - If you use /var/www on this system: - # zfs create rpool/var/www + If you use /var/www on this system: + # zfs create rpool/var/www - If this system will use GNOME: - # zfs create rpool/var/lib/AccountsService + If this system will use GNOME: + # zfs create rpool/var/lib/AccountsService - If this system will use Docker (which manages its own datasets & snapshots): - # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker + If this system will use Docker (which manages its own datasets & snapshots): + # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker - If this system will use NFS (locking): - # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + If this system will use NFS (locking): + # zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs - A tmpfs is recommended later, but if you want a separate dataset for /tmp: - # zfs create -o com.sun:auto-snapshot=false rpool/tmp - # chmod 1777 /mnt/tmp + A tmpfs is recommended later, but if you want a separate dataset for /tmp: + # zfs create -o com.sun:auto-snapshot=false rpool/tmp + # chmod 1777 /mnt/tmp The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling @@ -433,8 +431,8 @@ you can use a tmpfs (RAM filesystem) later. :: - # debootstrap stretch /mnt - # zfs set devices=off rpool + # debootstrap stretch /mnt + # zfs set devices=off rpool The ``debootstrap`` command leaves the new system in an unconfigured state. An alternative to using ``debootstrap`` is to copy the entirety @@ -448,13 +446,13 @@ hostname). 
:: - # echo HOSTNAME > /mnt/etc/hostname + # echo HOSTNAME > /mnt/etc/hostname - # vi /mnt/etc/hosts - Add a line: - 127.0.1.1 HOSTNAME - or if the system has a real name in DNS: - 127.0.1.1 FQDN HOSTNAME + # vi /mnt/etc/hosts + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME **Hint:** Use ``nano`` if you find ``vi`` confusing. @@ -462,12 +460,12 @@ hostname). :: - Find the interface name: - # ip addr show + Find the interface name: + # ip addr show - # vi /mnt/etc/network/interfaces.d/NAME - auto NAME - iface NAME inet dhcp + # vi /mnt/etc/network/interfaces.d/NAME + auto NAME + iface NAME inet dhcp Customize this file if the system is not a DHCP client. @@ -475,28 +473,28 @@ Customize this file if the system is not a DHCP client. :: - # vi /mnt/etc/apt/sources.list - deb http://deb.debian.org/debian stretch main contrib - deb-src http://deb.debian.org/debian stretch main contrib + # vi /mnt/etc/apt/sources.list + deb http://deb.debian.org/debian stretch main contrib + deb-src http://deb.debian.org/debian stretch main contrib - # vi /mnt/etc/apt/sources.list.d/stretch-backports.list - deb http://deb.debian.org/debian stretch-backports main contrib - deb-src http://deb.debian.org/debian stretch-backports main contrib + # vi /mnt/etc/apt/sources.list.d/stretch-backports.list + deb http://deb.debian.org/debian stretch-backports main contrib + deb-src http://deb.debian.org/debian stretch-backports main contrib - # vi /mnt/etc/apt/preferences.d/90_zfs - Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed - Pin: release n=stretch-backports - Pin-Priority: 990 + # vi /mnt/etc/apt/preferences.d/90_zfs + Package: libnvpair1linux libuutil1linux libzfs2linux libzpool2linux spl-dkms zfs-dkms zfs-test zfsutils-linux zfsutils-linux-dev zfs-zed + Pin: release n=stretch-backports + Pin-Priority: 990 4.4 Bind the virtual filesystems from the 
LiveCD environment to the new system and ``chroot`` into it: :: - # mount --rbind /dev /mnt/dev - # mount --rbind /proc /mnt/proc - # mount --rbind /sys /mnt/sys - # chroot /mnt /bin/bash --login + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login **Note:** This is using ``--rbind``, not ``--bind``. @@ -504,39 +502,39 @@ system and ``chroot`` into it: :: - # ln -s /proc/self/mounts /etc/mtab - # apt update + # ln -s /proc/self/mounts /etc/mtab + # apt update - # apt install --yes locales - # dpkg-reconfigure locales + # apt install --yes locales + # dpkg-reconfigure locales Even if you prefer a non-English system language, always ensure that ``en_US.UTF-8`` is available. :: - # dpkg-reconfigure tzdata + # dpkg-reconfigure tzdata 4.6 Install ZFS in the chroot environment for the new system: :: - # apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64 - # apt install --yes zfs-initramfs + # apt install --yes dpkg-dev linux-headers-amd64 linux-image-amd64 + # apt install --yes zfs-initramfs 4.7 For LUKS installs only, setup crypttab: :: - # apt install --yes cryptsetup + # apt install --yes cryptsetup - # echo luks1 UUID=$(blkid -s UUID -o value \ - /dev/disk/by-id/scsi-SATA_disk1-part4) none \ - luks,discard,initramfs > /etc/crypttab + # echo luks1 UUID=$(blkid -s UUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part4) none \ + luks,discard,initramfs > /etc/crypttab -- The use of ``initramfs`` is a work-around for `cryptsetup does not - support - ZFS `__. +- The use of ``initramfs`` is a work-around for `cryptsetup does not + support + ZFS `__. **Hint:** If you are creating a mirror or raidz topology, repeat the ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. @@ -549,7 +547,7 @@ Choose one of the following options: :: - # apt install --yes grub-pc + # apt install --yes grub-pc Install GRUB to the disk(s), not the partition(s). 
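The crypttab line built in step 4.7 is easier to check in isolation. A minimal sketch, using a hypothetical UUID in place of the real `blkid -s UUID -o value` output (no device is read or written here):

```shell
# Hypothetical UUID standing in for the output of:
#   blkid -s UUID -o value /dev/disk/by-id/scsi-SATA_disk1-part4
uuid="01234567-89ab-cdef-0123-456789abcdef"

# Same unquoted echo as step 4.7; whitespace collapses to single
# spaces, producing one well-formed crypttab line.
echo luks1 UUID="$uuid" none luks,discard,initramfs
```

On the real system the output is redirected into `/etc/crypttab` exactly as the guide shows.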
@@ -557,19 +555,19 @@ Install GRUB to the disk(s), not the partition(s). :: - # apt install dosfstools - # mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2 - # mkdir /boot/efi - # echo PARTUUID=$(blkid -s PARTUUID -o value \ - /dev/disk/by-id/scsi-SATA_disk1-part2) \ - /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab - # mount /boot/efi - # apt install --yes grub-efi-amd64 shim + # apt install dosfstools + # mkdosfs -F 32 -s 1 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part2 + # mkdir /boot/efi + # echo PARTUUID=$(blkid -s PARTUUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part2) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + # mount /boot/efi + # apt install --yes grub-efi-amd64 shim -- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which - present 4 KiB logical sectors (“4Kn” drives) to meet the minimum - cluster size (given the partition size of 512 MiB) for FAT32. It also - works fine on drives which present 512 B sectors. +- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum + cluster size (given the partition size of 512 MiB) for FAT32. It also + works fine on drives which present 512 B sectors. **Note:** If you are creating a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled @@ -579,7 +577,7 @@ later. :: - # passwd + # passwd 4.10 Enable importing bpool @@ -589,21 +587,21 @@ or whether ``zfs-import-scan.service`` is enabled. 
:: - # vi /etc/systemd/system/zfs-import-bpool.service - [Unit] - DefaultDependencies=no - Before=zfs-import-scan.service - Before=zfs-import-cache.service + # vi /etc/systemd/system/zfs-import-bpool.service + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=/sbin/zpool import -N -o cachefile=none bpool + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool - [Install] - WantedBy=zfs-import.target + [Install] + WantedBy=zfs-import.target - # systemctl enable zfs-import-bpool.service + # systemctl enable zfs-import-bpool.service 4.11 Optional (but recommended): Mount a tmpfs to /tmp @@ -613,8 +611,8 @@ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. :: - # cp /usr/share/systemd/tmp.mount /etc/systemd/system/ - # systemctl enable tmp.mount + # cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + # systemctl enable tmp.mount 4.12 Optional (but kindly requested): Install popcon @@ -624,7 +622,7 @@ long-term attention from the distro. :: - # apt install --yes popularity-contest + # apt install --yes popularity-contest Choose Yes at the prompt. @@ -635,15 +633,15 @@ Step 5: GRUB Installation :: - # grub-probe /boot - zfs + # grub-probe /boot + zfs 5.2 Refresh the initrd files: :: - # update-initramfs -u -k all - update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64 + # update-initramfs -u -k all + update-initramfs: Generating /boot/initrd.img-4.9.0-8-amd64 **Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because `cryptsetup does not @@ -654,17 +652,17 @@ ZFS `__. 
:: - # vi /etc/default/grub - Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" + # vi /etc/default/grub + Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian" 5.4 Optional (but highly recommended): Make debugging GRUB easier: :: - # vi /etc/default/grub - Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT - Uncomment: GRUB_TERMINAL=console - Save and quit. + # vi /etc/default/grub + Remove quiet from: GRUB_CMDLINE_LINUX_DEFAULT + Uncomment: GRUB_TERMINAL=console + Save and quit. Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired. @@ -673,11 +671,11 @@ working, you can undo these changes, if desired. :: - # update-grub - Generating grub configuration file ... - Found linux image: /boot/vmlinuz-4.9.0-8-amd64 - Found initrd image: /boot/initrd.img-4.9.0-8-amd64 - done + # update-grub + Generating grub configuration file ... + Found linux image: /boot/vmlinuz-4.9.0-8-amd64 + Found initrd image: /boot/initrd.img-4.9.0-8-amd64 + done **Note:** Ignore errors from ``osprober``, if present. @@ -687,9 +685,9 @@ working, you can undo these changes, if desired. :: - # grub-install /dev/disk/by-id/scsi-SATA_disk1 - Installing for i386-pc platform. - Installation finished. No error reported. + # grub-install /dev/disk/by-id/scsi-SATA_disk1 + Installing for i386-pc platform. + Installation finished. No error reported. Do not reboot the computer until you get exactly that result message. Note that you are installing GRUB to the whole disk, not a partition. 
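As a sketch of the step 5.3 change, the same edit can be made non-interactively with `sed`; the pattern below is an assumption (the guide edits `/etc/default/grub` with `vi`), so it is run here against a scratch copy rather than the real file:

```shell
# Work on a scratch file rather than the real /etc/default/grub.
f=$(mktemp)
printf 'GRUB_CMDLINE_LINUX=""\n' > "$f"

# Replace the whole GRUB_CMDLINE_LINUX line with the root=ZFS setting.
sed -i 's|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian"|' "$f"

grep '^GRUB_CMDLINE_LINUX=' "$f"
rm -f "$f"
```

Once the pattern is verified against a copy, the same `sed -i` line can be applied to the real file.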
@@ -701,14 +699,14 @@ If you are creating a mirror or raidz topology, repeat the :: - # grub-install --target=x86_64-efi --efi-directory=/boot/efi \ - --bootloader-id=debian --recheck --no-floppy + # grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=debian --recheck --no-floppy 5.7 Verify that the ZFS module is installed: :: - # ls /boot/grub/*/zfs.mod + # ls /boot/grub/*/zfs.mod 5.8 Fix filesystem mount ordering @@ -735,28 +733,28 @@ filesystems. :: - For UEFI booting, unmount /boot/efi first: - # umount /boot/efi + For UEFI booting, unmount /boot/efi first: + # umount /boot/efi - Everything else applies to both BIOS and UEFI booting: + Everything else applies to both BIOS and UEFI booting: - # zfs set mountpoint=legacy bpool/BOOT/debian - # echo bpool/BOOT/debian /boot zfs \ - nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab + # zfs set mountpoint=legacy bpool/BOOT/debian + # echo bpool/BOOT/debian /boot zfs \ + nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab - # zfs set mountpoint=legacy rpool/var/log - # echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab + # zfs set mountpoint=legacy rpool/var/log + # echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab - # zfs set mountpoint=legacy rpool/var/spool - # echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab + # zfs set mountpoint=legacy rpool/var/spool + # echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab - If you created a /var/tmp dataset: - # zfs set mountpoint=legacy rpool/var/tmp - # echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab + If you created a /var/tmp dataset: + # zfs set mountpoint=legacy rpool/var/tmp + # echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab - If you created a /tmp dataset: - # zfs set mountpoint=legacy rpool/tmp - # echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab + If you created a /tmp dataset: + # 
zfs set mountpoint=legacy rpool/tmp + # echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab Step 6: First Boot ------------------ @@ -765,8 +763,8 @@ Step 6: First Boot :: - # zfs snapshot bpool/BOOT/debian@install - # zfs snapshot rpool/ROOT/debian@install + # zfs snapshot bpool/BOOT/debian@install + # zfs snapshot rpool/ROOT/debian@install In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to @@ -776,21 +774,21 @@ save space. :: - # exit + # exit 6.3 Run these commands in the LiveCD environment to unmount all filesystems: :: - # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - # zpool export -a + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export -a 6.4 Reboot: :: - # reboot + # reboot 6.5 Wait for the newly installed system to boot normally. Login as root. @@ -798,17 +796,17 @@ filesystems: :: - # zfs create rpool/home/YOURUSERNAME - # adduser YOURUSERNAME - # cp -a /etc/skel/.[!.]* /home/YOURUSERNAME - # chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME + # zfs create rpool/home/YOURUSERNAME + # adduser YOURUSERNAME + # cp -a /etc/skel/.[!.]* /home/YOURUSERNAME + # chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME 6.7 Add your user account to the default set of groups for an administrator: :: - # usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME + # usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME 6.8 Mirror GRUB @@ -819,23 +817,23 @@ disks: :: - # dpkg-reconfigure grub-pc - Hit enter until you get to the device selection screen. - Select (using the space bar) all of the disks (not partitions) in your pool. + # dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. 
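The one-liner in step 6.3 is dense; a dry run on canned `mount` output (an assumed sample, not real mounts) shows why each stage is there: `grep -v zfs` skips the ZFS filesystems (those are handled by `zpool export`), `tac` reverses the list so child mounts are unmounted before their parents, and `awk` picks out the mount points under `/mnt`:

```shell
# Canned sample of `mount` output; nothing is actually unmounted here.
printf '%s\n' \
  'proc on /mnt/proc type proc (rw)' \
  'sys on /mnt/sys type sysfs (rw)' \
  'rpool/ROOT/debian on /mnt type zfs (rw)' |
  grep -v zfs | tac | awk '/\/mnt/ {print $3}'
# The surviving mount points come out deepest-first, ready to feed
# to `xargs umount -lf`.
```

This prints `/mnt/sys` then `/mnt/proc`: the ZFS line is dropped and the remaining mounts are listed in safe unmount order.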
6.8b UEFI :: - # umount /boot/efi + # umount /boot/efi - For the second and subsequent disks (increment debian-2 to -3, etc.): - # dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ - of=/dev/disk/by-id/scsi-SATA_disk2-part2 - # efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ - -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' + For the second and subsequent disks (increment debian-2 to -3, etc.): + # dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + # efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "debian-2" -l '\EFI\debian\grubx64.efi' - # mount /boot/efi + # mount /boot/efi Step 7: (Optional) Configure Swap --------------------------------- @@ -849,10 +847,10 @@ available. This issue is currently being investigated in: :: - # zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ - -o logbias=throughput -o sync=always \ - -o primarycache=metadata -o secondarycache=none \ - -o com.sun:auto-snapshot=false rpool/swap + # zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap You can adjust the size (the ``4G`` part) to your needs. @@ -870,9 +868,9 @@ files. Never use a short ``/dev/zdX`` device name. :: - # mkswap -f /dev/zvol/rpool/swap - # echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab - # echo RESUME=none > /etc/initramfs-tools/conf.d/resume + # mkswap -f /dev/zvol/rpool/swap + # echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + # echo RESUME=none > /etc/initramfs-tools/conf.d/resume The ``RESUME=none`` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not @@ -884,7 +882,7 @@ zvol to appear. 
:: - # swapon -av + # swapon -av Step 8: Full Software Installation ---------------------------------- @@ -893,13 +891,13 @@ Step 8: Full Software Installation :: - # apt dist-upgrade --yes + # apt dist-upgrade --yes 8.2 Install a regular set of software: :: - # tasksel + # tasksel 8.3 Optional: Disable log compression: @@ -913,17 +911,17 @@ highly recommended): :: - # for file in /etc/logrotate.d/* ; do - if grep -Eq "(^|[^#y])compress" "$file" ; then - sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" - fi - done + # for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done 8.4 Reboot: :: - # reboot + # reboot Step 9: Final Cleanup ~~~~~~~~~~~~~~~~~~~~~ @@ -935,14 +933,14 @@ created. Ensure the system (including networking) works normally. :: - $ sudo zfs destroy bpool/BOOT/debian@install - $ sudo zfs destroy rpool/ROOT/debian@install + $ sudo zfs destroy bpool/BOOT/debian@install + $ sudo zfs destroy rpool/ROOT/debian@install 9.3 Optional: Disable the root password :: - $ sudo usermod -p '*' root + $ sudo usermod -p '*' root 9.4 Optional: Re-enable the graphical boot process: @@ -951,12 +949,12 @@ you are using LUKS, it makes the prompt look nicer. :: - $ sudo vi /etc/default/grub - Add quiet to GRUB_CMDLINE_LINUX_DEFAULT - Comment out GRUB_TERMINAL=console - Save and quit. + $ sudo vi /etc/default/grub + Add quiet to GRUB_CMDLINE_LINUX_DEFAULT + Comment out GRUB_TERMINAL=console + Save and quit. - $ sudo update-grub + $ sudo update-grub **Note:** Ignore errors from ``osprober``, if present. @@ -964,8 +962,8 @@ you are using LUKS, it makes the prompt look nicer. :: - $ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ - --header-backup-file luks1-header.dat + $ sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat Store that backup somewhere safe (e.g. cloud storage). 
It is protected by your LUKS passphrase, but you may wish to use additional encryption. @@ -987,27 +985,27 @@ get the mounts right: :: - For LUKS, first unlock the disk(s): - # apt install --yes cryptsetup - # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 - Repeat for additional disks, if this is a mirror or raidz topology. + For LUKS, first unlock the disk(s): + # apt install --yes cryptsetup + # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + Repeat for additional disks, if this is a mirror or raidz topology. - # zpool export -a - # zpool import -N -R /mnt rpool - # zpool import -N -R /mnt bpool - # zfs mount rpool/ROOT/debian - # zfs mount -a + # zpool export -a + # zpool import -N -R /mnt rpool + # zpool import -N -R /mnt bpool + # zfs mount rpool/ROOT/debian + # zfs mount -a If needed, you can chroot into your installed environment: :: - # mount --rbind /dev /mnt/dev - # mount --rbind /proc /mnt/proc - # mount --rbind /sys /mnt/sys - # chroot /mnt /bin/bash --login - # mount /boot - # mount -a + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login + # mount /boot + # mount -a Do whatever you need to do to fix your system. @@ -1015,10 +1013,10 @@ When done, cleanup: :: - # exit - # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - # zpool export -a - # reboot + # exit + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export -a + # reboot MPT2SAS ~~~~~~~ @@ -1052,9 +1050,9 @@ this error message. VMware ~~~~~~ -- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere - configuration. Doing this ensures that ``/dev/disk`` aliases are - created in the guest. +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. 
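The logrotate loop from step 8.3 can be exercised safely on a scratch directory before touching `/etc/logrotate.d`; the sample file contents are assumptions chosen to cover the three cases the regex distinguishes:

```shell
# Scratch stand-in for /etc/logrotate.d with the three interesting cases.
d=$(mktemp -d)
printf '%s\n' 'compress' 'delaycompress' '#compress' > "$d/rsyslog"

# Same loop as step 8.3: comment out `compress`, but leave
# `delaycompress` (the [^#y] guard) and already-commented lines alone.
for file in "$d"/*; do
  if grep -Eq "(^|[^#y])compress" "$file"; then
    sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file"
  fi
done

cat "$d/rsyslog"
rm -rf "$d"
```

Only the bare `compress` line gains a `#`; `delaycompress` and the already-commented line are untouched.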
QEMU/KVM/XEN ~~~~~~~~~~~~ @@ -1067,11 +1065,11 @@ this on the host: :: - $ sudo apt install ovmf - $ sudo vi /etc/libvirt/qemu.conf - Uncomment these lines: - nvram = [ - "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", - "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" - ] - $ sudo service libvirt-bin restart + $ sudo apt install ovmf + $ sudo vi /etc/libvirt/qemu.conf + Uncomment these lines: + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" + ] + $ sudo service libvirt-bin restart diff --git a/docs/Getting Started/Ubuntu/Ubuntu 16.04 Root on ZFS.rst b/docs/Getting Started/Ubuntu/Ubuntu 16.04 Root on ZFS.rst index f522284..d333a58 100644 --- a/docs/Getting Started/Ubuntu/Ubuntu 16.04 Root on ZFS.rst +++ b/docs/Getting Started/Ubuntu/Ubuntu 16.04 Root on ZFS.rst @@ -2,7 +2,7 @@ Ubuntu 16.04 Root on ZFS ======================== .. contents:: Table of Contents - :local: + :local: Overview -------- @@ -10,26 +10,27 @@ Overview Newer release available ~~~~~~~~~~~~~~~~~~~~~~~ -- See :doc:`Ubuntu 18.04 Root on ZFS <./Ubuntu 18.04 Root on ZFS>` for new installs. +- See :doc:`Ubuntu 18.04 Root on ZFS <./Ubuntu 18.04 Root on ZFS>` for new + installs. Caution ~~~~~~~ -- This HOWTO uses a whole physical disk. -- Do not use these instructions for dual-booting. -- Backup your data. Any existing data will be lost. +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. System Requirements ~~~~~~~~~~~~~~~~~~~ -- `64-bit Ubuntu 16.04.5 ("Xenial") Desktop - CD `__ - (*not* the server image) -- `A 64-bit kernel is strongly - encouraged. `__ -- A drive which presents 512B logical sectors. Installing on a drive - which presents 4KiB logical sectors (a “4Kn” drive) should work with - UEFI partitioning, but this has not been tested. 
+- `64-bit Ubuntu 16.04.5 ("Xenial") Desktop + CD `__ + (*not* the server image) +- `A 64-bit kernel is strongly + encouraged. `__ +- A drive which presents 512B logical sectors. Installing on a drive + which presents 4KiB logical sectors (a “4Kn” drive) should work with + UEFI partitioning, but this has not been tested. Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of memory is recommended for normal performance in basic workloads. If you @@ -50,32 +51,29 @@ mention @rlaager. Contributing ~~~~~~~~~~~~ -1) Fork and clone: https://github.com/openzfs/openzfs-docs +1. Fork and clone: https://github.com/openzfs/openzfs-docs -2) Install the tools: +2. Install the tools:: -:: + # On Debian 11 / Ubuntu 20.04 or later: + sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme - # On Debian 11 / Ubuntu 20.04 or later: - sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme - # On earlier releases: - sudo apt install pip3 - pip3 install -r requirements.txt - # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: - PATH=$HOME/.local/bin:$PATH + # On earlier releases: + sudo apt install python3-pip + pip3 install -r requirements.txt + # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc: + PATH=$HOME/.local/bin:$PATH -3) Make your changes. +3. Make your changes. -4) Test: +4. Test:: -:: + cd docs + make html + sensible-browser _build/html/index.html - cd docs - make html - sensible-browser _build/html/index.html - -5) ``git commit --signoff`` to a branch, ``git push``, and create a pull request. - Mention @rlaager. +5. ``git commit --signoff`` to a branch, ``git push``, and create a pull + request. Mention @rlaager. Encryption ~~~~~~~~~~ @@ -119,8 +117,8 @@ terminal (press Ctrl-Alt-T).
:: - $ sudo apt-add-repository universe - $ sudo apt update + $ sudo apt-add-repository universe + $ sudo apt update 1.3 Optional: Start the OpenSSH server in the Live CD environment: @@ -129,9 +127,9 @@ be convenient. :: - $ passwd - There is no current password; hit enter at that prompt. - $ sudo apt --yes install openssh-server + $ passwd + There is no current password; hit enter at that prompt. + $ sudo apt --yes install openssh-server **Hint:** You can find your IP address with ``ip addr show scope global | grep inet``. Then, from your main machine, @@ -141,13 +139,13 @@ connect with ``ssh ubuntu@IP``. :: - $ sudo -i + $ sudo -i 1.5 Install ZFS in the Live CD environment: :: - # apt install --yes debootstrap gdisk zfs-initramfs + # apt install --yes debootstrap gdisk zfs-initramfs **Note:** You can ignore the two error lines about "AppStream". They are harmless. @@ -159,22 +157,22 @@ Step 2: Disk Formatting :: - If the disk was previously used in an MD array, zero the superblock: - # apt install --yes mdadm - # mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1 + If the disk was previously used in an MD array, zero the superblock: + # apt install --yes mdadm + # mdadm --zero-superblock --force /dev/disk/by-id/scsi-SATA_disk1 - Clear the partition table: - # sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1 + Clear the partition table: + # sgdisk --zap-all /dev/disk/by-id/scsi-SATA_disk1 2.2 Partition your disk: :: - Run this if you need legacy (BIOS) booting: - # sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-SATA_disk1 + Run this if you need legacy (BIOS) booting: + # sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-SATA_disk1 - Run this for UEFI booting (for use now or in the future): - # sgdisk -n3:1M:+512M -t3:EF00 /dev/disk/by-id/scsi-SATA_disk1 + Run this for UEFI booting (for use now or in the future): + # sgdisk -n3:1M:+512M -t3:EF00 /dev/disk/by-id/scsi-SATA_disk1 Choose one of the following options: @@ -182,14 +180,14 @@ Choose 
one of the following options: :: - # sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-SATA_disk1 + # sgdisk -n1:0:0 -t1:BF01 /dev/disk/by-id/scsi-SATA_disk1 2.2b LUKS: :: - # sgdisk -n4:0:+512M -t4:8300 /dev/disk/by-id/scsi-SATA_disk1 - # sgdisk -n1:0:0 -t1:8300 /dev/disk/by-id/scsi-SATA_disk1 + # sgdisk -n4:0:+512M -t4:8300 /dev/disk/by-id/scsi-SATA_disk1 + # sgdisk -n1:0:0 -t1:8300 /dev/disk/by-id/scsi-SATA_disk1 Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the ``/dev/sd*`` device nodes directly can cause sporadic import failures, @@ -197,12 +195,12 @@ especially on systems that have more than one storage pool. **Hints:** -- ``ls -la /dev/disk/by-id`` will list the aliases. -- Are you doing this in a virtual machine? If your virtual disk is - missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using - KVM with virtio; otherwise, read the - `troubleshooting `__ - section. +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting `__ + section. 
2.3 Create the root pool: @@ -212,61 +210,61 @@ Choose one of the following options: :: - # zpool create -o ashift=12 \ - -O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \ - -O mountpoint=/ -R /mnt \ - rpool /dev/disk/by-id/scsi-SATA_disk1-part1 + # zpool create -o ashift=12 \ + -O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \ + -O mountpoint=/ -R /mnt \ + rpool /dev/disk/by-id/scsi-SATA_disk1-part1 2.3b LUKS: :: - # cryptsetup luksFormat -c aes-xts-plain64 -s 256 -h sha256 \ - /dev/disk/by-id/scsi-SATA_disk1-part1 - # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part1 luks1 - # zpool create -o ashift=12 \ - -O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \ - -O mountpoint=/ -R /mnt \ - rpool /dev/mapper/luks1 + # cryptsetup luksFormat -c aes-xts-plain64 -s 256 -h sha256 \ + /dev/disk/by-id/scsi-SATA_disk1-part1 + # cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part1 luks1 + # zpool create -o ashift=12 \ + -O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \ + -O mountpoint=/ -R /mnt \ + rpool /dev/mapper/luks1 **Notes:** -- The use of ``ashift=12`` is recommended here because many drives - today have 4KiB (or larger) physical sectors, even though they - present 512B logical sectors. Also, a future replacement drive may - have 4KiB physical sectors (in which case ``ashift=12`` is desirable) - or 4KiB logical sectors (in which case ``ashift=12`` is required). -- Setting ``normalization=formD`` eliminates some corner cases relating - to UTF-8 filename normalization. It also implies ``utf8only=on``, - which means that only UTF-8 filenames are allowed. If you care to - support non-UTF-8 filenames, do not use this option. For a discussion - of why requiring UTF-8 filenames may be a bad idea, see `The problems - with enforced UTF-8 only - filenames `__. -- Make sure to include the ``-part1`` portion of the drive path. 
If you - forget that, you are specifying the whole disk, which ZFS will then - re-partition, and you will lose the bootloader partition(s). -- For LUKS, the key size chosen is 256 bits. However, XTS mode requires - two keys, so the LUKS key is split in half. Thus, ``-s 256`` means - AES-128, which is the LUKS and Ubuntu default. -- Your passphrase will likely be the weakest link. Choose wisely. See - `section 5 of the cryptsetup - FAQ `__ - for guidance. +- The use of ``ashift=12`` is recommended here because many drives + today have 4KiB (or larger) physical sectors, even though they + present 512B logical sectors. Also, a future replacement drive may + have 4KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- Make sure to include the ``-part1`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 256 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 256`` means + AES-128, which is the LUKS and Ubuntu default. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. **Hints:** -- The root pool does not have to be a single disk; it can have a mirror - or raidz topology. In that case, repeat the partitioning commands for - all the disks which will be part of the pool. Then, create the pool - using - ``zpool create ... 
rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part1 /dev/disk/by-id/scsi-SATA_disk2-part1`` - (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and - list the partitions from additional disks). -- The pool name is arbitrary. On systems that can automatically install - to ZFS, the root pool is named ``rpool`` by default. If you work with - multiple systems, it might be wise to use ``hostname``, - ``hostname0``, or ``hostname-1`` instead. +- The root pool does not have to be a single disk; it can have a mirror + or raidz topology. In that case, repeat the partitioning commands for + all the disks which will be part of the pool. Then, create the pool + using + ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part1 /dev/disk/by-id/scsi-SATA_disk2-part1`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. On systems that can automatically install + to ZFS, the root pool is named ``rpool`` by default. If you work with + multiple systems, it might be wise to use ``hostname``, + ``hostname0``, or ``hostname-1`` instead. Step 3: System Installation --------------------------- @@ -275,7 +273,7 @@ Step 3: System Installation :: - # zfs create -o canmount=off -o mountpoint=none rpool/ROOT + # zfs create -o canmount=off -o mountpoint=none rpool/ROOT On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through ``pkg image-update`` or @@ -288,8 +286,8 @@ system: :: - # zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu - # zfs mount rpool/ROOT/ubuntu + # zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu + # zfs mount rpool/ROOT/ubuntu With ZFS, it is not normally necessary to use a mount command (either ``mount`` or ``zfs mount``). 
This situation is an exception because of @@ -299,26 +297,26 @@ With ZFS, it is not normally necessary to use a mount command (either :: - # zfs create -o setuid=off rpool/home - # zfs create -o mountpoint=/root rpool/home/root - # zfs create -o canmount=off -o setuid=off -o exec=off rpool/var - # zfs create -o com.sun:auto-snapshot=false rpool/var/cache - # zfs create rpool/var/log - # zfs create rpool/var/spool - # zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp + # zfs create -o setuid=off rpool/home + # zfs create -o mountpoint=/root rpool/home/root + # zfs create -o canmount=off -o setuid=off -o exec=off rpool/var + # zfs create -o com.sun:auto-snapshot=false rpool/var/cache + # zfs create rpool/var/log + # zfs create rpool/var/spool + # zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp - If you use /srv on this system: - # zfs create rpool/srv + If you use /srv on this system: + # zfs create rpool/srv - If this system will have games installed: - # zfs create rpool/var/games + If this system will have games installed: + # zfs create rpool/var/games - If this system will store local email in /var/mail: - # zfs create rpool/var/mail + If this system will store local email in /var/mail: + # zfs create rpool/var/mail - If this system will use NFS (locking): - # zfs create -o com.sun:auto-snapshot=false \ - -o mountpoint=/var/lib/nfs rpool/var/nfs + If this system will use NFS (locking): + # zfs create -o com.sun:auto-snapshot=false \ + -o mountpoint=/var/lib/nfs rpool/var/nfs The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling @@ -333,17 +331,17 @@ to exclude transient data. 
:: - # mke2fs -t ext2 /dev/disk/by-id/scsi-SATA_disk1-part4 - # mkdir /mnt/boot - # mount /dev/disk/by-id/scsi-SATA_disk1-part4 /mnt/boot + # mke2fs -t ext2 /dev/disk/by-id/scsi-SATA_disk1-part4 + # mkdir /mnt/boot + # mount /dev/disk/by-id/scsi-SATA_disk1-part4 /mnt/boot 3.5 Install the minimal system: :: - # chmod 1777 /mnt/var/tmp - # debootstrap xenial /mnt - # zfs set devices=off rpool + # chmod 1777 /mnt/var/tmp + # debootstrap xenial /mnt + # zfs set devices=off rpool The ``debootstrap`` command leaves the new system in an unconfigured state. An alternative to using ``debootstrap`` is to copy the entirety @@ -357,13 +355,13 @@ hostname). :: - # echo HOSTNAME > /mnt/etc/hostname + # echo HOSTNAME > /mnt/etc/hostname - # vi /mnt/etc/hosts - Add a line: - 127.0.1.1 HOSTNAME - or if the system has a real name in DNS: - 127.0.1.1 FQDN HOSTNAME + # vi /mnt/etc/hosts + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME **Hint:** Use ``nano`` if you find ``vi`` confusing. @@ -371,12 +369,12 @@ hostname). :: - Find the interface name: - # ip addr show + Find the interface name: + # ip addr show - # vi /mnt/etc/network/interfaces.d/NAME - auto NAME - iface NAME inet dhcp + # vi /mnt/etc/network/interfaces.d/NAME + auto NAME + iface NAME inet dhcp Customize this file if the system is not a DHCP client. @@ -384,25 +382,25 @@ Customize this file if the system is not a DHCP client. 
:: - # vi /mnt/etc/apt/sources.list - deb http://archive.ubuntu.com/ubuntu xenial main universe - deb-src http://archive.ubuntu.com/ubuntu xenial main universe + # vi /mnt/etc/apt/sources.list + deb http://archive.ubuntu.com/ubuntu xenial main universe + deb-src http://archive.ubuntu.com/ubuntu xenial main universe - deb http://security.ubuntu.com/ubuntu xenial-security main universe - deb-src http://security.ubuntu.com/ubuntu xenial-security main universe + deb http://security.ubuntu.com/ubuntu xenial-security main universe + deb-src http://security.ubuntu.com/ubuntu xenial-security main universe - deb http://archive.ubuntu.com/ubuntu xenial-updates main universe - deb-src http://archive.ubuntu.com/ubuntu xenial-updates main universe + deb http://archive.ubuntu.com/ubuntu xenial-updates main universe + deb-src http://archive.ubuntu.com/ubuntu xenial-updates main universe 4.4 Bind the virtual filesystems from the LiveCD environment to the new system and ``chroot`` into it: :: - # mount --rbind /dev /mnt/dev - # mount --rbind /proc /mnt/proc - # mount --rbind /sys /mnt/sys - # chroot /mnt /bin/bash --login + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login **Note:** This is using ``--rbind``, not ``--bind``. @@ -410,59 +408,59 @@ system and ``chroot`` into it: :: - # locale-gen en_US.UTF-8 + # locale-gen en_US.UTF-8 Even if you prefer a non-English system language, always ensure that ``en_US.UTF-8`` is available. 
:: - # echo LANG=en_US.UTF-8 > /etc/default/locale + # echo LANG=en_US.UTF-8 > /etc/default/locale - # dpkg-reconfigure tzdata + # dpkg-reconfigure tzdata - # ln -s /proc/self/mounts /etc/mtab - # apt update - # apt install --yes ubuntu-minimal + # ln -s /proc/self/mounts /etc/mtab + # apt update + # apt install --yes ubuntu-minimal - If you prefer nano over vi, install it: - # apt install --yes nano + If you prefer nano over vi, install it: + # apt install --yes nano 4.6 Install ZFS in the chroot environment for the new system: :: - # apt install --yes --no-install-recommends linux-image-generic - # apt install --yes zfs-initramfs + # apt install --yes --no-install-recommends linux-image-generic + # apt install --yes zfs-initramfs 4.7 For LUKS installs only: :: - # echo UUID=$(blkid -s UUID -o value \ - /dev/disk/by-id/scsi-SATA_disk1-part4) \ - /boot ext2 defaults 0 2 >> /etc/fstab + # echo UUID=$(blkid -s UUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part4) \ + /boot ext2 defaults 0 2 >> /etc/fstab - # apt install --yes cryptsetup + # apt install --yes cryptsetup - # echo luks1 UUID=$(blkid -s UUID -o value \ - /dev/disk/by-id/scsi-SATA_disk1-part1) none \ - luks,discard,initramfs > /etc/crypttab + # echo luks1 UUID=$(blkid -s UUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part1) none \ + luks,discard,initramfs > /etc/crypttab - # vi /etc/udev/rules.d/99-local-crypt.rules - ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}" - ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}" + # vi /etc/udev/rules.d/99-local-crypt.rules + ENV{DM_NAME}!="", SYMLINK+="$env{DM_NAME}" + ENV{DM_NAME}!="", SYMLINK+="dm-name-$env{DM_NAME}" - # ln -s /dev/mapper/luks1 /dev/luks1 + # ln -s /dev/mapper/luks1 /dev/luks1 **Notes:** -- The use of ``initramfs`` is a work-around for `cryptsetup does not - support - ZFS `__. -- The 99-local-crypt.rules file and symlink in /dev are a work-around - for `grub-probe assuming all devices are in - /dev `__. 
+- The use of ``initramfs`` is a work-around for `cryptsetup does not + support + ZFS `__. +- The 99-local-crypt.rules file and symlink in /dev are a work-around + for `grub-probe assuming all devices are in + /dev `__. 4.8 Install GRUB @@ -472,7 +470,7 @@ Choose one of the following options: :: - # apt install --yes grub-pc + # apt install --yes grub-pc Install GRUB to the disk(s), not the partition(s). @@ -480,27 +478,27 @@ Install GRUB to the disk(s), not the partition(s). :: - # apt install dosfstools - # mkdosfs -F 32 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part3 - # mkdir /boot/efi - # echo PARTUUID=$(blkid -s PARTUUID -o value \ - /dev/disk/by-id/scsi-SATA_disk1-part3) \ - /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab - # mount /boot/efi - # apt install --yes grub-efi-amd64 + # apt install dosfstools + # mkdosfs -F 32 -n EFI /dev/disk/by-id/scsi-SATA_disk1-part3 + # mkdir /boot/efi + # echo PARTUUID=$(blkid -s PARTUUID -o value \ + /dev/disk/by-id/scsi-SATA_disk1-part3) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + # mount /boot/efi + # apt install --yes grub-efi-amd64 4.9 Setup system groups: :: - # addgroup --system lpadmin - # addgroup --system sambashare + # addgroup --system lpadmin + # addgroup --system sambashare 4.10 Set a root password :: - # passwd + # passwd 4.11 Fix filesystem mount ordering @@ -518,12 +516,12 @@ feature of systemd automatically use ``After=var-tmp.mount``. 
:: - # zfs set mountpoint=legacy rpool/var/log - # zfs set mountpoint=legacy rpool/var/tmp - # cat >> /etc/fstab << EOF - rpool/var/log /var/log zfs defaults 0 0 - rpool/var/tmp /var/tmp zfs defaults 0 0 - EOF + # zfs set mountpoint=legacy rpool/var/log + # zfs set mountpoint=legacy rpool/var/tmp + # cat >> /etc/fstab << EOF + rpool/var/log /var/log zfs defaults 0 0 + rpool/var/tmp /var/tmp zfs defaults 0 0 + EOF Step 5: GRUB Installation ------------------------- @@ -532,8 +530,8 @@ Step 5: GRUB Installation :: - # grub-probe / - zfs + # grub-probe / + zfs **Note:** GRUB uses ``zpool status`` in order to determine the location of devices. `grub-probe assumes all devices are in @@ -550,14 +548,14 @@ following: :: - # export ZPOOL_VDEV_NAME_PATH=YES + # export ZPOOL_VDEV_NAME_PATH=YES 5.2 Refresh the initrd files: :: - # update-initramfs -c -k all - update-initramfs: Generating /boot/initrd.img-4.4.0-21-generic + # update-initramfs -c -k all + update-initramfs: Generating /boot/initrd.img-4.4.0-21-generic **Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because `cryptsetup does not @@ -568,11 +566,11 @@ ZFS `__. :: - # vi /etc/default/grub - Comment out: GRUB_HIDDEN_TIMEOUT=0 - Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT - Uncomment: GRUB_TERMINAL=console - Save and quit. + # vi /etc/default/grub + Comment out: GRUB_HIDDEN_TIMEOUT=0 + Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + Uncomment: GRUB_TERMINAL=console + Save and quit. Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired. @@ -581,11 +579,11 @@ working, you can undo these changes, if desired. :: - # update-grub - Generating grub configuration file ... - Found linux image: /boot/vmlinuz-4.4.0-21-generic - Found initrd image: /boot/initrd.img-4.4.0-21-generic - done + # update-grub + Generating grub configuration file ... 
+ Found linux image: /boot/vmlinuz-4.4.0-21-generic + Found initrd image: /boot/initrd.img-4.4.0-21-generic + done 5.5 Install the boot loader @@ -593,9 +591,9 @@ working, you can undo these changes, if desired. :: - # grub-install /dev/disk/by-id/scsi-SATA_disk1 - Installing for i386-pc platform. - Installation finished. No error reported. + # grub-install /dev/disk/by-id/scsi-SATA_disk1 + Installing for i386-pc platform. + Installation finished. No error reported. Do not reboot the computer until you get exactly that result message. Note that you are installing GRUB to the whole disk, not a partition. @@ -607,14 +605,14 @@ disk in the pool. :: - # grub-install --target=x86_64-efi --efi-directory=/boot/efi \ - --bootloader-id=ubuntu --recheck --no-floppy + # grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy 5.6 Verify that the ZFS module is installed: :: - # ls /boot/grub/*/zfs.mod + # ls /boot/grub/*/zfs.mod Step 6: First Boot ------------------ @@ -623,7 +621,7 @@ Step 6: First Boot :: - # zfs snapshot rpool/ROOT/ubuntu@install + # zfs snapshot rpool/ROOT/ubuntu@install In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to @@ -633,21 +631,21 @@ save space. :: - # exit + # exit 6.3 Run these commands in the LiveCD environment to unmount all filesystems: :: - # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - # zpool export rpool + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export rpool 6.4 Reboot: :: - # reboot + # reboot 6.5 Wait for the newly installed system to boot normally. Login as root. 
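The advice above about taking snapshots before each upgrade can be sketched as a tiny helper. The ``rpool/ROOT/ubuntu`` dataset name follows this guide; the ``pre-upgrade-YYYYMMDD`` naming scheme is only an illustrative convention, and the ``zfs`` command is printed rather than executed so the sketch is safe to run anywhere:

```shell
# Hypothetical pre-upgrade snapshot helper (the naming scheme is illustrative).
# This is a dry run: it prints the command. Drop "echo" on a real system.
dataset="rpool/ROOT/ubuntu"
snapname="${dataset}@pre-upgrade-$(date +%Y%m%d)"
echo zfs snapshot "$snapname"
```

Old snapshots made this way can later be listed with ``zfs list -t snapshot`` and destroyed to reclaim space, as noted above.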
@@ -659,21 +657,21 @@ Choose one of the following options: :: - # zfs create rpool/home/YOURUSERNAME - # adduser YOURUSERNAME - # cp -a /etc/skel/.[!.]* /home/YOURUSERNAME - # chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME + # zfs create rpool/home/YOURUSERNAME + # adduser YOURUSERNAME + # cp -a /etc/skel/.[!.]* /home/YOURUSERNAME + # chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME 6.6b eCryptfs: :: - # apt install ecryptfs-utils + # apt install ecryptfs-utils - # zfs create -o compression=off -o mountpoint=/home/.ecryptfs/YOURUSERNAME \ - rpool/home/temp-YOURUSERNAME - # adduser --encrypt-home YOURUSERNAME - # zfs rename rpool/home/temp-YOURUSERNAME rpool/home/YOURUSERNAME + # zfs create -o compression=off -o mountpoint=/home/.ecryptfs/YOURUSERNAME \ + rpool/home/temp-YOURUSERNAME + # adduser --encrypt-home YOURUSERNAME + # zfs rename rpool/home/temp-YOURUSERNAME rpool/home/YOURUSERNAME The temporary name for the dataset is required to work-around `a bug in ecryptfs-setup-private `__. @@ -692,7 +690,7 @@ administrator: :: - # usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME + # usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME 6.8 Mirror GRUB @@ -703,23 +701,23 @@ disks: :: - # dpkg-reconfigure grub-pc - Hit enter until you get to the device selection screen. - Select (using the space bar) all of the disks (not partitions) in your pool. + # dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. + Select (using the space bar) all of the disks (not partitions) in your pool. 
6.8b UEFI :: - # umount /boot/efi + # umount /boot/efi - For the second and subsequent disks (increment ubuntu-2 to -3, etc.): - # dd if=/dev/disk/by-id/scsi-SATA_disk1-part3 \ - of=/dev/disk/by-id/scsi-SATA_disk2-part3 - # efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ - -p 3 -L "ubuntu-2" -l '\EFI\Ubuntu\grubx64.efi' + For the second and subsequent disks (increment ubuntu-2 to -3, etc.): + # dd if=/dev/disk/by-id/scsi-SATA_disk1-part3 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part3 + # efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 3 -L "ubuntu-2" -l '\EFI\Ubuntu\grubx64.efi' - # mount /boot/efi + # mount /boot/efi Step 7: Configure Swap ---------------------- @@ -728,10 +726,10 @@ Step 7: Configure Swap :: - # zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ - -o logbias=throughput -o sync=always \ - -o primarycache=metadata -o secondarycache=none \ - -o com.sun:auto-snapshot=false rpool/swap + # zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap You can adjust the size (the ``4G`` part) to your needs. @@ -753,25 +751,25 @@ files. Never use a short ``/dev/zdX`` device name. 
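A side note on the ``-b $(getconf PAGESIZE)`` option in the swap zvol creation above: it matches the zvol block size to the kernel page size so that swap I/O stays page-aligned. A quick, harmless way to inspect the value on a given machine (typically 4096 on amd64; other architectures differ):

```shell
# Print the page size that the swap zvol's block size would be matched to.
# 4096 bytes is typical on amd64; e.g. ppc64el systems commonly use 65536.
pagesize=$(getconf PAGESIZE)
echo "zvol block size would be: ${pagesize} bytes"
```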
:: - # mkswap -f /dev/zvol/rpool/swap - # echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab + # mkswap -f /dev/zvol/rpool/swap + # echo /dev/zvol/rpool/swap none swap defaults 0 0 >> /etc/fstab 7.2b eCryptfs: :: - # apt install cryptsetup - # echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom \ - swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab - # systemctl daemon-reload - # systemctl start systemd-cryptsetup@cryptswap1.service - # echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab + # apt install cryptsetup + # echo cryptswap1 /dev/zvol/rpool/swap /dev/urandom \ + swap,cipher=aes-xts-plain64:sha256,size=256 >> /etc/crypttab + # systemctl daemon-reload + # systemctl start systemd-cryptsetup@cryptswap1.service + # echo /dev/mapper/cryptswap1 none swap defaults 0 0 >> /etc/fstab 7.3 Enable the swap device: :: - # swapon -av + # swapon -av Step 8: Full Software Installation ---------------------------------- @@ -780,7 +778,7 @@ Step 8: Full Software Installation :: - # apt dist-upgrade --yes + # apt dist-upgrade --yes 8.2 Install a regular set of software: @@ -790,13 +788,13 @@ Choose one of the following options: :: - # apt install --yes ubuntu-standard + # apt install --yes ubuntu-standard 8.2b Install a full GUI environment: :: - # apt install --yes ubuntu-desktop + # apt install --yes ubuntu-desktop **Hint**: If you are installing a full GUI environment, you will likely want to manage your network with NetworkManager. In that case, @@ -814,17 +812,17 @@ highly recommended): :: - # for file in /etc/logrotate.d/* ; do - if grep -Eq "(^|[^#y])compress" "$file" ; then - sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" - fi - done + # for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done 8.4 Reboot: :: - # reboot + # reboot Step 9: Final Cleanup --------------------- @@ -836,13 +834,13 @@ created. 
Ensure the system (including networking) works normally. :: - $ sudo zfs destroy rpool/ROOT/ubuntu@install + $ sudo zfs destroy rpool/ROOT/ubuntu@install 9.3 Optional: Disable the root password :: - $ sudo usermod -p '*' root + $ sudo usermod -p '*' root 9.4 Optional: @@ -851,13 +849,13 @@ you are using LUKS, it makes the prompt look nicer. :: - $ sudo vi /etc/default/grub - Uncomment GRUB_HIDDEN_TIMEOUT=0 - Add quiet and splash to GRUB_CMDLINE_LINUX_DEFAULT - Comment out GRUB_TERMINAL=console - Save and quit. + $ sudo vi /etc/default/grub + Uncomment GRUB_HIDDEN_TIMEOUT=0 + Add quiet and splash to GRUB_CMDLINE_LINUX_DEFAULT + Comment out GRUB_TERMINAL=console + Save and quit. - $ sudo update-grub + $ sudo update-grub Troubleshooting --------------- @@ -871,28 +869,28 @@ Become root and install the ZFS utilities: :: - $ sudo -i - # apt update - # apt install --yes zfsutils-linux + $ sudo -i + # apt update + # apt install --yes zfsutils-linux This will automatically import your pool. Export it and re-import it to get the mounts right: :: - # zpool export -a - # zpool import -N -R /mnt rpool - # zfs mount rpool/ROOT/ubuntu - # zfs mount -a + # zpool export -a + # zpool import -N -R /mnt rpool + # zfs mount rpool/ROOT/ubuntu + # zfs mount -a If needed, you can chroot into your installed environment: :: - # mount --rbind /dev /mnt/dev - # mount --rbind /proc /mnt/proc - # mount --rbind /sys /mnt/sys - # chroot /mnt /bin/bash --login + # mount --rbind /dev /mnt/dev + # mount --rbind /proc /mnt/proc + # mount --rbind /sys /mnt/sys + # chroot /mnt /bin/bash --login Do whatever you need to do to fix your system. @@ -900,9 +898,9 @@ When done, cleanup: :: - # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - # zpool export rpool - # reboot + # mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + # zpool export rpool + # reboot MPT2SAS ~~~~~~~ @@ -935,9 +933,9 @@ this error message. 
VMware ~~~~~~ -- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere - configuration. Doing this ensures that ``/dev/disk`` aliases are - created in the guest. +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. QEMU/KVM/XEN ~~~~~~~~~~~~ @@ -950,11 +948,11 @@ this on the host: :: - $ sudo apt install ovmf - $ sudo vi /etc/libvirt/qemu.conf - Uncomment these lines: - nvram = [ - "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", - "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" - ] - $ sudo service libvirt-bin restart + $ sudo apt install ovmf + $ sudo vi /etc/libvirt/qemu.conf + Uncomment these lines: + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" + ] + $ sudo service libvirt-bin restart diff --git a/docs/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst b/docs/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst index fbc1253..7fce19e 100644 --- a/docs/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst +++ b/docs/Getting Started/Ubuntu/Ubuntu 18.04 Root on ZFS.rst @@ -1,8 +1,10 @@ +.. highlight:: sh + Ubuntu 18.04 Root on ZFS ======================== .. contents:: Table of Contents - :local: + :local: Overview -------- @@ -10,20 +12,20 @@ Overview Caution ~~~~~~~ -- This HOWTO uses a whole physical disk. -- Do not use these instructions for dual-booting. -- Backup your data. Any existing data will be lost. +- This HOWTO uses a whole physical disk. +- Do not use these instructions for dual-booting. +- Backup your data. Any existing data will be lost. System Requirements ~~~~~~~~~~~~~~~~~~~ -- `Ubuntu 18.04.3 ("Bionic") Desktop - CD `__ - (*not* any server images) -- Installing on a drive which presents 4KiB logical sectors (a “4Kn” - drive) only works with UEFI booting. This not unique to ZFS. 
`GRUB
-  does not and will not work on 4Kn with legacy (BIOS)
-  booting. `__
+- `Ubuntu 18.04.3 ("Bionic") Desktop
+  CD `__
+  (*not* any server images)
+- Installing on a drive which presents 4KiB logical sectors (a “4Kn”
+  drive) only works with UEFI booting. This is not unique to ZFS. `GRUB
+  does not and will not work on 4Kn with legacy (BIOS)
+  booting. `__
 
 Computers that have less than 2 GiB of memory run ZFS slowly. 4 GiB of
 memory is recommended for normal performance in basic workloads. If you
@@ -44,32 +46,29 @@ mention @rlaager.
 Contributing
 ~~~~~~~~~~~~
 
-1) Fork and clone: https://github.com/openzfs/openzfs-docs
+1. Fork and clone: https://github.com/openzfs/openzfs-docs
 
-2) Install the tools:
+2. Install the tools::
 
-::
+   # On Debian 11 / Ubuntu 20.04 or later:
+   sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme
 
-   # On Debian 11 / Ubuntu 20.04 or later:
-   sudo apt install python3-sphinx python3-sphinx-issues python3-sphinx-rtd-theme
-   # On earlier releases:
-   sudo apt install pip3
-   pip3 install -r requirements.txt
-   # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
-   PATH=$HOME/.local/bin:$PATH
+   # On earlier releases:
+   sudo apt install python3-pip
+   pip3 install -r requirements.txt
+   # Add ~/.local/bin to your $PATH, e.g. by adding this to ~/.bashrc:
+   PATH=$HOME/.local/bin:$PATH
 
-3) Make your changes.
+3. Make your changes.
 
-4) Test:
+4. Test::
 
-::
+   cd docs
+   make html
+   sensible-browser _build/html/index.html
 
-   cd docs
-   make html
-   sensible-browser _build/html/index.html
-
-5) ``git commit --signoff`` to a branch, ``git push``, and create a pull request.
-   Mention @rlaager.
+5. ``git commit --signoff`` to a branch, ``git push``, and create a pull
+   request. Mention @rlaager.
 
 Encryption
 ~~~~~~~~~~
@@ -95,49 +94,39 @@ Step 1: Prepare The Install Environment
 the Internet as appropriate (e.g. join your WiFi network). Open a
 terminal (press Ctrl-Alt-T).

-1.2 Setup and update the repositories: +1.2 Setup and update the repositories:: -:: - - sudo apt-add-repository universe - sudo apt update + sudo apt-add-repository universe + sudo apt update 1.3 Optional: Install and start the OpenSSH server in the Live CD environment: If you have a second system, using SSH to access the target system can -be convenient. +be convenient:: -:: - - passwd - There is no current password; hit enter at that prompt. - sudo apt install --yes openssh-server + passwd + # There is no current password; hit enter at that prompt. + sudo apt install --yes openssh-server **Hint:** You can find your IP address with ``ip addr show scope global | grep inet``. Then, from your main machine, connect with ``ssh ubuntu@IP``. -1.4 Become root: +1.4 Become root:: -:: + sudo -i - sudo -i +1.5 Install ZFS in the Live CD environment:: -1.5 Install ZFS in the Live CD environment: - -:: - - apt install --yes debootstrap gdisk zfs-initramfs + apt install --yes debootstrap gdisk zfs-initramfs Step 2: Disk Formatting ----------------------- -2.1 Set a variable with the disk name: +2.1 Set a variable with the disk name:: -:: - - DISK=/dev/disk/by-id/scsi-SATA_disk1 + DISK=/dev/disk/by-id/scsi-SATA_disk1 Always use the long ``/dev/disk/by-id/*`` aliases with ZFS. Using the ``/dev/sd*`` device nodes directly can cause sporadic import failures, @@ -145,84 +134,68 @@ especially on systems that have more than one storage pool. **Hints:** -- ``ls -la /dev/disk/by-id`` will list the aliases. -- Are you doing this in a virtual machine? If your virtual disk is - missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using - KVM with virtio; otherwise, read the - `troubleshooting <#troubleshooting>`__ section. +- ``ls -la /dev/disk/by-id`` will list the aliases. +- Are you doing this in a virtual machine? 
If your virtual disk is + missing from ``/dev/disk/by-id``, use ``/dev/vda`` if you are using + KVM with virtio; otherwise, read the + `troubleshooting <#troubleshooting>`__ section. 2.2 If you are re-using a disk, clear it as necessary: -If the disk was previously used in an MD array, zero the superblock: +If the disk was previously used in an MD array, zero the superblock:: -:: + apt install --yes mdadm + mdadm --zero-superblock --force $DISK - apt install --yes mdadm - mdadm --zero-superblock --force $DISK +Clear the partition table:: -Clear the partition table: - -:: - - sgdisk --zap-all $DISK + sgdisk --zap-all $DISK 2.3 Partition your disk(s): -Run this if you need legacy (BIOS) booting: +Run this if you need legacy (BIOS) booting:: -:: + sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK - sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK +Run this for UEFI booting (for use now or in the future):: -Run this for UEFI booting (for use now or in the future): + sgdisk -n2:1M:+512M -t2:EF00 $DISK -:: +Run this for the boot pool:: - sgdisk -n2:1M:+512M -t2:EF00 $DISK - -Run this for the boot pool: - -:: - - sgdisk -n3:0:+1G -t3:BF01 $DISK + sgdisk -n3:0:+1G -t3:BF01 $DISK Choose one of the following options: -2.3a Unencrypted: +2.3a Unencrypted:: -:: + sgdisk -n4:0:0 -t4:BF01 $DISK - sgdisk -n4:0:0 -t4:BF01 $DISK +2.3b LUKS:: -2.3b LUKS: - -:: - - sgdisk -n4:0:0 -t4:8300 $DISK + sgdisk -n4:0:0 -t4:8300 $DISK If you are creating a mirror or raidz topology, repeat the partitioning commands for all the disks which will be part of the pool. 
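The "repeat the partitioning commands for all the disks" step above can be sketched as a loop. This is a dry run that only prints the ``sgdisk`` invocations from this guide; the disk paths are placeholders for your real ``/dev/disk/by-id`` aliases, and ``echo`` must be removed to actually partition anything:

```shell
# Dry-run sketch: print this guide's partitioning commands for each pool member.
# The disk paths are placeholders; remove "echo" to really run sgdisk.
plan_partitions() {
    for DISK in "$@"; do
        echo "sgdisk -a1 -n1:24K:+1000K -t1:EF02 $DISK"  # legacy (BIOS) boot
        echo "sgdisk -n2:1M:+512M -t2:EF00 $DISK"        # UEFI system partition
        echo "sgdisk -n3:0:+1G -t3:BF01 $DISK"           # boot pool
        echo "sgdisk -n4:0:0 -t4:BF01 $DISK"             # root pool (unencrypted)
    done
}

plan_partitions /dev/disk/by-id/scsi-SATA_disk1 /dev/disk/by-id/scsi-SATA_disk2
```

For the LUKS layout, the last line of the loop would instead use ``-t4:8300``, matching option 2.3b above.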
-2.4 Create the boot pool: +2.4 Create the boot pool:: -:: - - zpool create -o ashift=12 -d \ - -o feature@async_destroy=enabled \ - -o feature@bookmarks=enabled \ - -o feature@embedded_data=enabled \ - -o feature@empty_bpobj=enabled \ - -o feature@enabled_txg=enabled \ - -o feature@extensible_dataset=enabled \ - -o feature@filesystem_limits=enabled \ - -o feature@hole_birth=enabled \ - -o feature@large_blocks=enabled \ - -o feature@lz4_compress=enabled \ - -o feature@spacemap_histogram=enabled \ - -o feature@userobj_accounting=enabled \ - -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ - -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt bpool ${DISK}-part3 + zpool create -o ashift=12 -d \ + -o feature@async_destroy=enabled \ + -o feature@bookmarks=enabled \ + -o feature@embedded_data=enabled \ + -o feature@empty_bpobj=enabled \ + -o feature@enabled_txg=enabled \ + -o feature@extensible_dataset=enabled \ + -o feature@filesystem_limits=enabled \ + -o feature@hole_birth=enabled \ + -o feature@large_blocks=enabled \ + -o feature@lz4_compress=enabled \ + -o feature@spacemap_histogram=enabled \ + -o feature@userobj_accounting=enabled \ + -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ + -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt bpool ${DISK}-part3 You should not need to customize any of the options for the boot pool. @@ -236,110 +209,104 @@ read-only compatible features are "supported" by GRUB. **Hints:** -- If you are creating a mirror or raidz topology, create the pool using - ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` - (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and - list the partitions from additional disks). -- The pool name is arbitrary. If changed, the new name must be used - consistently. The ``bpool`` convention originated in this HOWTO. 
+- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... bpool mirror /dev/disk/by-id/scsi-SATA_disk1-part3 /dev/disk/by-id/scsi-SATA_disk2-part3`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). +- The pool name is arbitrary. If changed, the new name must be used + consistently. The ``bpool`` convention originated in this HOWTO. 2.5 Create the root pool: Choose one of the following options: -2.5a Unencrypted: +2.5a Unencrypted:: -:: + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool ${DISK}-part4 - zpool create -o ashift=12 \ - -O acltype=posixacl -O canmount=off -O compression=lz4 \ - -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt rpool ${DISK}-part4 +2.5b LUKS:: -2.5b LUKS: + cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 + cryptsetup luksOpen ${DISK}-part4 luks1 + zpool create -o ashift=12 \ + -O acltype=posixacl -O canmount=off -O compression=lz4 \ + -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ + -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 -:: - - cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 - cryptsetup luksOpen ${DISK}-part4 luks1 - zpool create -o ashift=12 \ - -O acltype=posixacl -O canmount=off -O compression=lz4 \ - -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ - -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 - -- The use of ``ashift=12`` is recommended here because many drives - today have 4KiB (or larger) physical sectors, even though they - present 512B logical sectors. Also, a future replacement drive may - have 4KiB physical sectors (in which case ``ashift=12`` is desirable) - or 4KiB logical sectors (in which case ``ashift=12`` is required). 
-- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you - do not want this, remove that option, but later add - ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` - for ``/var/log``, as `journald requires - ACLs `__ -- Setting ``normalization=formD`` eliminates some corner cases relating - to UTF-8 filename normalization. It also implies ``utf8only=on``, - which means that only UTF-8 filenames are allowed. If you care to - support non-UTF-8 filenames, do not use this option. For a discussion - of why requiring UTF-8 filenames may be a bad idea, see `The problems - with enforced UTF-8 only - filenames `__. -- Setting ``relatime=on`` is a middle ground between classic POSIX - ``atime`` behavior (with its significant performance impact) and - ``atime=off`` (which provides the best performance by completely - disabling atime updates). Since Linux 2.6.30, ``relatime`` has been - the default for other filesystems. See `RedHat's - documentation `__ - for further information. -- Setting ``xattr=sa`` `vastly improves the performance of extended - attributes `__. - Inside ZFS, extended attributes are used to implement POSIX ACLs. - Extended attributes can also be used by user-space applications. - `They are used by some desktop GUI - applications. `__ - `They can be used by Samba to store Windows ACLs and DOS attributes; - they are required for a Samba Active Directory domain - controller. `__ - Note that ```xattr=sa`` is - Linux-specific. `__ - If you move your ``xattr=sa`` pool to another OpenZFS implementation - besides ZFS-on-Linux, extended attributes will not be readable - (though your data will be). If portability of extended attributes is - important to you, omit the ``-O xattr=sa`` above. Even if you do not - want ``xattr=sa`` for the whole pool, it is probably fine to use it - for ``/var/log``. -- Make sure to include the ``-part4`` portion of the drive path. 
If you - forget that, you are specifying the whole disk, which ZFS will then - re-partition, and you will lose the bootloader partition(s). -- For LUKS, the key size chosen is 512 bits. However, XTS mode requires - two keys, so the LUKS key is split in half. Thus, ``-s 512`` means - AES-256. -- Your passphrase will likely be the weakest link. Choose wisely. See - `section 5 of the cryptsetup - FAQ `__ - for guidance. +- The use of ``ashift=12`` is recommended here because many drives + today have 4KiB (or larger) physical sectors, even though they + present 512B logical sectors. Also, a future replacement drive may + have 4KiB physical sectors (in which case ``ashift=12`` is desirable) + or 4KiB logical sectors (in which case ``ashift=12`` is required). +- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you + do not want this, remove that option, but later add + ``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` + for ``/var/log``, as `journald requires + ACLs `__ +- Setting ``normalization=formD`` eliminates some corner cases relating + to UTF-8 filename normalization. It also implies ``utf8only=on``, + which means that only UTF-8 filenames are allowed. If you care to + support non-UTF-8 filenames, do not use this option. For a discussion + of why requiring UTF-8 filenames may be a bad idea, see `The problems + with enforced UTF-8 only + filenames `__. +- Setting ``relatime=on`` is a middle ground between classic POSIX + ``atime`` behavior (with its significant performance impact) and + ``atime=off`` (which provides the best performance by completely + disabling atime updates). Since Linux 2.6.30, ``relatime`` has been + the default for other filesystems. See `RedHat's + documentation `__ + for further information. +- Setting ``xattr=sa`` `vastly improves the performance of extended + attributes `__. + Inside ZFS, extended attributes are used to implement POSIX ACLs. 
+ Extended attributes can also be used by user-space applications. + `They are used by some desktop GUI + applications. `__ + `They can be used by Samba to store Windows ACLs and DOS attributes; + they are required for a Samba Active Directory domain + controller. `__ + Note that ``xattr=sa`` is + `Linux-specific `__. + If you move your ``xattr=sa`` pool to another OpenZFS implementation + besides ZFS-on-Linux, extended attributes will not be readable + (though your data will be). If portability of extended attributes is + important to you, omit the ``-O xattr=sa`` above. Even if you do not + want ``xattr=sa`` for the whole pool, it is probably fine to use it + for ``/var/log``. +- Make sure to include the ``-part4`` portion of the drive path. If you + forget that, you are specifying the whole disk, which ZFS will then + re-partition, and you will lose the bootloader partition(s). +- For LUKS, the key size chosen is 512 bits. However, XTS mode requires + two keys, so the LUKS key is split in half. Thus, ``-s 512`` means + AES-256. +- Your passphrase will likely be the weakest link. Choose wisely. See + `section 5 of the cryptsetup + FAQ `__ + for guidance. **Hints:** -- If you are creating a mirror or raidz topology, create the pool using - ``zpool create ... rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` - (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and - list the partitions from additional disks). For LUKS, use - ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will - have to create using ``cryptsetup``. -- The pool name is arbitrary. If changed, the new name must be used - consistently. On systems that can automatically install to ZFS, the - root pool is named ``rpool`` by default. +- If you are creating a mirror or raidz topology, create the pool using + ``zpool create ... 
rpool mirror /dev/disk/by-id/scsi-SATA_disk1-part4 /dev/disk/by-id/scsi-SATA_disk2-part4`` + (or replace ``mirror`` with ``raidz``, ``raidz2``, or ``raidz3`` and + list the partitions from additional disks). For LUKS, use + ``/dev/mapper/luks1``, ``/dev/mapper/luks2``, etc., which you will + have to create using ``cryptsetup``. +- The pool name is arbitrary. If changed, the new name must be used + consistently. On systems that can automatically install to ZFS, the + root pool is named ``rpool`` by default. Step 3: System Installation --------------------------- -3.1 Create filesystem datasets to act as containers: +3.1 Create filesystem datasets to act as containers:: -:: - - zfs create -o canmount=off -o mountpoint=none rpool/ROOT - zfs create -o canmount=off -o mountpoint=none bpool/BOOT + zfs create -o canmount=off -o mountpoint=none rpool/ROOT + zfs create -o canmount=off -o mountpoint=none bpool/BOOT On Solaris systems, the root filesystem is cloned and the suffix is incremented for major system changes through ``pkg image-update`` or @@ -347,111 +314,83 @@ incremented for major system changes through ``pkg image-update`` or unimplemented. Even without such a tool, it can still be used for manually created clones. -3.2 Create filesystem datasets for the root and boot filesystems: +3.2 Create filesystem datasets for the root and boot filesystems:: -:: + zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu + zfs mount rpool/ROOT/ubuntu - zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/ubuntu - zfs mount rpool/ROOT/ubuntu - - zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu - zfs mount bpool/BOOT/ubuntu + zfs create -o canmount=noauto -o mountpoint=/boot bpool/BOOT/ubuntu + zfs mount bpool/BOOT/ubuntu With ZFS, it is not normally necessary to use a mount command (either ``mount`` or ``zfs mount``). This situation is an exception because of ``canmount=noauto``. 
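If you want to see why the explicit ``zfs mount`` commands are needed, the following optional check (a sketch, not part of the install; it uses the dataset names from this guide, and the ``command -v`` guard just makes it safe to paste on any machine) shows the ``canmount`` settings and the resulting mounts:

```shell
# Optional sanity check: canmount=noauto filesystems are skipped by
# `zfs mount -a`, which is why the root and boot datasets above are
# mounted explicitly by name.
if command -v zfs >/dev/null; then
    zfs get -H -o name,value canmount rpool/ROOT/ubuntu bpool/BOOT/ubuntu
    zfs mount | grep -E '^(rpool/ROOT|bpool/BOOT)/ubuntu'
fi
```

Both datasets should report ``noauto`` and still appear in the ``zfs mount`` output, because they were mounted by name rather than by ``zfs mount -a``.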
-3.3 Create datasets: +3.3 Create datasets:: -:: - - zfs create rpool/home - zfs create -o mountpoint=/root rpool/home/root - zfs create -o canmount=off rpool/var - zfs create -o canmount=off rpool/var/lib - zfs create rpool/var/log - zfs create rpool/var/spool + zfs create rpool/home + zfs create -o mountpoint=/root rpool/home/root + zfs create -o canmount=off rpool/var + zfs create -o canmount=off rpool/var/lib + zfs create rpool/var/log + zfs create rpool/var/spool The datasets below are optional, depending on your preferences and/or software choices. -If you wish to exclude these from snapshots: +If you wish to exclude these from snapshots:: -:: + zfs create -o com.sun:auto-snapshot=false rpool/var/cache + zfs create -o com.sun:auto-snapshot=false rpool/var/tmp + chmod 1777 /mnt/var/tmp - zfs create -o com.sun:auto-snapshot=false rpool/var/cache - zfs create -o com.sun:auto-snapshot=false rpool/var/tmp - chmod 1777 /mnt/var/tmp +If you use /opt on this system:: -If you use /opt on this system: + zfs create rpool/opt -:: +If you use /srv on this system:: - zfs create rpool/opt + zfs create rpool/srv -If you use /srv on this system: +If you use /usr/local on this system:: -:: + zfs create -o canmount=off rpool/usr + zfs create rpool/usr/local - zfs create rpool/srv +If this system will have games installed:: -If you use /usr/local on this system: + zfs create rpool/var/games -:: +If this system will store local email in /var/mail:: - zfs create -o canmount=off rpool/usr - zfs create rpool/usr/local + zfs create rpool/var/mail -If this system will have games installed: +If this system will use Snap packages:: -:: + zfs create rpool/var/snap - zfs create rpool/var/games +If you use /var/www on this system:: -If this system will store local email in /var/mail: + zfs create rpool/var/www -:: +If this system will use GNOME:: - zfs create rpool/var/mail - -If this system will use Snap packages: - -:: - - zfs create rpool/var/snap - -If you use /var/www on this system: 
- -:: - - zfs create rpool/var/www - -If this system will use GNOME: - -:: - - zfs create rpool/var/lib/AccountsService + zfs create rpool/var/lib/AccountsService If this system will use Docker (which manages its own datasets & -snapshots): +snapshots):: -:: + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker - zfs create -o com.sun:auto-snapshot=false rpool/var/lib/docker +If this system will use NFS (locking):: -If this system will use NFS (locking): - -:: - - zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs + zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs A tmpfs is recommended later, but if you want a separate dataset for -/tmp: +/tmp:: -:: - - zfs create -o com.sun:auto-snapshot=false rpool/tmp - chmod 1777 /mnt/tmp + zfs create -o com.sun:auto-snapshot=false rpool/tmp + chmod 1777 /mnt/tmp The primary goal of this dataset layout is to separate the OS from user data. This allows the root filesystem to be rolled back without rolling @@ -467,12 +406,10 @@ of your root filesystem. It also allows you to set a quota on ``rpool/tmp``, if you want to limit the maximum space used. Otherwise, you can use a tmpfs (RAM filesystem) later. -3.4 Install the minimal system: +3.4 Install the minimal system:: -:: - - debootstrap bionic /mnt - zfs set devices=off rpool + debootstrap bionic /mnt + zfs set devices=off rpool The ``debootstrap`` command leaves the new system in an unconfigured state. An alternative to using ``debootstrap`` is to copy the entirety @@ -482,111 +419,99 @@ Step 4: System Configuration ---------------------------- 4.1 Configure the hostname (change ``HOSTNAME`` to the desired -hostname). +hostname):: -:: + echo HOSTNAME > /mnt/etc/hostname + vi /mnt/etc/hosts - echo HOSTNAME > /mnt/etc/hostname +.. 
code-block:: text - vi /mnt/etc/hosts - Add a line: - 127.0.1.1 HOSTNAME - or if the system has a real name in DNS: - 127.0.1.1 FQDN HOSTNAME + Add a line: + 127.0.1.1 HOSTNAME + or if the system has a real name in DNS: + 127.0.1.1 FQDN HOSTNAME **Hint:** Use ``nano`` if you find ``vi`` confusing. 4.2 Configure the network interface: -Find the interface name: +Find the interface name:: -:: + ip addr show - ip addr show +Adjust NAME below to match your interface name:: -Adjust NAME below to match your interface name: + vi /mnt/etc/netplan/01-netcfg.yaml -:: +.. code-block:: yaml - vi /mnt/etc/netplan/01-netcfg.yaml - network: - version: 2 - ethernets: - NAME: - dhcp4: true + network: + version: 2 + ethernets: + NAME: + dhcp4: true Customize this file if the system is not a DHCP client. -4.3 Configure the package sources: +4.3 Configure the package sources:: -:: + vi /mnt/etc/apt/sources.list - vi /mnt/etc/apt/sources.list - deb http://archive.ubuntu.com/ubuntu bionic main universe - deb-src http://archive.ubuntu.com/ubuntu bionic main universe +.. 
code-block:: sourceslist - deb http://security.ubuntu.com/ubuntu bionic-security main universe - deb-src http://security.ubuntu.com/ubuntu bionic-security main universe + deb http://archive.ubuntu.com/ubuntu bionic main universe + deb-src http://archive.ubuntu.com/ubuntu bionic main universe - deb http://archive.ubuntu.com/ubuntu bionic-updates main universe - deb-src http://archive.ubuntu.com/ubuntu bionic-updates main universe + deb http://security.ubuntu.com/ubuntu bionic-security main universe + deb-src http://security.ubuntu.com/ubuntu bionic-security main universe + + deb http://archive.ubuntu.com/ubuntu bionic-updates main universe + deb-src http://archive.ubuntu.com/ubuntu bionic-updates main universe 4.4 Bind the virtual filesystems from the LiveCD environment to the new -system and ``chroot`` into it: +system and ``chroot`` into it:: -:: - - mount --rbind /dev /mnt/dev - mount --rbind /proc /mnt/proc - mount --rbind /sys /mnt/sys - chroot /mnt /usr/bin/env DISK=$DISK bash --login + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /usr/bin/env DISK=$DISK bash --login **Note:** This is using ``--rbind``, not ``--bind``. -4.5 Configure a basic system environment: +4.5 Configure a basic system environment:: -:: + ln -s /proc/self/mounts /etc/mtab + apt update - ln -s /proc/self/mounts /etc/mtab - apt update - - dpkg-reconfigure locales + dpkg-reconfigure locales Even if you prefer a non-English system language, always ensure that -``en_US.UTF-8`` is available. 
+``en_US.UTF-8`` is available:: -:: + dpkg-reconfigure tzdata - dpkg-reconfigure tzdata +If you prefer nano over vi, install it:: -If you prefer nano over vi, install it: + apt install --yes nano -:: +4.6 Install ZFS in the chroot environment for the new system:: - apt install --yes nano - -4.6 Install ZFS in the chroot environment for the new system: - -:: - - apt install --yes --no-install-recommends linux-image-generic - apt install --yes zfs-initramfs + apt install --yes --no-install-recommends linux-image-generic + apt install --yes zfs-initramfs **Hint:** For the HWE kernel, install ``linux-image-generic-hwe-18.04`` instead of ``linux-image-generic``. -4.7 For LUKS installs only, setup crypttab: +4.7 For LUKS installs only, setup crypttab:: -:: + apt install --yes cryptsetup - apt install --yes cryptsetup + echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ + luks,discard,initramfs > /etc/crypttab - echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ - luks,discard,initramfs > /etc/crypttab - -- The use of ``initramfs`` is a work-around for `cryptsetup does not - support - ZFS `__. +- The use of ``initramfs`` is a work-around for `cryptsetup does not + support + ZFS `__. **Hint:** If you are creating a mirror or raidz topology, repeat the ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. @@ -595,40 +520,34 @@ instead of ``linux-image-generic``. Choose one of the following options: -4.8a Install GRUB for legacy (BIOS) booting +4.8a Install GRUB for legacy (BIOS) booting:: -:: - - apt install --yes grub-pc + apt install --yes grub-pc Install GRUB to the disk(s), not the partition(s). 
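If you later need to select the GRUB install devices non-interactively (for example, when scripting this procedure), one possible approach is to preseed the choice via debconf before the package asks. This is a sketch, not part of the guide's required steps; ``grub-pc/install_devices`` is the debconf key the package uses, and ``DISK`` is assumed to be set as earlier in this HOWTO:

```shell
# Sketch: preseed grub-pc's device selection so the package installs
# GRUB to the whole disk without prompting. Adjust DISK as needed.
if command -v debconf-set-selections >/dev/null; then
    echo "grub-pc grub-pc/install_devices multiselect ${DISK}" \
        | debconf-set-selections
fi
```

The same preseed also suppresses the prompt from ``dpkg-reconfigure grub-pc`` used for mirrored setups later.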
-4.8b Install GRUB for UEFI booting +4.8b Install GRUB for UEFI booting:: -:: + apt install dosfstools + mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 + mkdir /boot/efi + echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \ + /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab + mount /boot/efi + apt install --yes grub-efi-amd64-signed shim-signed - apt install dosfstools - mkdosfs -F 32 -s 1 -n EFI ${DISK}-part2 - mkdir /boot/efi - echo PARTUUID=$(blkid -s PARTUUID -o value ${DISK}-part2) \ - /boot/efi vfat nofail,x-systemd.device-timeout=1 0 1 >> /etc/fstab - mount /boot/efi - apt install --yes grub-efi-amd64-signed shim-signed - -- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which - present 4 KiB logical sectors (“4Kn” drives) to meet the minimum - cluster size (given the partition size of 512 MiB) for FAT32. It also - works fine on drives which present 512 B sectors. +- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which + present 4 KiB logical sectors (“4Kn” drives) to meet the minimum + cluster size (given the partition size of 512 MiB) for FAT32. It also + works fine on drives which present 512 B sectors. **Note:** If you are creating a mirror or raidz topology, this step only installs GRUB on the first disk. The other disk(s) will be handled later. -4.9 Set a root password +4.9 Set a root password:: -:: - - passwd + passwd 4.10 Enable importing bpool @@ -638,23 +557,26 @@ or whether ``zfs-import-scan.service`` is enabled. :: - vi /etc/systemd/system/zfs-import-bpool.service - [Unit] - DefaultDependencies=no - Before=zfs-import-scan.service - Before=zfs-import-cache.service + vi /etc/systemd/system/zfs-import-bpool.service - [Service] - Type=oneshot - RemainAfterExit=yes - ExecStart=/sbin/zpool import -N -o cachefile=none bpool +.. 
code-block:: ini - [Install] - WantedBy=zfs-import.target + [Unit] + DefaultDependencies=no + Before=zfs-import-scan.service + Before=zfs-import-cache.service + + [Service] + Type=oneshot + RemainAfterExit=yes + ExecStart=/sbin/zpool import -N -o cachefile=none bpool + + [Install] + WantedBy=zfs-import.target :: - systemctl enable zfs-import-bpool.service + systemctl enable zfs-import-bpool.service 4.11 Optional (but recommended): Mount a tmpfs to /tmp @@ -664,94 +586,78 @@ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit. :: - cp /usr/share/systemd/tmp.mount /etc/systemd/system/ - systemctl enable tmp.mount + cp /usr/share/systemd/tmp.mount /etc/systemd/system/ + systemctl enable tmp.mount 4.12 Setup system groups: :: - addgroup --system lpadmin - addgroup --system sambashare + addgroup --system lpadmin + addgroup --system sambashare Step 5: GRUB Installation ------------------------- -5.1 Verify that the ZFS boot filesystem is recognized: +5.1 Verify that the ZFS boot filesystem is recognized:: -:: + grub-probe /boot - grub-probe /boot +5.2 Refresh the initrd files:: -5.2 Refresh the initrd files: - -:: - - update-initramfs -u -k all + update-initramfs -u -k all **Note:** When using LUKS, this will print "WARNING could not determine root device from /etc/fstab". This is because `cryptsetup does not support ZFS `__. 
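Before relying on the refreshed initrd, you can optionally confirm that it actually picked up the ZFS pieces. This is only a reassurance check (``lsinitramfs`` ships with initramfs-tools on Debian/Ubuntu; the path glob is an assumption about the standard layout):

```shell
# Optional: inspect the newest initrd for ZFS components.
if command -v lsinitramfs >/dev/null; then
    initrd=$(ls -1v /boot/initrd.img-* 2>/dev/null | tail -n 1)
    if [ -n "$initrd" ]; then
        lsinitramfs "$initrd" | grep zfs | head
    fi
fi
```

You should see the ZFS kernel module and the initramfs-tools ZFS scripts listed; if not, re-run ``update-initramfs -u -k all`` before proceeding.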
-5.3 Workaround GRUB's missing zpool-features support: +5.3 Workaround GRUB's missing zpool-features support:: -:: + vi /etc/default/grub + # Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu" - vi /etc/default/grub - Set: GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu" +5.4 Optional (but highly recommended): Make debugging GRUB easier:: -5.4 Optional (but highly recommended): Make debugging GRUB easier: - -:: - - vi /etc/default/grub - Comment out: GRUB_TIMEOUT_STYLE=hidden - Set: GRUB_TIMEOUT=5 - Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 - Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT - Uncomment: GRUB_TERMINAL=console - Save and quit. + vi /etc/default/grub + # Comment out: GRUB_TIMEOUT_STYLE=hidden + # Set: GRUB_TIMEOUT=5 + # Below GRUB_TIMEOUT, add: GRUB_RECORDFAIL_TIMEOUT=5 + # Remove quiet and splash from: GRUB_CMDLINE_LINUX_DEFAULT + # Uncomment: GRUB_TERMINAL=console + # Save and quit. Later, once the system has rebooted twice and you are sure everything is working, you can undo these changes, if desired. -5.5 Update the boot configuration: +5.5 Update the boot configuration:: -:: - - update-grub + update-grub **Note:** Ignore errors from ``osprober``, if present. 5.6 Install the boot loader -5.6a For legacy (BIOS) booting, install GRUB to the MBR: +5.6a For legacy (BIOS) booting, install GRUB to the MBR:: -:: - - grub-install $DISK + grub-install $DISK Note that you are installing GRUB to the whole disk, not a partition. If you are creating a mirror or raidz topology, repeat the ``grub-install`` command for each disk in the pool. -5.6b For UEFI booting, install GRUB: +5.6b For UEFI booting, install GRUB:: -:: - - grub-install --target=x86_64-efi --efi-directory=/boot/efi \ - --bootloader-id=ubuntu --recheck --no-floppy + grub-install --target=x86_64-efi --efi-directory=/boot/efi \ + --bootloader-id=ubuntu --recheck --no-floppy It is not necessary to specify the disk here. 
If you are creating a mirror or raidz topology, the additional disks will be handled later. -5.7 Verify that the ZFS module is installed: +5.7 Verify that the ZFS module is installed:: -:: - - ls /boot/grub/*/zfs.mod + ls /boot/grub/*/zfs.mod 5.8 Fix filesystem mount ordering @@ -776,121 +682,95 @@ with UEFI, we need to ensure it is mounted before its child filesystem point in adding ``x-systemd.requires=zfs-import.target`` to those filesystems. -For UEFI booting, unmount /boot/efi first: +For UEFI booting, unmount /boot/efi first:: -:: + umount /boot/efi - umount /boot/efi +Everything else applies to both BIOS and UEFI booting:: -Everything else applies to both BIOS and UEFI booting: + zfs set mountpoint=legacy bpool/BOOT/ubuntu + echo bpool/BOOT/ubuntu /boot zfs \ + nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab -:: + zfs set mountpoint=legacy rpool/var/log + echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab - zfs set mountpoint=legacy bpool/BOOT/ubuntu - echo bpool/BOOT/ubuntu /boot zfs \ - nodev,relatime,x-systemd.requires=zfs-import-bpool.service 0 0 >> /etc/fstab + zfs set mountpoint=legacy rpool/var/spool + echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab - zfs set mountpoint=legacy rpool/var/log - echo rpool/var/log /var/log zfs nodev,relatime 0 0 >> /etc/fstab +If you created a /var/tmp dataset:: - zfs set mountpoint=legacy rpool/var/spool - echo rpool/var/spool /var/spool zfs nodev,relatime 0 0 >> /etc/fstab + zfs set mountpoint=legacy rpool/var/tmp + echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab -If you created a /var/tmp dataset: +If you created a /tmp dataset:: -:: - - zfs set mountpoint=legacy rpool/var/tmp - echo rpool/var/tmp /var/tmp zfs nodev,relatime 0 0 >> /etc/fstab - -If you created a /tmp dataset: - -:: - - zfs set mountpoint=legacy rpool/tmp - echo rpool/tmp /tmp zfs nodev,relatime 0 0 >> /etc/fstab + zfs set mountpoint=legacy rpool/tmp + echo rpool/tmp /tmp 
zfs nodev,relatime 0 0 >> /etc/fstab Step 6: First Boot ------------------ -6.1 Snapshot the initial installation: +6.1 Snapshot the initial installation:: -:: - - zfs snapshot bpool/BOOT/ubuntu@install - zfs snapshot rpool/ROOT/ubuntu@install + zfs snapshot bpool/BOOT/ubuntu@install + zfs snapshot rpool/ROOT/ubuntu@install In the future, you will likely want to take snapshots before each upgrade, and remove old snapshots (including this one) at some point to save space. -6.2 Exit from the ``chroot`` environment back to the LiveCD environment: +6.2 Exit from the ``chroot`` environment back to the LiveCD environment:: -:: - - exit + exit 6.3 Run these commands in the LiveCD environment to unmount all -filesystems: +filesystems:: -:: + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a - mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - zpool export -a +6.4 Reboot:: -6.4 Reboot: - -:: - - reboot + reboot 6.5 Wait for the newly installed system to boot normally. Login as root. -6.6 Create a user account: +6.6 Create a user account:: -:: - - zfs create rpool/home/YOURUSERNAME - adduser YOURUSERNAME - cp -a /etc/skel/. /home/YOURUSERNAME - chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME + zfs create rpool/home/YOURUSERNAME + adduser YOURUSERNAME + cp -a /etc/skel/. /home/YOURUSERNAME + chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME 6.7 Add your user account to the default set of groups for an -administrator: +administrator:: -:: - - usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME + usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME 6.8 Mirror GRUB If you installed to multiple disks, install GRUB on the additional disks: -6.8a For legacy (BIOS) booting: +6.8a For legacy (BIOS) booting:: -:: + dpkg-reconfigure grub-pc + Hit enter until you get to the device selection screen. 
+ Select (using the space bar) all of the disks (not partitions) in your pool. - dpkg-reconfigure grub-pc - Hit enter until you get to the device selection screen. - Select (using the space bar) all of the disks (not partitions) in your pool. +6.8b For UEFI booting:: -6.8b UEFI + umount /boot/efi -:: +For the second and subsequent disks (increment ubuntu-2 to -3, etc.):: - umount /boot/efi + dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ + of=/dev/disk/by-id/scsi-SATA_disk2-part2 + efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ + -p 2 -L "ubuntu-2" -l '\EFI\ubuntu\shimx64.efi' -For the second and subsequent disks (increment ubuntu-2 to -3, etc.): - -:: - - dd if=/dev/disk/by-id/scsi-SATA_disk1-part2 \ - of=/dev/disk/by-id/scsi-SATA_disk2-part2 - efibootmgr -c -g -d /dev/disk/by-id/scsi-SATA_disk2 \ - -p 2 -L "ubuntu-2" -l '\EFI\ubuntu\shimx64.efi' - - mount /boot/efi + mount /boot/efi Step 7: (Optional) Configure Swap --------------------------------- @@ -900,14 +780,12 @@ zvol for swap can result in lockup, regardless of how much swap is still available. This issue is currently being investigated in: `https://github.com/zfsonlinux/zfs/issues/7734 `__ -7.1 Create a volume dataset (zvol) for use as a swap device: +7.1 Create a volume dataset (zvol) for use as a swap device:: -:: - - zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ - -o logbias=throughput -o sync=always \ - -o primarycache=metadata -o secondarycache=none \ - -o com.sun:auto-snapshot=false rpool/swap + zfs create -V 4G -b $(getconf PAGESIZE) -o compression=zle \ + -o logbias=throughput -o sync=always \ + -o primarycache=metadata -o secondarycache=none \ + -o com.sun:auto-snapshot=false rpool/swap You can adjust the size (the ``4G`` part) to your needs. @@ -925,9 +803,9 @@ files. Never use a short ``/dev/zdX`` device name. 
:: - mkswap -f /dev/zvol/rpool/swap - echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab - echo RESUME=none > /etc/initramfs-tools/conf.d/resume + mkswap -f /dev/zvol/rpool/swap + echo /dev/zvol/rpool/swap none swap discard 0 0 >> /etc/fstab + echo RESUME=none > /etc/initramfs-tools/conf.d/resume The ``RESUME=none`` is necessary to disable resuming from hibernation. This does not work, as the zvol is not present (because the pool has not @@ -935,48 +813,41 @@ yet been imported) at the time the resume script runs. If it is not disabled, the boot process hangs for 30 seconds waiting for the swap zvol to appear. -7.3 Enable the swap device: +7.3 Enable the swap device:: -:: - - swapon -av + swapon -av Step 8: Full Software Installation ---------------------------------- -8.1 Upgrade the minimal system: +8.1 Upgrade the minimal system:: -:: - - apt dist-upgrade --yes + apt dist-upgrade --yes 8.2 Install a regular set of software: Choose one of the following options: -8.2a Install a command-line environment only: +8.2a Install a command-line environment only:: -:: + apt install --yes ubuntu-standard - apt install --yes ubuntu-standard +8.2b Install a full GUI environment:: -8.2b Install a full GUI environment: - -:: - - apt install --yes ubuntu-desktop - vi /etc/gdm3/custom.conf - In the [daemon] section, add: InitialSetupEnable=false + apt install --yes ubuntu-desktop + vi /etc/gdm3/custom.conf + # In the [daemon] section, add: InitialSetupEnable=false **Hint**: If you are installing a full GUI environment, you will likely -want to manage your network with NetworkManager: +want to manage your network with NetworkManager:: -:: + vi /etc/netplan/01-netcfg.yaml - vi /etc/netplan/01-netcfg.yaml - network: - version: 2 - renderer: NetworkManager +.. 
code-block:: yaml + + network: + version: 2 + renderer: NetworkManager 8.3 Optional: Disable log compression: @@ -986,21 +857,17 @@ Also, if you are making snapshots of ``/var/log``, logrotate’s compression will actually waste space, as the uncompressed data will live on in the snapshot. You can edit the files in ``/etc/logrotate.d`` by hand to comment out ``compress``, or use this loop (copy-and-paste -highly recommended): +highly recommended):: -:: + for file in /etc/logrotate.d/* ; do + if grep -Eq "(^|[^#y])compress" "$file" ; then + sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" + fi + done - for file in /etc/logrotate.d/* ; do - if grep -Eq "(^|[^#y])compress" "$file" ; then - sed -i -r "s/(^|[^#y])(compress)/\1#\2/" "$file" - fi - done +8.4 Reboot:: -8.4 Reboot: - -:: - - reboot + reboot Step 9: Final Cleanup --------------------- @@ -1008,18 +875,14 @@ Step 9: Final Cleanup 9.1 Wait for the system to boot normally. Login using the account you created. Ensure the system (including networking) works normally. -9.2 Optional: Delete the snapshots of the initial installation: +9.2 Optional: Delete the snapshots of the initial installation:: -:: + sudo zfs destroy bpool/BOOT/ubuntu@install + sudo zfs destroy rpool/ROOT/ubuntu@install - sudo zfs destroy bpool/BOOT/ubuntu@install - sudo zfs destroy rpool/ROOT/ubuntu@install +9.3 Optional: Disable the root password:: -9.3 Optional: Disable the root password - -:: - - sudo usermod -p '*' root + sudo usermod -p '*' root 9.4 Optional: Re-enable the graphical boot process: @@ -1028,22 +891,20 @@ you are using LUKS, it makes the prompt look nicer. :: - sudo vi /etc/default/grub - Uncomment: GRUB_TIMEOUT_STYLE=hidden - Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT - Comment out: GRUB_TERMINAL=console - Save and quit. + sudo vi /etc/default/grub + # Uncomment: GRUB_TIMEOUT_STYLE=hidden + # Add quiet and splash to: GRUB_CMDLINE_LINUX_DEFAULT + # Comment out: GRUB_TERMINAL=console + # Save and quit. 
- sudo update-grub + sudo update-grub **Note:** Ignore errors from ``osprober``, if present. -9.5 Optional: For LUKS installs only, backup the LUKS header: +9.5 Optional: For LUKS installs only, backup the LUKS header:: -:: - - sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ - --header-backup-file luks1-header.dat + sudo cryptsetup luksHeaderBackup /dev/disk/by-id/scsi-SATA_disk1-part4 \ + --header-backup-file luks1-header.dat Store that backup somewhere safe (e.g. cloud storage). It is protected by your LUKS passphrase, but you may wish to use additional encryption. @@ -1060,44 +921,36 @@ Rescuing using a Live CD Go through `Step 1: Prepare The Install Environment <#step-1-prepare-the-install-environment>`__. -For LUKS, first unlock the disk(s): +For LUKS, first unlock the disk(s):: -:: + cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 + # Repeat for additional disks, if this is a mirror or raidz topology. - cryptsetup luksOpen /dev/disk/by-id/scsi-SATA_disk1-part4 luks1 - Repeat for additional disks, if this is a mirror or raidz topology. +Mount everything correctly:: -Mount everything correctly: + zpool export -a + zpool import -N -R /mnt rpool + zpool import -N -R /mnt bpool + zfs mount rpool/ROOT/ubuntu + zfs mount -a -:: +If needed, you can chroot into your installed environment:: - zpool export -a - zpool import -N -R /mnt rpool - zpool import -N -R /mnt bpool - zfs mount rpool/ROOT/ubuntu - zfs mount -a - -If needed, you can chroot into your installed environment: - -:: - - mount --rbind /dev /mnt/dev - mount --rbind /proc /mnt/proc - mount --rbind /sys /mnt/sys - chroot /mnt /bin/bash --login - mount /boot - mount -a + mount --rbind /dev /mnt/dev + mount --rbind /proc /mnt/proc + mount --rbind /sys /mnt/sys + chroot /mnt /bin/bash --login + mount /boot + mount -a Do whatever you need to do to fix your system. 
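As one example of a common repair at this point, regenerating the boot files from inside the chroot might look like the following. This is illustrative only: it assumes ``DISK`` is set as in Step 1 and a legacy (BIOS) system — for UEFI, use the ``grub-install`` invocation from step 5.6b instead:

```shell
# Illustrative rescue sequence (run inside the chroot): rebuild the
# initrd and GRUB config, then reinstall the boot loader to the disk.
if [ -n "${DISK:-}" ] && command -v grub-install >/dev/null; then
    update-initramfs -u -k all
    update-grub
    grub-install "$DISK"    # whole disk, not a partition (see 5.6a)
fi
```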
-When done, cleanup: +When done, cleanup:: -:: - - exit - mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} - zpool export -a - reboot + exit + mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {} + zpool export -a + reboot MPT2SAS ~~~~~~~ @@ -1131,9 +984,9 @@ this error message. VMware ~~~~~~ -- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere - configuration. Doing this ensures that ``/dev/disk`` aliases are - created in the guest. +- Set ``disk.EnableUUID = "TRUE"`` in the vmx file or vsphere + configuration. Doing this ensures that ``/dev/disk`` aliases are + created in the guest. QEMU/KVM/XEN ~~~~~~~~~~~~ @@ -1142,17 +995,20 @@ Set a unique serial number on each virtual disk using libvirt or qemu (e.g. ``-drive if=none,id=disk1,file=disk1.qcow2,serial=1234567890``). To be able to use UEFI in guests (instead of only BIOS booting), run -this on the host: +this on the host:: + + sudo apt install ovmf + sudo vi /etc/libvirt/qemu.conf + +Uncomment these lines: + +.. code-block:: text + + nvram = [ + "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", + "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" + ] :: - sudo apt install ovmf - - sudo vi /etc/libvirt/qemu.conf - Uncomment these lines: - nvram = [ - "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd", - "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" - ] - - sudo service libvirt-bin restart + sudo service libvirt-bin restart
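As a concrete sketch of the serial-number tip above (disk image names and serial values are placeholders), a small loop can build the ``-drive``/``-device`` arguments with a unique serial per disk; the final ``echo`` prints the resulting command rather than running it, so drop the ``echo`` to launch the guest:

```shell
# Sketch: give each virtual disk a unique serial so /dev/disk/by-id
# aliases appear in the guest. Image names are placeholders.
args=""
i=1
for img in disk1.qcow2 disk2.qcow2; do
    args="$args -drive if=none,id=disk$i,file=$img,serial=$((1234567889 + i))"
    args="$args -device virtio-blk-pci,drive=disk$i"
    i=$((i + 1))
done
echo qemu-system-x86_64 -m 4096 -enable-kvm $args
```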