Debian/Ubuntu: Add Ubuntu 20.04

I have backported some of the cosmetic and other minimal changes to
Ubuntu 18.04 and Debian Buster.  This keeps the deltas as small as
reasonably possible.

Signed-off-by: Richard Laager <rlaager@wiktel.com>
Author: Richard Laager <rlaager@wiktel.com>
Date:   2020-05-24 02:38:49 -05:00
Parent: 417c2a5379
Commit: 7c4861e911
5 changed files with 1280 additions and 190 deletions

Changed file: Debian Buster Root on ZFS

@@ -23,7 +23,7 @@ System Requirements
iso) <https://cdimage.debian.org/mirror/cdimage/release/current-live/amd64/iso-hybrid/>`__ iso) <https://cdimage.debian.org/mirror/cdimage/release/current-live/amd64/iso-hybrid/>`__
- `A 64-bit kernel is strongly - `A 64-bit kernel is strongly
encouraged. <https://github.com/zfsonlinux/zfs/wiki/FAQ#32-bit-vs-64-bit-systems>`__ encouraged. <https://github.com/zfsonlinux/zfs/wiki/FAQ#32-bit-vs-64-bit-systems>`__
- Installing on a drive which presents 4KiB logical sectors (a “4Kn” - Installing on a drive which presents 4 KiB logical sectors (a “4Kn”
drive) only works with UEFI booting. This is not unique to ZFS. `GRUB drive) only works with UEFI booting. This is not unique to ZFS. `GRUB
does not and will not work on 4Kn with legacy (BIOS) does not and will not work on 4Kn with legacy (BIOS)
booting. <http://savannah.gnu.org/bugs/?46700>`__ booting. <http://savannah.gnu.org/bugs/?46700>`__
@@ -39,7 +39,7 @@ Support
If you need help, reach out to the community using the :doc:`zfs-discuss If you need help, reach out to the community using the :doc:`zfs-discuss
mailing list <../../Project and Community/Mailing Lists>` or IRC at mailing list <../../Project and Community/Mailing Lists>` or IRC at
`#zfsonlinux <irc://irc.freenode.net/#zfsonlinux>`__. on `freenode `#zfsonlinux <irc://irc.freenode.net/#zfsonlinux>`__ on `freenode
<https://freenode.net/>`__. If you have a bug report or feature request <https://freenode.net/>`__. If you have a bug report or feature request
related to this HOWTO, please `file a new issue and mention @rlaager related to this HOWTO, please `file a new issue and mention @rlaager
<https://github.com/openzfs/openzfs-docs/issues/new?body=@rlaager,%20I%20have%20the%20following%20issue%20with%20the%20Debian%20Buster%20Root%20on%20ZFS%20HOWTO:>`__. <https://github.com/openzfs/openzfs-docs/issues/new?body=@rlaager,%20I%20have%20the%20following%20issue%20with%20the%20Debian%20Buster%20Root%20on%20ZFS%20HOWTO:>`__.
@@ -74,20 +74,13 @@ Contributing
Encryption Encryption
~~~~~~~~~~ ~~~~~~~~~~
This guide supports three different encryption options: unencrypted, This guide supports three different encryption options: unencrypted, ZFS
LUKS (full-disk encryption), and ZFS native encryption. With any option, native encryption, and LUKS. With any option, all ZFS features are fully
all ZFS features are fully available. available.
Unencrypted does not encrypt anything, of course. With no encryption Unencrypted does not encrypt anything, of course. With no encryption
happening, this option naturally has the best performance. happening, this option naturally has the best performance.
LUKS encrypts almost everything: the OS, swap, home directories, and
anything else. The only unencrypted data is the bootloader, kernel, and
initrd. The system cannot boot without the passphrase being entered at
the console. Performance is good, but LUKS sits underneath ZFS, so if
multiple disks (mirror or raidz topologies) are used, the data has to be
encrypted once per disk.
ZFS native encryption encrypts the data and most metadata in the root ZFS native encryption encrypts the data and most metadata in the root
pool. It does not encrypt dataset or snapshot names or properties. The pool. It does not encrypt dataset or snapshot names or properties. The
boot pool is not encrypted at all, but it only contains the bootloader, boot pool is not encrypted at all, but it only contains the bootloader,
@@ -97,6 +90,12 @@ without the passphrase being entered at the console. Performance is
good. As the encryption happens in ZFS, even if multiple disks (mirror good. As the encryption happens in ZFS, even if multiple disks (mirror
or raidz topologies) are used, the data only has to be encrypted once. or raidz topologies) are used, the data only has to be encrypted once.
LUKS encrypts almost everything. The only unencrypted data is the bootloader,
kernel, and initrd. The system cannot boot without the passphrase being
entered at the console. Performance is good, but LUKS sits underneath ZFS, so
if multiple disks (mirror or raidz topologies) are used, the data has to be
encrypted once per disk.
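Whichever option you choose, it can be verified once the pools exist; a
minimal sketch, assuming the ``rpool`` pool and ``${DISK}-part4`` partition
names used later in this guide::

    # Shows the cipher for ZFS native encryption, or "off" otherwise
    zfs get encryption,keyformat,keylocation rpool

    # For LUKS, inspect the container on the root partition
    cryptsetup luksDump ${DISK}-part4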
Step 1: Prepare The Install Environment Step 1: Prepare The Install Environment
--------------------------------------- ---------------------------------------
@@ -212,12 +211,7 @@ commands for all the disks which will be part of the pool.
-o feature@large_blocks=enabled \ -o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \ -o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \ -o feature@spacemap_histogram=enabled \
-o feature@userobj_accounting=enabled \
-o feature@zpool_checkpoint=enabled \ -o feature@zpool_checkpoint=enabled \
-o feature@spacemap_v2=enabled \
-o feature@project_quota=enabled \
-o feature@resilver_defer=enabled \
-o feature@allocation_classes=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
-O normalization=formD -O relatime=on -O xattr=sa \ -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt bpool ${DISK}-part3 -O mountpoint=/ -R /mnt bpool ${DISK}-part3
@@ -230,7 +224,7 @@ GRUB does not support all of the zpool features. See
This step creates a separate boot pool for ``/boot`` with the features This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use limited to only those that GRUB supports, allowing the root pool to use
any/all features. Note that GRUB opens the pool read-only, so all any/all features. Note that GRUB opens the pool read-only, so all
read-only compatible features are "supported" by GRUB. read-only compatible features are supported by GRUB.
**Hints:** **Hints:**
@@ -241,6 +235,24 @@ read-only compatible features are "supported" by GRUB.
- The pool name is arbitrary. If changed, the new name must be used - The pool name is arbitrary. If changed, the new name must be used
consistently. The ``bpool`` convention originated in this HOWTO. consistently. The ``bpool`` convention originated in this HOWTO.
**Feature Notes:**
- The ``allocation_classes`` feature should be safe to use. However, unless
one is using it (i.e., a ``special`` vdev), there is no point in enabling it.
It is extremely unlikely that someone would use this feature for a boot
pool. If one cares about speeding up the boot pool, it would make more sense
to put the whole pool on the faster disk rather than using it as a
``special`` vdev.
- The ``project_quota`` feature has been tested and is safe to use. This
feature is extremely unlikely to matter for the boot pool.
- The ``resilver_defer`` feature should be safe, but the boot pool is small enough that
it is unlikely to be necessary.
- The ``spacemap_v2`` feature has been tested and is safe to use. The boot
pool is small, so this does not matter in practice.
- As a read-only compatible feature, the ``userobj_accounting`` feature should
be compatible in theory, but in practice, GRUB can fail with an “invalid
dnode type” error. This feature does not matter for ``/boot`` anyway.
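To double-check which features ended up enabled on the boot pool, something
like the following can be used (a sketch, assuming the ``bpool`` name from
above)::

    # Only read-only compatible features should be enabled or active
    zpool get all bpool | grep feature@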
2.5 Create the root pool: 2.5 Create the root pool:
Choose one of the following options: Choose one of the following options:
@@ -252,7 +264,15 @@ Choose one of the following options:
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt rpool ${DISK}-part4 -O mountpoint=/ -R /mnt rpool ${DISK}-part4
2.5b LUKS:: 2.5b ZFS native encryption::
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
-O mountpoint=/ -R /mnt rpool ${DISK}-part4
2.5c LUKS::
apt install --yes cryptsetup apt install --yes cryptsetup
cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4 cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
@@ -262,22 +282,16 @@ Choose one of the following options:
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
2.5c ZFS native encryption:: **Notes:**
zpool create -o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
-O mountpoint=/ -R /mnt rpool ${DISK}-part4
- The use of ``ashift=12`` is recommended here because many drives - The use of ``ashift=12`` is recommended here because many drives
today have 4KiB (or larger) physical sectors, even though they today have 4 KiB (or larger) physical sectors, even though they
present 512B logical sectors. Also, a future replacement drive may present 512 B logical sectors. Also, a future replacement drive may
have 4KiB physical sectors (in which case ``ashift=12`` is desirable) have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
or 4KiB logical sectors (in which case ``ashift=12`` is required). or 4 KiB logical sectors (in which case ``ashift=12`` is required).
- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
do not want this, remove that option, but later add do not want this, remove that option, but later add
``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
for ``/var/log``, as `journald requires for ``/var/log``, as `journald requires
ACLs <https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__ ACLs <https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__
- Setting ``normalization=formD`` eliminates some corner cases relating - Setting ``normalization=formD`` eliminates some corner cases relating
@@ -287,11 +301,18 @@ Choose one of the following options:
of why requiring UTF-8 filenames may be a bad idea, see `The problems of why requiring UTF-8 filenames may be a bad idea, see `The problems
with enforced UTF-8 only with enforced UTF-8 only
filenames <http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__. filenames <http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__.
- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you want to
tune it (e.g. ``-o recordsize=1M``), see `these
<https://jrs-s.net/2019/04/03/on-zfs-recordsize/>`__ `various
<http://blog.programster.org/zfs-record-size>`__ `blog
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFileRecordsizeGrowth>`__
`posts
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSRecordsizeAndCompression>`__.
- Setting ``relatime=on`` is a middle ground between classic POSIX - Setting ``relatime=on`` is a middle ground between classic POSIX
``atime`` behavior (with its significant performance impact) and ``atime`` behavior (with its significant performance impact) and
``atime=off`` (which provides the best performance by completely ``atime=off`` (which provides the best performance by completely
disabling atime updates). Since Linux 2.6.30, ``relatime`` has been disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
the default for other filesystems. See `RedHat's the default for other filesystems. See `RedHat’s
documentation <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__ documentation <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__
for further information. for further information.
- Setting ``xattr=sa`` `vastly improves the performance of extended - Setting ``xattr=sa`` `vastly improves the performance of extended
@@ -314,16 +335,17 @@ Choose one of the following options:
- Make sure to include the ``-part4`` portion of the drive path. If you - Make sure to include the ``-part4`` portion of the drive path. If you
forget that, you are specifying the whole disk, which ZFS will then forget that, you are specifying the whole disk, which ZFS will then
re-partition, and you will lose the bootloader partition(s). re-partition, and you will lose the bootloader partition(s).
- ZFS native encryption defaults to ``aes-256-ccm``, but `the default has
changed upstream <https://github.com/openzfs/zfs/commit/31b160f0a6c673c8f926233af2ed6d5354808393>`__
to ``aes-256-gcm``. `AES-GCM seems to be generally preferred over AES-CCM
<https://crypto.stackexchange.com/questions/6842/how-to-choose-between-aes-ccm-and-aes-gcm-for-storage-volume-encryption>`__,
`is faster now
<https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997>`__, and
`will be even faster in the future
<https://github.com/zfsonlinux/zfs/pull/9749>`__.
- For LUKS, the key size chosen is 512 bits. However, XTS mode requires - For LUKS, the key size chosen is 512 bits. However, XTS mode requires
two keys, so the LUKS key is split in half. Thus, ``-s 512`` means two keys, so the LUKS key is split in half. Thus, ``-s 512`` means
AES-256. AES-256.
- ZFS native encryption uses ``aes-256-ccm`` by default. `AES-GCM seems
to be generally preferred over
AES-CCM <https://crypto.stackexchange.com/questions/6842/how-to-choose-between-aes-ccm-and-aes-gcm-for-storage-volume-encryption>`__,
`is faster
now <https://github.com/zfsonlinux/zfs/pull/9749#issuecomment-569132997>`__,
and `will be even faster in the
future <https://github.com/zfsonlinux/zfs/pull/9749>`__.
- Your passphrase will likely be the weakest link. Choose wisely. See - Your passphrase will likely be the weakest link. Choose wisely. See
`section 5 of the cryptsetup `section 5 of the cryptsetup
FAQ <https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects>`__ FAQ <https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions#5-security-aspects>`__
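As a quick sanity check after creating the root pool, the properties discussed
above can be read back; a sketch, assuming the ``rpool`` name::

    zfs get acltype,compression,dnodesize,normalization,relatime,xattr rpool
    zfs get encryption rpool   # "off" unless native encryption was chosen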
@@ -351,9 +373,10 @@ Step 3: System Installation
On Solaris systems, the root filesystem is cloned and the suffix is On Solaris systems, the root filesystem is cloned and the suffix is
incremented for major system changes through ``pkg image-update`` or incremented for major system changes through ``pkg image-update`` or
``beadm``. Similar functionality for APT is possible but currently ``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with the
unimplemented. Even without such a tool, it can still be used for ``zsys`` tool, though its dataset layout is more complicated. Even without
manually created clones. such a tool, the ``rpool/ROOT`` and ``bpool/BOOT`` containers can still be used
for manually created clones.
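Later on, a manual boot environment is just a snapshot plus a clone inside
those containers; a minimal sketch, assuming the ``rpool/ROOT/debian`` dataset
created in the next step and a hypothetical ``debian-alt`` clone name::

    zfs snapshot rpool/ROOT/debian@before-change
    # "debian-alt" is an arbitrary example name for the new boot environment
    zfs clone -o canmount=noauto -o mountpoint=/ \
        rpool/ROOT/debian@before-change rpool/ROOT/debian-alt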
3.2 Create filesystem datasets for the root and boot filesystems:: 3.2 Create filesystem datasets for the root and boot filesystems::
@@ -428,17 +451,15 @@ If this system will use NFS (locking)::
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for A tmpfs is recommended later, but if you want a separate dataset for
/tmp:: ``/tmp``::
zfs create -o com.sun:auto-snapshot=false rpool/tmp zfs create -o com.sun:auto-snapshot=false rpool/tmp
chmod 1777 /mnt/tmp chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user The primary goal of this dataset layout is to separate the OS from user data.
data. This allows the root filesystem to be rolled back without rolling This allows the root filesystem to be rolled back without rolling back user
back user data such as logs (in ``/var/log``). This will be especially data. The ``com.sun:auto-snapshot`` setting is used by some ZFS
important if/when a ``beadm`` or similar utility is integrated. The snapshot utilities to exclude transient data.
``com.sun.auto-snapshot`` setting is used by some ZFS snapshot utilities
to exclude transient data.
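It is an ordinary ZFS user property, so the current values can be reviewed at
any time; a sketch::

    zfs get -r -s local com.sun:auto-snapshot rpool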
If you do nothing extra, ``/tmp`` will be stored as part of the root If you do nothing extra, ``/tmp`` will be stored as part of the root
filesystem. Alternatively, you can create a separate dataset for filesystem. Alternatively, you can create a separate dataset for
@@ -459,8 +480,9 @@ of a working system into the new ZFS root.
Step 4: System Configuration Step 4: System Configuration
---------------------------- ----------------------------
4.1 Configure the hostname (change ``HOSTNAME`` to the desired 4.1 Configure the hostname:
hostname)::
Replace ``HOSTNAME`` with the desired hostname::
echo HOSTNAME > /mnt/etc/hostname echo HOSTNAME > /mnt/etc/hostname
vi /mnt/etc/hosts vi /mnt/etc/hosts
@@ -491,9 +513,7 @@ Adjust NAME below to match your interface name::
Customize this file if the system is not a DHCP client. Customize this file if the system is not a DHCP client.
4.3 Configure the package sources: 4.3 Configure the package sources::
::
vi /mnt/etc/apt/sources.list vi /mnt/etc/apt/sources.list
@@ -550,16 +570,15 @@ Even if you prefer a non-English system language, always ensure that
apt install --yes zfs-initramfs apt install --yes zfs-initramfs
echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf
4.7 For LUKS installs only, setup crypttab:: 4.7 For LUKS installs only, setup ``/etc/crypttab``::
apt install --yes cryptsetup apt install --yes cryptsetup
echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
luks,discard,initramfs > /etc/crypttab luks,discard,initramfs > /etc/crypttab
- The use of ``initramfs`` is a work-around for `cryptsetup does not The use of ``initramfs`` is a work-around for `cryptsetup does not support ZFS
support <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.\
ZFS <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
**Hint:** If you are creating a mirror or raidz topology, repeat the **Hint:** If you are creating a mirror or raidz topology, repeat the
``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk.
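For example, a second disk's entry could be appended the same way; a sketch,
where ``DISK2`` is a hypothetical variable pointing at the second disk::

    # DISK2 is assumed to be set like DISK, but for the second disk
    echo luks2 UUID=$(blkid -s UUID -o value ${DISK2}-part4) none \
        luks,discard,initramfs >> /etc/crypttab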
@@ -572,7 +591,7 @@ Choose one of the following options:
apt install --yes grub-pc apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s). Select (using the space bar) all of the disks (not partitions) in your pool.
4.8b Install GRUB for UEFI booting:: 4.8b Install GRUB for UEFI booting::
@@ -584,20 +603,27 @@ Install GRUB to the disk(s), not the partition(s).
mount /boot/efi mount /boot/efi
apt install --yes grub-efi-amd64 shim-signed apt install --yes grub-efi-amd64 shim-signed
- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which **Notes:**
present 4 KiB logical sectors (“4Kn” drives) to meet the minimum
cluster size (given the partition size of 512 MiB) for FAT32. It also
works fine on drives which present 512 B sectors.
**Note:** If you are creating a mirror or raidz topology, this step only - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present
installs GRUB on the first disk. The other disk(s) will be handled 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size
later. (given the partition size of 512 MiB) for FAT32. It also works fine on
drives which present 512 B sectors.
- For a mirror or raidz topology, this step only installs GRUB on the
first disk. The other disk(s) will be handled later.
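Whether ``-s 1`` matters for your hardware can be checked by reading the
drive's logical sector size; a sketch, assuming ``${DISK}`` is still set::

    # Prints 512 for conventional/512e drives, 4096 for 4Kn drives
    blockdev --getss ${DISK}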
4.9 Set a root password:: 4.9 (Optional): Remove os-prober::
dpkg --purge os-prober
This avoids error messages from ``update-grub``. ``os-prober`` is only necessary
in dual-boot configurations.
4.10 Set a root password::
passwd passwd
4.10 Enable importing bpool 4.11 Enable importing bpool
This ensures that ``bpool`` is always imported, regardless of whether This ensures that ``bpool`` is always imported, regardless of whether
``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not,
@@ -626,7 +652,7 @@ or whether ``zfs-import-scan.service`` is enabled.
systemctl enable zfs-import-bpool.service systemctl enable zfs-import-bpool.service
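After the first reboot, it is easy to confirm that the unit did its job; a
sketch::

    systemctl status zfs-import-bpool.service
    zpool status bpool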
4.11 Optional (but recommended): Mount a tmpfs to /tmp 4.12 Optional (but recommended): Mount a tmpfs to ``/tmp``
If you chose to create a ``/tmp`` dataset above, skip this step, as they If you chose to create a ``/tmp`` dataset above, skip this step, as they
are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
@@ -637,7 +663,7 @@ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
cp /usr/share/systemd/tmp.mount /etc/systemd/system/ cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount systemctl enable tmp.mount
4.12 Optional (but kindly requested): Install popcon 4.13 Optional (but kindly requested): Install popcon
The ``popularity-contest`` package reports the list of packages installed The ``popularity-contest`` package reports the list of packages installed
on your system. Showing that ZFS is popular may be helpful in terms of on your system. Showing that ZFS is popular may be helpful in terms of
@@ -658,12 +684,12 @@ Step 5: GRUB Installation
5.2 Refresh the initrd files:: 5.2 Refresh the initrd files::
update-initramfs -u -k all update-initramfs -c -k all
**Note:** When using LUKS, this will print "WARNING could not determine **Note:** When using LUKS, this will print “WARNING could not determine
root device from /etc/fstab". This is because `cryptsetup does not root device from /etc/fstab”. This is because `cryptsetup does not
support support ZFS
ZFS <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__. <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
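To confirm that ZFS support actually made it into the rebuilt initrd, the
archive contents can be listed; a sketch (the kernel version will vary)::

    lsinitramfs /boot/initrd.img-* | grep zfs | head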
5.3 Workaround GRUB's missing zpool-features support:: 5.3 Workaround GRUB's missing zpool-features support::
@@ -686,7 +712,7 @@ working, you can undo these changes, if desired.
**Note:** Ignore errors from ``osprober``, if present. **Note:** Ignore errors from ``osprober``, if present.
5.6 Install the boot loader 5.6 Install the boot loader:
5.6a For legacy (BIOS) booting, install GRUB to the MBR:: 5.6a For legacy (BIOS) booting, install GRUB to the MBR::
@@ -705,11 +731,7 @@ If you are creating a mirror or raidz topology, repeat the
It is not necessary to specify the disk here. If you are creating a It is not necessary to specify the disk here. If you are creating a
mirror or raidz topology, the additional disks will be handled later. mirror or raidz topology, the additional disks will be handled later.
5.7 Verify that the ZFS module is installed:: 5.7 Fix filesystem mount ordering:
ls /boot/grub/*/zfs.mod
5.8 Fix filesystem mount ordering
Until there is support for mounting ``/boot`` in the initramfs, we also Until there is support for mounting ``/boot`` in the initramfs, we also
need to mount that, because it was marked ``canmount=noauto``. Also, need to mount that, because it was marked ``canmount=noauto``. Also,
@@ -738,7 +760,7 @@ Everything else applies to both BIOS and UEFI booting::
ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d
zed -F & zed -F &
Verify that zed updated the cache by making sure this is not empty:: Verify that ``zed`` updated the cache by making sure this is not empty::
cat /etc/zfs/zfs-list.cache/rpool cat /etc/zfs/zfs-list.cache/rpool
@@ -746,12 +768,12 @@ If it is empty, force a cache update and check again::
zfs set canmount=noauto rpool/ROOT/debian zfs set canmount=noauto rpool/ROOT/debian
Stop zed:: Stop ``zed``::
fg fg
Press Ctrl-C. Press Ctrl-C.
Fix the paths to eliminate /mnt:: Fix the paths to eliminate ``/mnt``::
sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/rpool sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/rpool
@@ -781,32 +803,31 @@ filesystems::
reboot reboot
6.5 Wait for the newly installed system to boot normally. Login as root. Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:: 6.5 Create a user account:
zfs create rpool/home/YOURUSERNAME Replace ``username`` with your desired username::
adduser YOURUSERNAME
cp -a /etc/skel/. /home/YOURUSERNAME
chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.7 Add your user account to the default set of groups for an zfs create rpool/home/username
administrator:: adduser username
usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video YOURUSERNAME cp -a /etc/skel/. /home/username
chown -R username:username /home/username
usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
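A quick check that the new home dataset and group memberships are in place; a
sketch::

    zfs list rpool/home/username
    id username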
6.8 Mirror GRUB 6.6 Mirror GRUB
If you installed to multiple disks, install GRUB on the additional If you installed to multiple disks, install GRUB on the additional
disks: disks:
6.8a For legacy (BIOS) booting:: 6.6a For legacy (BIOS) booting::
dpkg-reconfigure grub-pc dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen. Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool. Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b For UEFI booting:: 6.6b For UEFI booting::
umount /boot/efi umount /boot/efi
@@ -992,8 +1013,8 @@ not hotplug pool members. See
`https://github.com/zfsonlinux/zfs/issues/330 <https://github.com/zfsonlinux/zfs/issues/330>`__. `https://github.com/zfsonlinux/zfs/issues/330 <https://github.com/zfsonlinux/zfs/issues/330>`__.
Most LSI cards are perfectly compatible with ZoL. If your card has this Most LSI cards are perfectly compatible with ZoL. If your card has this
glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in
/etc/default/zfs. The system will wait X seconds for all drives to ``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to
appear before importing the pool. appear before importing the pool.
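For example (a sketch; the five-second delay is an arbitrary starting point to
tune for your hardware)::

    # /etc/default/zfs
    ZFS_INITRD_PRE_MOUNTROOT_SLEEP='5'

Then rebuild the initrd (e.g. ``update-initramfs -c -k all``) so the setting
reaches the initramfs scripts.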
Areca Areca
@@ -1001,7 +1022,7 @@ Areca
Systems that require the ``arcsas`` blob driver should add it to the Systems that require the ``arcsas`` blob driver should add it to the
``/etc/initramfs-tools/modules`` file and run ``/etc/initramfs-tools/modules`` file and run
``update-initramfs -u -k all``. ``update-initramfs -c -k all``.
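Put together, that could look like the following (a sketch)::

    echo arcsas >> /etc/initramfs-tools/modules
    update-initramfs -c -k all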
Upgrade or downgrade the Areca driver if something like Upgrade or downgrade the Areca driver if something like
``RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20`` ``RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20``

Changed file: Debian Stretch Root on ZFS

@@ -43,7 +43,7 @@ Support
If you need help, reach out to the community using the :doc:`zfs-discuss If you need help, reach out to the community using the :doc:`zfs-discuss
mailing list <../../Project and Community/Mailing Lists>` or IRC at mailing list <../../Project and Community/Mailing Lists>` or IRC at
`#zfsonlinux <irc://irc.freenode.net/#zfsonlinux>`__. on `freenode `#zfsonlinux <irc://irc.freenode.net/#zfsonlinux>`__ on `freenode
<https://freenode.net/>`__. If you have a bug report or feature request <https://freenode.net/>`__. If you have a bug report or feature request
related to this HOWTO, please `file a new issue and mention @rlaager related to this HOWTO, please `file a new issue and mention @rlaager
<https://github.com/openzfs/openzfs-docs/issues/new?body=@rlaager,%20I%20have%20the%20following%20issue%20with%20the%20Debian%20Stretch%20Root%20on%20ZFS%20HOWTO:>`__. <https://github.com/openzfs/openzfs-docs/issues/new?body=@rlaager,%20I%20have%20the%20following%20issue%20with%20the%20Debian%20Stretch%20Root%20on%20ZFS%20HOWTO:>`__.

Changed file: Ubuntu 16.04 Root on ZFS

@@ -10,7 +10,7 @@ Overview
Newer release available Newer release available
~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~
- See :doc:`Ubuntu 18.04 Root on ZFS <./Ubuntu 18.04 Root on ZFS>` for new - See :doc:`Ubuntu 20.04 Root on ZFS <./Ubuntu 20.04 Root on ZFS>` for new
installs. installs.
Caution Caution
@@ -41,12 +41,12 @@ deduplication is a permanent change that cannot be easily reverted.
Support Support
~~~~~~~ ~~~~~~~
If you need help, reach out to the community using the `zfs-discuss If you need help, reach out to the community using the :doc:`zfs-discuss
mailing list <https://github.com/zfsonlinux/zfs/wiki/Mailing-Lists>`__ mailing list <../../Project and Community/Mailing Lists>` or IRC at
or IRC at #zfsonlinux on `freenode <https://freenode.net/>`__. If you `#zfsonlinux <irc://irc.freenode.net/#zfsonlinux>`__ on `freenode
have a bug report or feature request related to this HOWTO, please `file <https://freenode.net/>`__. If you have a bug report or feature request
a new issue <https://github.com/zfsonlinux/zfs/issues/new>`__ and related to this HOWTO, please `file a new issue and mention @rlaager
mention @rlaager. <https://github.com/openzfs/openzfs-docs/issues/new?body=@rlaager,%20I%20have%20the%20following%20issue%20with%20the%20Ubuntu%2016.04%20Root%20on%20ZFS%20HOWTO:>`__.
Contributing Contributing
~~~~~~~~~~~~ ~~~~~~~~~~~~

Changed file: Ubuntu 18.04 Root on ZFS

@@ -9,6 +9,12 @@ Ubuntu 18.04 Root on ZFS
Overview Overview
-------- --------
Newer release available
~~~~~~~~~~~~~~~~~~~~~~~
- See :doc:`Ubuntu 20.04 Root on ZFS <./Ubuntu 20.04 Root on ZFS>` for new
installs.
Caution Caution
~~~~~~~ ~~~~~~~
@@ -22,7 +28,7 @@ System Requirements
- `Ubuntu 18.04.3 ("Bionic") Desktop - `Ubuntu 18.04.3 ("Bionic") Desktop
CD <http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso>`__ CD <http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso>`__
(*not* any server images) (*not* any server images)
- Installing on a drive which presents 4KiB logical sectors (a “4Kn” - Installing on a drive which presents 4 KiB logical sectors (a “4Kn”
drive) only works with UEFI booting. This is not unique to ZFS. `GRUB drive) only works with UEFI booting. This is not unique to ZFS. `GRUB
does not and will not work on 4Kn with legacy (BIOS) does not and will not work on 4Kn with legacy (BIOS)
booting. <http://savannah.gnu.org/bugs/?46700>`__ booting. <http://savannah.gnu.org/bugs/?46700>`__
@@ -38,7 +44,7 @@ Support
If you need help, reach out to the community using the :doc:`zfs-discuss If you need help, reach out to the community using the :doc:`zfs-discuss
mailing list <../../Project and Community/Mailing Lists>` or IRC at mailing list <../../Project and Community/Mailing Lists>` or IRC at
`#zfsonlinux <irc://irc.freenode.net/#zfsonlinux>`__. on `freenode `#zfsonlinux <irc://irc.freenode.net/#zfsonlinux>`__ on `freenode
<https://freenode.net/>`__. If you have a bug report or feature request <https://freenode.net/>`__. If you have a bug report or feature request
related to this HOWTO, please `file a new issue and mention @rlaager related to this HOWTO, please `file a new issue and mention @rlaager
<https://github.com/openzfs/openzfs-docs/issues/new?body=@rlaager,%20I%20have%20the%20following%20issue%20with%20the%20Ubuntu%2018.04%20Root%20on%20ZFS%20HOWTO:>`__. <https://github.com/openzfs/openzfs-docs/issues/new?body=@rlaager,%20I%20have%20the%20following%20issue%20with%20the%20Ubuntu%2018.04%20Root%20on%20ZFS%20HOWTO:>`__.
@@ -74,17 +80,16 @@ Encryption
~~~~~~~~~~ ~~~~~~~~~~
This guide supports two different encryption options: unencrypted and This guide supports two different encryption options: unencrypted and
LUKS (full-disk encryption). ZFS native encryption has not yet been LUKS (full-disk encryption). With either option, all ZFS features are fully
released. With either option, all ZFS features are fully available. available. ZFS native encryption is not available in Ubuntu 18.04.
Unencrypted does not encrypt anything, of course. With no encryption Unencrypted does not encrypt anything, of course. With no encryption
happening, this option naturally has the best performance. happening, this option naturally has the best performance.
LUKS encrypts almost everything: the OS, swap, home directories, and LUKS encrypts almost everything. The only unencrypted data is the bootloader,
anything else. The only unencrypted data is the bootloader, kernel, and kernel, and initrd. The system cannot boot without the passphrase being
initrd. The system cannot boot without the passphrase being entered at entered at the console. Performance is good, but LUKS sits underneath ZFS, so
the console. Performance is good, but LUKS sits underneath ZFS, so if if multiple disks (mirror or raidz topologies) are used, the data has to be
multiple disks (mirror or raidz topologies) are used, the data has to be
encrypted once per disk. encrypted once per disk.
Step 1: Prepare The Install Environment Step 1: Prepare The Install Environment
@@ -192,7 +197,6 @@ commands for all the disks which will be part of the pool.
-o feature@large_blocks=enabled \ -o feature@large_blocks=enabled \
-o feature@lz4_compress=enabled \ -o feature@lz4_compress=enabled \
-o feature@spacemap_histogram=enabled \ -o feature@spacemap_histogram=enabled \
-o feature@userobj_accounting=enabled \
-O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \ -O acltype=posixacl -O canmount=off -O compression=lz4 -O devices=off \
-O normalization=formD -O relatime=on -O xattr=sa \ -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt bpool ${DISK}-part3 -O mountpoint=/ -R /mnt bpool ${DISK}-part3
@@ -205,7 +209,7 @@ GRUB does not support all of the zpool features. See
This step creates a separate boot pool for ``/boot`` with the features This step creates a separate boot pool for ``/boot`` with the features
limited to only those that GRUB supports, allowing the root pool to use limited to only those that GRUB supports, allowing the root pool to use
any/all features. Note that GRUB opens the pool read-only, so all any/all features. Note that GRUB opens the pool read-only, so all
read-only compatible features are "supported" by GRUB. read-only compatible features are supported by GRUB.
**Hints:** **Hints:**
@@ -216,6 +220,12 @@ read-only compatible features are "supported" by GRUB.
- The pool name is arbitrary. If changed, the new name must be used - The pool name is arbitrary. If changed, the new name must be used
consistently. The ``bpool`` convention originated in this HOWTO. consistently. The ``bpool`` convention originated in this HOWTO.
**Feature Notes:**
- As a read-only compatible feature, the ``userobj_accounting`` feature should
be compatible in theory, but in practice, GRUB can fail with an “invalid
dnode type” error. This feature does not matter for ``/boot`` anyway.
2.5 Create the root pool: 2.5 Create the root pool:
Choose one of the following options: Choose one of the following options:
@@ -236,14 +246,16 @@ Choose one of the following options:
-O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \ -O dnodesize=auto -O normalization=formD -O relatime=on -O xattr=sa \
-O mountpoint=/ -R /mnt rpool /dev/mapper/luks1 -O mountpoint=/ -R /mnt rpool /dev/mapper/luks1
**Notes:**
- The use of ``ashift=12`` is recommended here because many drives - The use of ``ashift=12`` is recommended here because many drives
today have 4KiB (or larger) physical sectors, even though they today have 4 KiB (or larger) physical sectors, even though they
present 512B logical sectors. Also, a future replacement drive may present 512 B logical sectors. Also, a future replacement drive may
have 4KiB physical sectors (in which case ``ashift=12`` is desirable) have 4 KiB physical sectors (in which case ``ashift=12`` is desirable)
or 4KiB logical sectors (in which case ``ashift=12`` is required). or 4 KiB logical sectors (in which case ``ashift=12`` is required).
- Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you - Setting ``-O acltype=posixacl`` enables POSIX ACLs globally. If you
do not want this, remove that option, but later add do not want this, remove that option, but later add
``-o acltype=posixacl`` (note: lowercase "o") to the ``zfs create`` ``-o acltype=posixacl`` (note: lowercase “o”) to the ``zfs create``
for ``/var/log``, as `journald requires for ``/var/log``, as `journald requires
ACLs <https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__ ACLs <https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported>`__
- Setting ``normalization=formD`` eliminates some corner cases relating - Setting ``normalization=formD`` eliminates some corner cases relating
@@ -253,11 +265,18 @@ Choose one of the following options:
of why requiring UTF-8 filenames may be a bad idea, see `The problems of why requiring UTF-8 filenames may be a bad idea, see `The problems
with enforced UTF-8 only with enforced UTF-8 only
filenames <http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__. filenames <http://utcc.utoronto.ca/~cks/space/blog/linux/ForcedUTF8Filenames>`__.
- ``recordsize`` is unset (leaving it at the default of 128 KiB). If you want to
tune it (e.g. ``-o recordsize=1M``), see `these
<https://jrs-s.net/2019/04/03/on-zfs-recordsize/>`__ `various
<http://blog.programster.org/zfs-record-size>`__ `blog
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFileRecordsizeGrowth>`__
`posts
<https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSRecordsizeAndCompression>`__.
- Setting ``relatime=on`` is a middle ground between classic POSIX - Setting ``relatime=on`` is a middle ground between classic POSIX
``atime`` behavior (with its significant performance impact) and ``atime`` behavior (with its significant performance impact) and
``atime=off`` (which provides the best performance by completely ``atime=off`` (which provides the best performance by completely
disabling atime updates). Since Linux 2.6.30, ``relatime`` has been disabling atime updates). Since Linux 2.6.30, ``relatime`` has been
the default for other filesystems. See `RedHat's the default for other filesystems. See `RedHat’s
documentation <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__ documentation <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/power_management_guide/relatime>`__
for further information. for further information.
- Setting ``xattr=sa`` `vastly improves the performance of extended - Setting ``xattr=sa`` `vastly improves the performance of extended
@@ -310,9 +329,10 @@ Step 3: System Installation
On Solaris systems, the root filesystem is cloned and the suffix is On Solaris systems, the root filesystem is cloned and the suffix is
incremented for major system changes through ``pkg image-update`` or incremented for major system changes through ``pkg image-update`` or
``beadm``. Similar functionality for APT is possible but currently ``beadm``. Similar functionality has been implemented in Ubuntu 20.04 with the
unimplemented. Even without such a tool, it can still be used for ``zsys`` tool, though its dataset layout is more complicated. Even without
manually created clones. such a tool, the ``rpool/ROOT`` and ``bpool/BOOT`` containers can still be used
for manually created clones.
3.2 Create filesystem datasets for the root and boot filesystems:: 3.2 Create filesystem datasets for the root and boot filesystems::
@@ -387,17 +407,15 @@ If this system will use NFS (locking)::
zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs zfs create -o com.sun:auto-snapshot=false rpool/var/lib/nfs
A tmpfs is recommended later, but if you want a separate dataset for A tmpfs is recommended later, but if you want a separate dataset for
/tmp:: ``/tmp``::
zfs create -o com.sun:auto-snapshot=false rpool/tmp zfs create -o com.sun:auto-snapshot=false rpool/tmp
chmod 1777 /mnt/tmp chmod 1777 /mnt/tmp
The primary goal of this dataset layout is to separate the OS from user The primary goal of this dataset layout is to separate the OS from user data.
data. This allows the root filesystem to be rolled back without rolling This allows the root filesystem to be rolled back without rolling back user
back user data such as logs (in ``/var/log``). This will be especially data. The ``com.sun:auto-snapshot`` setting is used by some ZFS
important if/when a ``beadm`` or similar utility is integrated. The snapshot utilities to exclude transient data.
``com.sun.auto-snapshot`` setting is used by some ZFS snapshot utilities
to exclude transient data.
If you do nothing extra, ``/tmp`` will be stored as part of the root If you do nothing extra, ``/tmp`` will be stored as part of the root
filesystem. Alternatively, you can create a separate dataset for filesystem. Alternatively, you can create a separate dataset for
@@ -418,8 +436,9 @@ of a working system into the new ZFS root.
Step 4: System Configuration Step 4: System Configuration
---------------------------- ----------------------------
4.1 Configure the hostname (change ``HOSTNAME`` to the desired 4.1 Configure the hostname:
hostname)::
Replace ``HOSTNAME`` with the desired hostname::
echo HOSTNAME > /mnt/etc/hostname echo HOSTNAME > /mnt/etc/hostname
vi /mnt/etc/hosts vi /mnt/etc/hosts
@@ -459,14 +478,10 @@ Customize this file if the system is not a DHCP client.
.. code-block:: sourceslist .. code-block:: sourceslist
deb http://archive.ubuntu.com/ubuntu bionic main universe deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu bionic main universe deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main universe deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
deb-src http://security.ubuntu.com/ubuntu bionic-security main universe
deb http://archive.ubuntu.com/ubuntu bionic-updates main universe
deb-src http://archive.ubuntu.com/ubuntu bionic-updates main universe
4.4 Bind the virtual filesystems from the LiveCD environment to the new 4.4 Bind the virtual filesystems from the LiveCD environment to the new
system and ``chroot`` into it:: system and ``chroot`` into it::
@@ -490,7 +505,7 @@ Even if you prefer a non-English system language, always ensure that
dpkg-reconfigure tzdata dpkg-reconfigure tzdata
If you prefer nano over vi, install it:: If you prefer ``nano`` over ``vi``, install it::
apt install --yes nano apt install --yes nano
@@ -502,16 +517,15 @@ If you prefer nano over vi, install it::
**Hint:** For the HWE kernel, install ``linux-image-generic-hwe-18.04`` **Hint:** For the HWE kernel, install ``linux-image-generic-hwe-18.04``
instead of ``linux-image-generic``. instead of ``linux-image-generic``.
4.7 For LUKS installs only, setup crypttab:: 4.7 For LUKS installs only, setup ``/etc/crypttab``::
apt install --yes cryptsetup apt install --yes cryptsetup
echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \ echo luks1 UUID=$(blkid -s UUID -o value ${DISK}-part4) none \
luks,discard,initramfs > /etc/crypttab luks,discard,initramfs > /etc/crypttab
- The use of ``initramfs`` is a work-around for `cryptsetup does not The use of ``initramfs`` is a work-around for `cryptsetup does not support ZFS
support <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
ZFS <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
**Hint:** If you are creating a mirror or raidz topology, repeat the **Hint:** If you are creating a mirror or raidz topology, repeat the
``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk. ``/etc/crypttab`` entries for ``luks2``, etc. adjusting for each disk.
@@ -524,7 +538,7 @@ Choose one of the following options:
apt install --yes grub-pc apt install --yes grub-pc
Install GRUB to the disk(s), not the partition(s). Select (using the space bar) all of the disks (not partitions) in your pool.
4.8b Install GRUB for UEFI booting:: 4.8b Install GRUB for UEFI booting::
@@ -536,20 +550,27 @@ Install GRUB to the disk(s), not the partition(s).
mount /boot/efi mount /boot/efi
apt install --yes grub-efi-amd64-signed shim-signed apt install --yes grub-efi-amd64-signed shim-signed
- The ``-s 1`` for ``mkdosfs`` is only necessary for drives which **Notes:**
present 4 KiB logical sectors (“4Kn” drives) to meet the minimum
cluster size (given the partition size of 512 MiB) for FAT32. It also
works fine on drives which present 512 B sectors.
**Note:** If you are creating a mirror or raidz topology, this step only - The ``-s 1`` for ``mkdosfs`` is only necessary for drives which present
installs GRUB on the first disk. The other disk(s) will be handled 4 KiB logical sectors (“4Kn” drives) to meet the minimum cluster size
later. (given the partition size of 512 MiB) for FAT32. It also works fine on
drives which present 512 B sectors.
- For a mirror or raidz topology, this step only installs GRUB on the
first disk. The other disk(s) will be handled later.
4.9 Set a root password:: 4.9 (Optional): Remove os-prober::
dpkg --purge os-prober
This avoids error messages from ``update-grub``. ``os-prober`` is only necessary
in dual-boot configurations.
4.10 Set a root password::
passwd passwd
4.10 Enable importing bpool 4.11 Enable importing bpool
This ensures that ``bpool`` is always imported, regardless of whether This ensures that ``bpool`` is always imported, regardless of whether
``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not, ``/etc/zfs/zpool.cache`` exists, whether it is in the cachefile or not,
@@ -578,7 +599,7 @@ or whether ``zfs-import-scan.service`` is enabled.
systemctl enable zfs-import-bpool.service systemctl enable zfs-import-bpool.service
4.11 Optional (but recommended): Mount a tmpfs to /tmp 4.12 Optional (but recommended): Mount a tmpfs to ``/tmp``
If you chose to create a ``/tmp`` dataset above, skip this step, as they If you chose to create a ``/tmp`` dataset above, skip this step, as they
are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a are mutually exclusive choices. Otherwise, you can put ``/tmp`` on a
@@ -589,9 +610,7 @@ tmpfs (RAM filesystem) by enabling the ``tmp.mount`` unit.
cp /usr/share/systemd/tmp.mount /etc/systemd/system/ cp /usr/share/systemd/tmp.mount /etc/systemd/system/
systemctl enable tmp.mount systemctl enable tmp.mount
4.12 Setup system groups: 4.13 Setup system groups::
::
addgroup --system lpadmin addgroup --system lpadmin
addgroup --system sambashare addgroup --system sambashare
@@ -605,12 +624,12 @@ Step 5: GRUB Installation
5.2 Refresh the initrd files:: 5.2 Refresh the initrd files::
update-initramfs -u -k all update-initramfs -c -k all
**Note:** When using LUKS, this will print "WARNING could not determine **Note:** When using LUKS, this will print “WARNING could not determine
root device from /etc/fstab". This is because `cryptsetup does not root device from /etc/fstab”. This is because `cryptsetup does not
support support ZFS
ZFS <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__. <https://bugs.launchpad.net/ubuntu/+source/cryptsetup/+bug/1612906>`__.
5.3 Workaround GRUB's missing zpool-features support:: 5.3 Workaround GRUB's missing zpool-features support::
@@ -636,7 +655,7 @@ working, you can undo these changes, if desired.
**Note:** Ignore errors from ``osprober``, if present. **Note:** Ignore errors from ``osprober``, if present.
5.6 Install the boot loader 5.6 Install the boot loader:
5.6a For legacy (BIOS) booting, install GRUB to the MBR:: 5.6a For legacy (BIOS) booting, install GRUB to the MBR::
@@ -655,11 +674,7 @@ If you are creating a mirror or raidz topology, repeat the
It is not necessary to specify the disk here. If you are creating a It is not necessary to specify the disk here. If you are creating a
mirror or raidz topology, the additional disks will be handled later. mirror or raidz topology, the additional disks will be handled later.
5.7 Verify that the ZFS module is installed:: 5.7 Fix filesystem mount ordering:
ls /boot/grub/*/zfs.mod
5.8 Fix filesystem mount ordering
`Until ZFS gains a systemd mount `Until ZFS gains a systemd mount
generator <https://github.com/zfsonlinux/zfs/issues/4898>`__, there are generator <https://github.com/zfsonlinux/zfs/issues/4898>`__, there are
@@ -734,32 +749,31 @@ filesystems::
reboot reboot
6.5 Wait for the newly installed system to boot normally. Login as root. Wait for the newly installed system to boot normally. Login as root.
6.6 Create a user account:: 6.5 Create a user account:
zfs create rpool/home/YOURUSERNAME Replace ``username`` with your desired username::
adduser YOURUSERNAME
cp -a /etc/skel/. /home/YOURUSERNAME
chown -R YOURUSERNAME:YOURUSERNAME /home/YOURUSERNAME
6.7 Add your user account to the default set of groups for an zfs create rpool/home/username
administrator:: adduser username
usermod -a -G adm,cdrom,dip,lpadmin,plugdev,sambashare,sudo YOURUSERNAME cp -a /etc/skel/. /home/username
chown -R username:username /home/username
usermod -a -G audio,cdrom,dip,floppy,netdev,plugdev,sudo,video username
6.8 Mirror GRUB 6.6 Mirror GRUB
If you installed to multiple disks, install GRUB on the additional If you installed to multiple disks, install GRUB on the additional
disks: disks:
6.8a For legacy (BIOS) booting:: 6.6a For legacy (BIOS) booting::
dpkg-reconfigure grub-pc dpkg-reconfigure grub-pc
Hit enter until you get to the device selection screen. Hit enter until you get to the device selection screen.
Select (using the space bar) all of the disks (not partitions) in your pool. Select (using the space bar) all of the disks (not partitions) in your pool.
6.8b For UEFI booting:: 6.6b For UEFI booting::
umount /boot/efi umount /boot/efi
@@ -841,7 +855,8 @@ Choose one of the following options:
**Hint**: If you are installing a full GUI environment, you will likely **Hint**: If you are installing a full GUI environment, you will likely
want to manage your network with NetworkManager:: want to manage your network with NetworkManager::
vi /etc/netplan/01-netcfg.yaml rm /mnt/etc/netplan/01-netcfg.yaml
vi /etc/netplan/01-network-manager-all.yaml
.. code-block:: yaml .. code-block:: yaml
@@ -965,8 +980,8 @@ not hotplug pool members. See
`https://github.com/zfsonlinux/zfs/issues/330 <https://github.com/zfsonlinux/zfs/issues/330>`__. `https://github.com/zfsonlinux/zfs/issues/330 <https://github.com/zfsonlinux/zfs/issues/330>`__.
Most LSI cards are perfectly compatible with ZoL. If your card has this Most LSI cards are perfectly compatible with ZoL. If your card has this
glitch, try setting ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X in glitch, try setting ``ZFS_INITRD_PRE_MOUNTROOT_SLEEP=X`` in
/etc/default/zfs. The system will wait X seconds for all drives to ``/etc/default/zfs``. The system will wait ``X`` seconds for all drives to
appear before importing the pool. appear before importing the pool.
Areca Areca
@@ -974,7 +989,7 @@ Areca
Systems that require the ``arcsas`` blob driver should add it to the Systems that require the ``arcsas`` blob driver should add it to the
``/etc/initramfs-tools/modules`` file and run ``/etc/initramfs-tools/modules`` file and run
``update-initramfs -u -k all``. ``update-initramfs -c -k all``.
Upgrade or downgrade the Areca driver if something like Upgrade or downgrade the Areca driver if something like
``RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20`` ``RIP: 0010:[<ffffffff8101b316>] [<ffffffff8101b316>] native_read_tsc+0x6/0x20``

New file: Ubuntu 20.04 Root on ZFS (diff suppressed because it is too large)