The documented configuration changes for NixOS appear to
be intended to make ZFS available at boot. However, these
changes are also required just to make the zfs.ko module
available to modprobe, even for users who do not need ZFS
at boot time. Also, the kernel module does not appear until
after a reboot, regardless of running 'nixos-rebuild switch'.
(A more knowledgeable NixOS user might know how to modprobe
without a reboot, but I don't.)
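For reference, a minimal sketch of the kind of configuration in question, assuming the standard NixOS options (boot.supportedFilesystems and networking.hostId; the hostId value below is a placeholder):

```nix
{ config, pkgs, ... }:
{
  # Builds zfs.ko into the system and enables ZFS support at boot:
  boot.supportedFilesystems = [ "zfs" ];
  # The ZFS kernel module refuses to load without a host ID;
  # the value here is a placeholder.
  networking.hostId = "8425e349";
}
```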
We should add these two module parameters.
They can be used for setting up individual SIMD module
parameters.
Signed-off-by: Tino Reichardt <milky-zfs@mcmilk.de>
NixOS has enjoyed popularity among ZFS users thanks to
its declarative configuration and native ZFS support.
However, the installation guide used hardcoded disk
names in configuration files, which is unnecessary and
a source of difficulties in multi-disk setups.
The guide is now rewritten to leverage expressions in
the Nix language to manage multi-disk setups.
It also adds instructions on replacing a failed disk.
Closes #385.
Signed-off-by: Maurice Zhou <ja@apvc.uk>
Previously we used a bind mount from /boot/efis/*-part1
to /boot/efi to facilitate bootloader configuration.
Recent reports indicate that this bind mount prevents
the system from booting. This pull request removes the
bind mount.
Closes #383.
Signed-off-by: Maurice Zhou <ja@apvc.uk>
By my understanding, this is unnecessary due to the unmounting at the end of the instructions. It has led to unstable and error-spewing ZFS setups, as discussed in this issue: https://github.com/NixOS/nixpkgs/issues/214871
Not having kernel-abi-stablelists causes, e.g.:

    error: Failed build dependencies:
        kernel-abi-whitelists is needed by zfs-kmod-2.1.5-1.el8.x86_64

when following the section on building kABI-tracking kmods.
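A hedged sketch of the fix on an EL8 build host (assuming a dnf-based system; the package name follows the error above):

```shell
# kernel-abi-stablelists provides the former kernel-abi-whitelists,
# satisfying zfs-kmod's build dependency:
dnf install -y kernel-abi-stablelists
```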
The advice to not cache data in ARC is simply wrong: PG does not work that way; shared buffers serve a more special purpose. PG *expects* the majority of caching to be done by the OS file cache and is designed around that assumption.
LZ4 compression changes things with regard to record size. Even with fast NVMe, using a record size that will fit multiple compressed blocks increases performance by reducing the total data written, despite the partial record writes. (I have benchmarked this extensively.)
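A hedged illustration of the record-size point (the dataset name and the 32K value below are placeholders for discussion, not a recommendation from this message):

```shell
# PostgreSQL stores data in 8 KiB pages. With LZ4 compression, a record
# larger than one page can hold several compressed pages, reducing the
# total bytes written. Hypothetical tuning (dataset name is a placeholder):
#   zfs set compression=lz4 recordsize=32K tank/pgdata
# Uncompressed 8 KiB pages that fit in one 32 KiB record:
echo $((32 * 1024 / 8192))   # prints 4
```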
When the mail dataset is created at /var/mail, the filesystem package will fail to install (it does not expect /var/mail to be a directory; see https://archive.virtualmin.com/node/23096), so create it at /var/spool/mail instead, as is usual.
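A sketch of the dataset creation described above (the pool and dataset names are placeholders):

```shell
# Mount the mail dataset at /var/spool/mail, the conventional location,
# instead of /var/mail:
zfs create -o mountpoint=/var/spool/mail rpool/mail
```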
Install the "python" package to get the friendly bin/python wrapper,
and install dependencies using origins rather than package names in
order to install the default flavor.
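On FreeBSD this corresponds to something like the following (the origin listed is an illustrative example, not the actual dependency list):

```shell
# "python" provides the friendly bin/python wrapper:
pkg install -y python
# Installing by origin (category/name) rather than by package name
# selects the port's default flavor; this origin is an example only:
pkg install -y devel/py-setuptools
```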
I am not sure under what circumstances this occurs, or whether it also
affects Debian Buster or Ubuntu.
Closes #349
Co-authored-by: Immanuel Albrecht <immanuel.albrecht@dlh.de>
Signed-off-by: Richard Laager <rlaager@wiktel.com>