zfs and portage’s var directories
While /var is usually for non-crucial content (caches[3], PID files, etc.), portage has a different idea [1]:
/var/db/pkg: where Portage stores the state of your system
/var/lib/portage: the versions of the applications you have explicitly installed (the world file)
These directories store the current state of the tree; there is no way to recreate them if they are deleted.[2]
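For example, on a typical Gentoo install you can inspect both:

cat /var/lib/portage/world   # the packages you explicitly asked for
ls /var/db/pkg/sys-apps      # one directory per installed package version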
So if you plan to use ZFS with separate / and /var, take a snapshot of /, install some packages and then roll back the snapshot because you changed your mind, your / and /var will be out of sync! /var/db/pkg and /var/lib/portage have to be on /.
# move the portage state under /usr (on /) and leave symlinks behind
cp -a /var/lib/portage /usr/var_lib_portage
cp -a /var/db/pkg /usr/var_db_pkg
rm -rf /var/lib/portage /var/db/pkg
ln -s /usr/var_lib_portage /var/lib/portage
ln -s /usr/var_db_pkg /var/db/pkg
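A quick sanity check after relinking (the symlinks are transparent to portage, so a dry-run emerge should still find its database):

ls -ld /var/db/pkg /var/lib/portage   # both should now be symlinks into /usr
emerge --pretend --verbose sys-apps/portage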
[1] https://wiki.gentoo.org/wiki/Directories
[2] Or at least it is painful. To avoid the initial circular-dependency hell, issue:
emerge --nodeps dev-lang/perl dev-lang/python dev-libs/libxml2 dev-util/cmake \
  dev-util/pkgconfig sys-apps/acl sys-apps/systemd sys-devel/automake \
  sys-libs/glibc sys-libs/ncurses sys-libs/zlib virtual/libudev
[3] What you'd have to recreate: powertop's calibration measurements, genkernel's busybox and initramfs.
zfs set ditto blocks after file system creation
"The copies property works for all new writes, so I recommend that you set that policy when you create the file system or immediately after you create a zpool." [1]
So how can you force a complete re-read and rewrite of the existing data?
With a (non-incremental) backup and restore:
# just setting the property won't work:
zfs set copies=2 POOL/FS
zfs snapshot SNAPSHOT
zfs send SNAPSHOT | xz --threads=12 --verbose > FILE.img.xz
zfs destroy POOL/FS
xz --threads=12 --decompress --verbose FILE.img.xz -c | zfs receive POOL/FS

# so you have to create a new FS with copies=2 and overwrite it:
zfs snapshot SNAPSHOT
zfs send SNAPSHOT | xz --threads=12 --verbose > FILE.img.xz
zfs destroy POOL/FS
zfs create ... -o copies=2 POOL/FS
xz --threads=12 --decompress --verbose FILE.img.xz -c | zfs receive -F POOL/FS
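Afterwards you can check that the property stuck and that the data was really rewritten; with copies=2, used should be roughly double what the same data occupied before:

zfs get copies POOL/FS
zfs list -o name,used,refer POOL/FS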
[1] https://blogs.oracle.com/relling/entry/zfs_copies_and_data_protection
ZFS dataset hierarchy on a single-user machine
# the pool
zpool create -o ashift=12 -O mountpoint=none -O atime=off -O snapdir=visible \
  rpool /dev/mapper/crypt_zfs

# Create filesystems: rootfs, var and home.
# rootfs and home keep 2 copies of each file, as a mirror within a single device.
zfs create -o copies=2 -o compress=lz4 -o mountpoint=/ rpool/rootfs
zfs create -o copies=2 -o compress=lz4 -o mountpoint=/home rpool/home

# var is not a child of rootfs, and the zfs daemon would mount it
# after systemd creates it, leading to a "cannot mount /var,
# directory already exists" error, hence mountpoint=legacy. see link [1]
# first attempt, superseded below:
# zfs create -o compress=lz4 -o quota=20G -o mountpoint=legacy rpool/var
# on second thought, copies=2 makes more sense than quota:
zfs create -o copies=2 -o compress=lz4 -o mountpoint=legacy rpool/var
# /etc/fstab should contain only this line:
#   rpool/var /var zfs defaults 0 0

# var has 2 children with no compression
zfs create -o compress=off -o mountpoint=/var/portage/distfiles rpool/var/portage_distfiles
zfs create -o compress=off -o mountpoint=/var/portage/packages rpool/var/portage_packages

# swap; check the block size with getconf PAGESIZE (default is 4K)
zfs create -V 4G -b 4K rpool/swap
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap

# snapshot rootfs before system updates, snapshot home regularly,
# and reset var to the initial (right-after-bootstrap) snapshot
# when it grows too big (see the sketch after this block)

zfs umount -a
zpool set bootfs=rpool/rootfs rpool
zpool export rpool
zpool import -R /mnt/rpool rpool
chroot /mnt/rpool
# install...
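A minimal sketch of that snapshot policy (the names @pre-update, @daily and @bootstrap are made up here; @bootstrap assumes you snapshotted rpool/var right after the bootstrap):

zfs snapshot rpool/rootfs@pre-update-$(date +%Y%m%d)  # before each system update
zfs snapshot rpool/home@daily-$(date +%Y%m%d)         # regularly, e.g. from a daily cron job
zfs rollback -r rpool/var@bootstrap                   # reset var; -r destroys the newer snapshots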
TODO: make rootfs read-only and mount it read-write only during system updates.
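One possible way to do that with ZFS alone (an untested sketch; readonly is a real dataset property, the update workflow around it is my assumption):

zfs set readonly=on rpool/rootfs    # normal operation
# at update time:
zfs set readonly=off rpool/rootfs
emerge --sync && emerge --update --deep @world
zfs set readonly=on rpool/rootfs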
zfs backup to file
# create snapshot
zfs snapshot POOL/FS@DESCRIPTION
# list snapshots
zfs list -t snapshot
# save
zfs send SNAPSHOT | xz --threads=12 --verbose > FILE.img.xz
# restore
xz --threads=12 --decompress --verbose FILE.img.xz -c | zfs receive POOL/NEW_FS
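Once there is more than one snapshot, the same pattern works incrementally; note that the receiving side must already contain the older snapshot (the snapshot names below are examples):

zfs snapshot POOL/FS@first
# ... time passes, data changes ...
zfs snapshot POOL/FS@second
zfs send -i POOL/FS@first POOL/FS@second | xz --threads=12 --verbose > FILE.inc.img.xz
xz --threads=12 --decompress --verbose FILE.inc.img.xz -c | zfs receive POOL/NEW_FS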