This page may be dated.
The basic procedure for converting an existing Linux installation to use ZFS on Linux is outlined here. These instructions are designed for Debian but may be adapted for other systems.
If you use LVM, use /dev/mapper/* names with ZFS for best results.
Planning
This guide is for those familiar with Linux, ZFS, and how Linux boots. If this whole page looks daunting to you, please don't attempt it.
A comment: Booting from ZFS with LVM is not well-tested and has some pitfalls, but it can work. LVM makes a convenient way to copy data, though, if there is enough space free in your VG.
Any pool that grub will need to read must not use any raidz; it must be simple or mirrored.
You will either need to have enough spare space on your system to keep your data, or some place to move it to, then you can reformat the partitions and move it back. One nice hint: you can dd the live rescue ISO onto a USB memory stick, then use cfdisk or parted to add a partition consuming the spare space at the end, and use that partition to hold data. (Note: re-writing the ISO will erase the partition!)
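A hedged sketch of that trick, where rescue.iso and /dev/sdX are placeholders for the downloaded image and the USB stick (writing the image erases everything on the stick, and the new partition number depends on what the tool assigns):
dd if=rescue.iso of=/dev/sdX bs=4M conv=fsync
cfdisk /dev/sdX        # add a partition in the leftover space at the end
mkfs.ext4 /dev/sdX3    # partition number is whatever cfdisk created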
Another hint: you can create your zpool on a new device temporarily, copy your data to it, unmount the source, then zpool attach the source partition as a mirror. Use zpool status to watch the copy progress, then zpool detach the temporary device when it's done. Just make sure that the temporary device is no larger than the place where you want it to go in the end. (It is fine if it is smaller, as long as it's big enough to hold your data.)
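A minimal sketch of that dance, with /dev/sdTEMP1 and /dev/sdFINAL1 as stand-in names for the temporary device and the final destination partition:
zpool create rpool /dev/sdTEMP1          # build the pool on the temporary device
# ... populate rpool, then unmount the source filesystem ...
zpool attach rpool /dev/sdTEMP1 /dev/sdFINAL1   # mirror onto the final partition
zpool status rpool                       # watch the resilver progress
zpool detach rpool /dev/sdTEMP1          # drop the temporary device when done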
Preparation
On the target system, install the ZFS packages using the zfsonlinux.org instructions, which are:
$ su -
# wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_2%7Ewheezy_all.deb
# dpkg -i zfsonlinux_2~wheezy_all.deb
# apt-get update
# apt-get install debian-zfs
Then, also install the ZFS versions of grub and the initramfs support:
apt-get install grub-pc zfs-initramfs
Make sure your grub has ZFS support:
dpkg -s grub-pc | grep Version
You should see “zfs” in that output string.
Set ZFS arc max
Create a file /etc/modprobe.d/local-zfs.conf
and add:
options zfs zfs_arc_max=536870912
for a 512MB ZFS cache. This may need to be tweaked up or down on your system. See things like the ZFS Evil Tuning Guide or other references for guidance.
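For reference, the value is just a byte count; this is where 536870912 comes from, and the same arithmetic works for other sizes:
echo $((512 * 1024 * 1024))    # prints 536870912, i.e. 512MB in bytes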
If you have already loaded the zfs module (check with lsmod | grep zfs), you may need to run update-initramfs -u and then reboot for this to take effect.
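After the reboot, one way to confirm the limit took effect (assuming the zfs module is loaded) is the module parameter exposed under /sys:
cat /sys/module/zfs/parameters/zfs_arc_max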
zfs mounting workaround (for separate /usr, /var)
Bug reference: pkg-zfs #101
If you have /usr, /var, /home, etc. in separate ZFS filesystems, the default zfs mount -a script runs too late in the boot process for most system scripts. To fix it, edit /etc/insserv.conf, and at the end of the $local_fs line, add zfs-mount (without a plus).
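For illustration only (the entries already on that line differ between releases; the trailing zfs-mount is the only addition), the edited line might look something like:
$local_fs       +mountall +mountall-bootclean +mountoverflowtmp +umountfs zfs-mount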
Also, edit /etc/init.d/zfs-mount and find three lines near the top, changing them like this:
# Required-Start:
# Required-Stop:
# Default-Start: S
Additional workaround for cryptdisks
If you have a zpool atop dm-crypt, you will also need to edit a few more files.
In /etc/init.d/zfs-mount, at the top, set:
# Required-Start: cryptdisks-early
and before zfs mount -a, add:
zpool import -a
In /etc/init.d/mountall-bootclean.sh, set:
# Required-Start: mountall zfs-mount
Activating init.d changes
Then run:
insserv -v -d zfs-mount
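To sanity-check that insserv actually wired the script into the boot sequence, the rcS.d links are a reasonable thing to look at (assuming the classic sysv-rc layout):
ls /etc/rcS.d/ | grep zfs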
If rpool is on LVM: initramfs bug workaround
Bug reference: pkg-zfs #102
If your rpool is on LVM, save this as /usr/share/initramfs-tools/scripts/local-top/jgoerzenactivatevg:
#!/bin/sh
# Workaround to make sure LVM is activated for ZFS
# from http://wiki.complete.org/ConvertingToZFS by John Goerzen

PREREQ="mdadm mdrun multipath"

prereqs()
{
    echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
    prereqs
    exit 0
    ;;
esac

# source for log_*_msg() functions, see LP: #272301
. /scripts/functions

#
# Helper functions
#
message()
{
    if [ -x /bin/plymouth ] && plymouth --ping; then
        plymouth message --text="$@"
    else
        echo "$@" >&2
    fi
    return 0
}

udev_settle()
{
    # Wait for udev to be ready, see https://launchpad.net/bugs/85640
    if [ -x /sbin/udevadm ]; then
        /sbin/udevadm settle --timeout=30
    elif [ -x /sbin/udevsettle ]; then
        /sbin/udevsettle --timeout=30
    fi
    return 0
}

activate_vg()
{
    # Sanity checks
    if [ ! -x /sbin/lvm ]; then
        message "jgoerzenactivatevg: lvm is not available"
        return 1
    fi

    # Detect and activate available volume groups
    /sbin/lvm vgscan
    /sbin/lvm vgchange -a y --sysinit
    return $?
}

udev_settle
activate_vg

exit 0
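initramfs-tools only picks up executable scripts, so mark it executable and rebuild the initramfs:
chmod 755 /usr/share/initramfs-tools/scripts/local-top/jgoerzenactivatevg
update-initramfs -u -k all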
Create pools and filesystems
Create your zpools and ZFS filesystems.
For instance:
zpool create rpool /dev/whatever
zfs create rpool/hostname-1
zfs create rpool/hostname-1/ROOT
If you want a separate /usr, /var, and /home, you might also:
zfs create rpool/hostname-1/usr
zfs create rpool/hostname-1/var
zfs create rpool/hostname-1/home
For swap, you might do this:
zfs create -V 1G -b 4K rpool/swap
mkswap -f /dev/rpool/swap
The -V gives the size of the swap, and the -b the blocksize. Per the zfsonlinux.org FAQ, the blocksize of swap should match the system’s pagesize, and on amd64, that’s 4K.
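If you want to confirm the page size on your machine rather than assume 4K:
getconf PAGESIZE    # prints 4096 on typical amd64 systems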
I usually disable atime on my systems, so:
zfs set atime=off rpool
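The property is inherited by the child filesystems; a quick check:
zfs get -r atime rpool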
Configure ZFS default
Edit /etc/default/zfs and set ZFS_MOUNT='yes'.
Initial Copy
Now, you can prepare an initial run of populating the target filesystem with rsync. Be careful with how you do this! The -x flag keeps rsync from crossing filesystem boundaries, and -H, -A, -X, and -S preserve hard links, ACLs, extended attributes, and sparse files.
rsync -avxHAXS --delete / /rpool/hostname-1/ROOT
For additional /usr, /var, etc:
rsync -avxHAXS --delete /usr/ /rpool/hostname-1/usr
rsync -avxHAXS --delete /var/ /rpool/hostname-1/var
rsync -avxHAXS --delete /home/ /rpool/hostname-1/home
The trailing slash after /usr/ is important.
Prepare for Reboot
Now, it is time to prepare the system for reboot.
First, we need to edit /etc/fstab. I'd start by saving it off:
cd /etc
cp fstab fstab.old
Now, you’ll probably comment out everything that was converted to ZFS. Then add:
rpool/hostname-1/ROOT / zfs defaults 0 0
/dev/rpool/swap none swap sw 0 0
You do not need to list /usr, /var, etc. here since they will be auto-mounted by zfs.
Now, we need to configure the mountpoint for root in zfs. First, we have to unmount it so we can do this on a running system:
zfs umount -a
zfs set mountpoint=/ rpool/hostname-1/ROOT
zpool set bootfs=rpool/hostname-1/ROOT rpool
If you’re using /usr, /var, and the like, also:
zfs set mountpoint=/usr rpool/hostname-1/usr
zfs set mountpoint=/var rpool/hostname-1/var
zfs set mountpoint=/home rpool/hostname-1/home
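Before exporting, it doesn't hurt to double-check the mountpoints and the bootfs property (dataset names follow the example layout above):
zfs list -o name,mountpoint -r rpool
zpool get bootfs rpool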
Finally, we unmount the zpool entirely:
zpool export rpool
Reboot to rescue disk
Now it’s time to boot to the ZFS Rescue Disc to finish the installation.
Select live from the boot menu, and when you get the shell prompt, run sudo -s to become root.
Mount filesystems
You’ll first mount the old filesystems on the rescue environment. We’ll mount them under /tmp/old.
mkdir /tmp/old
mount /dev/blah /tmp/old
mount /dev/blah2 /tmp/old/usr
Then, you can import the pool under /tmp/new:
mkdir /tmp/new
zpool import -R /tmp/new rpool
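A quick way to confirm the datasets came up under the alternate root (again using the example dataset names):
zfs list -o name,mountpoint -r rpool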
Final Copy
This is the main reason we use the rescue disk: to get a good copy. A running system will have files in use, and also will have things mounted over places like /dev that mask what’s there.
So, do a final rsync of all filesystems as before.
rsync -avxHAXS --delete /tmp/old/ /tmp/new
rsync -avxHAXS --delete /tmp/old/usr/ /tmp/new/usr
rsync -avxHAXS --delete /tmp/old/var/ /tmp/new/var
rsync -avxHAXS --delete /tmp/old/home/ /tmp/new/home
Most likely, there will be changes under / and /var but not /usr.
Prepare for booting
Now it’s time to prepare for booting.
First, we set up the /tmp/new for chrooting:
mount -o bind /dev /tmp/new/dev
mount -o bind /sys /tmp/new/sys
mount -o bind /proc /tmp/new/proc
Now, we enter the new filesystem:
chroot /tmp/new
If you installed from the Debian Live CD, you’ll have to remove the live tools:
dpkg --purge live-tools
Now, update the initramfs:
update-initramfs -k all -u
If you wish, you can make sure ZFS support was included:
mkdir /tmp/t
cd /tmp/t
zcat /boot/initrd.... | cpio -i
find . -iname '*zfs*'
You should see zfs.ko, /sbin/zfs, etc. listed.
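A quicker alternative check, assuming initramfs-tools' lsinitramfs is present and the running kernel version matches the initramfs you just rebuilt:
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i zfs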
Now, run:
update-grub
and install grub:
grub-install /dev/whatever
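If you want to see that update-grub actually emitted ZFS-aware entries, a quick grep of the standard grub.cfg location works:
grep -i zfs /boot/grub/grub.cfg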
Grub mirrored rpool workaround
Bug reference: zfsonlinux/grub #6
If you have a mirrored rpool, you may get an error at this point:
error: cannot find a GRUB drive for .... Check your device.map.
If this happens, create /usr/local/bin/grub-probe, with these contents:
#!/bin/bash
/usr/sbin/grub-probe "$@" | head -1
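The wrapper has to be executable before grub-install can call it:
chmod 755 /usr/local/bin/grub-probe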
Then run:
grub-install --grub-probe=/usr/local/bin/grub-probe /dev/blah
Now, exit the chroot and umount everything:
exit
umount /tmp/new/{proc,sys,dev}
zpool export rpool
reboot
Your system should now boot!
Troubleshooting
error: couldn’t find a valid label (grub)
See http://comments.gmane.org/gmane.linux.file-systems.zfs.user/2485. This may occur if / is on ZFS but /boot is not. It may require fiddling with grub.cfg or the scripts that generate it. When I saw this, it just required me to press a key to continue booting.
/usr, /var, etc aren’t mounting
See tips above.
See Also
Submitted Bugs
- Mounting datasets too late in boot process: pkg-zfs #101
- rpool on LVM not working: pkg-zfs #102
- grub on mirrored rpool: zfsonlinux/grub #6