ZFS
ZFS Installation and RAID-Z2 Setup Guide
Update system packages
sudo apt update
Install ZFS
sudo apt install zfsutils-linux
Check available disks
lsblk
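Pools built on /dev/sdX names can break if the kernel reorders devices at boot; using stable /dev/disk/by-id paths when creating the pool avoids this. A quick way to map the two (assuming the by-id directory is populated, as on most Linux systems):
# Match /dev/sdX devices to their persistent by-id names
ls -l /dev/disk/by-id/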
Create a ZFS pool with RAID-Z2
# RAID-Z2 dedicates two disks' worth of parity per vdev, so four disks is the practical minimum.
sudo zpool create EVA01 raidz2 /dev/sdW /dev/sdX /dev/sdY /dev/sdZ
Check the pool status
sudo zpool status EVA01
Set the record size to 1M (well suited to large, sequentially read files)
sudo zfs set recordsize=1M EVA01
Enable LZ4 compression
sudo zfs set compression=lz4 EVA01
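To confirm both properties took effect (child datasets inherit them unless overridden), and to see how well the data compresses once written:
# Verify recordsize and compression on the pool's root dataset
zfs get recordsize,compression EVA01
# Check the achieved compression ratio
zfs get compressratio EVA01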
Schedule regular maintenance
# Launch a scrub to check and repair errors.
sudo zpool scrub EVA01
# Add a monthly scrub to the system crontab (entries in /etc/crontab need a user field).
echo "0 0 1 * * root /usr/sbin/zpool scrub EVA01" | sudo tee -a /etc/crontab
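A scrub runs in the background; progress, estimated time remaining, and any repaired errors appear in the pool status:
# Check scrub progress and results
sudo zpool status EVA01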
Create a dataset for Series
sudo zfs create EVA01/Series
Set a custom mount point
sudo zfs set mountpoint=/Series EVA01/Series
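ZFS remounts the dataset at the new path automatically; this can be confirmed with:
# Verify the dataset is mounted at /Series
zfs get mountpoint EVA01/Series
df -h /Series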
Enable deduplication (use with caution)
# Dedup keeps its block-checksum table in RAM and is memory-hungry; only enable it when the data actually contains many duplicate blocks.
sudo zfs set dedup=on EVA01
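The space actually saved shows up at the pool level:
# The DEDUP column reports the achieved deduplication ratio
sudo zpool list EVA01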
Set the snapshot limit to 10
sudo zfs set snapshot_limit=10 EVA01
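Both the limit and the current count can be read back as properties:
# Confirm the limit and how many snapshots currently exist
zfs get snapshot_limit,snapshot_count EVA01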
Optimize cache
# Configure the primary cache (ARC) to store only metadata, so file data isn't cached; useful when cached data is rarely re-read, e.g., large sequential media streams.
sudo zfs set primarycache=metadata EVA01
Delete the RAID-Z2 Pool
Export the ZFS pool
# Exporting detaches the pool intact, e.g., to move the disks to another machine.
sudo zpool export pool_name
Destroy the ZFS pool
# Note: a pool must be imported to be destroyed, so skip the export step if the goal is destruction.
sudo zpool destroy pool_name
List current ZFS pools
# Verify that the pool has been destroyed.
sudo zpool list
Wipe filesystem signatures from each disk
sudo wipefs -a /dev/sda
sudo wipefs -a /dev/sdb
sudo wipefs -a /dev/sdc
sudo wipefs -a /dev/sdd
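The same four commands can be collapsed into one loop (adjust the device list to match your pool's disks):
# Wipe signatures from all four former pool members
for disk in /dev/sd{a..d}; do sudo wipefs -a "$disk"; done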
Snapshot
sudo zfs snapshot EVA01@snapshot
sudo zfs list -t snapshot
NAME            USED  AVAIL  REFER  MOUNTPOINT
EVA01@snapshot    0B      -  1023M  -
USED: The additional space the snapshot consumes due to data that has changed since it was created. Here it shows 0B, meaning nothing has been modified since the snapshot was taken.
AVAIL: This field is not applicable to snapshots, hence it displays -.
REFER: The size of the dataset at the time the snapshot was taken, in this case 1023M. This is the total size of the data referenced by the snapshot when it was created.
MOUNTPOINT: ZFS snapshots are typically not mounted and are not directly accessible in the file system as a normal directory, hence it displays -.
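Snapshot contents remain reachable read-only through the hidden .zfs directory, and the dataset can be reverted to the snapshot's state (assuming the pool's root dataset is mounted at the default /EVA01; note that a rollback discards every change made after the snapshot):
# Browse the snapshot's contents read-only
ls /EVA01/.zfs/snapshot/snapshot/
# Revert the dataset to the snapshot's state
sudo zfs rollback EVA01@snapshot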
#!/bin/bash
# Create a new snapshot of the pool's root dataset, named with the current date and time
zfs snapshot EVA01@$(date +"%Y-%m-%d_%H-%M-%S")
# If the number of snapshots exceeds the limit, delete the oldest one
# (-s creation sorts oldest first; ^EVA01@ avoids matching child datasets like EVA01/Series)
snapshots=$(zfs list -H -o name -t snapshot -s creation | grep -c '^EVA01@')
if [ "$snapshots" -gt 10 ]; then
    oldest_snapshot=$(zfs list -H -o name -t snapshot -s creation | grep '^EVA01@' | head -n 1)
    zfs destroy "$oldest_snapshot"
fi
# Crontab entry for root (sudo crontab -e): run the snapshot script on the 1st of each month at midnight
0 0 1 * * /snapshot.sh
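Assuming the script is saved at /snapshot.sh as the crontab entry expects, it must be executable:
sudo chmod +x /snapshot.sh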
Safe Reboot and Startup Procedure for a ZFS System
Unmount all ZFS filesystems
sudo zfs unmount -a
Export the ZFS pool
sudo zpool export EVA01
List all ZFS pools
# EVA01 should no longer appear after the export.
sudo zpool list
After reboot: import the ZFS pool
sudo zpool import EVA01
# Verify that datasets and mount points are back
zfs list
df -h
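If the pool doesn't come back under its name, a bare import scans attached disks for importable pools, and the search directory can be pointed at stable device names:
# Scan for importable pools
sudo zpool import
# Import using persistent device paths
sudo zpool import -d /dev/disk/by-id EVA01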
zectl
Usage Guide
zectl is a tool for managing boot environments on operating systems that use ZFS.
Installation
# On Arch Linux (if zectl is only available in the AUR on your system, use an AUR helper instead of pacman)
sudo pacman -S zectl
Creating a Boot Environment
sudo zectl create my-new-be
Listing Boot Environments
sudo zectl list
Activating a Boot Environment
sudo zectl activate my-new-be
sudo reboot
Deleting a Boot Environment
sudo zectl destroy my-old-be
Automating Boot Environments with zectl
Creation of Boot Environments
#!/bin/bash
# This script creates a new boot environment every week. These boot environments act as restore points that you can use to revert your system to a previous state.
# Name of the BE, including the date for easy identification
be_name="be_$(date +%Y-%m-%d)"
# Create the boot environment
sudo zectl create "$be_name"
echo "New boot environment created: $be_name"
Schedule with Cron
# To run this script every Sunday at 3:00 AM
0 3 * * 0 /path/to/create_be.sh
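The script must be executable, and because it calls sudo it is simplest to schedule it from root's crontab (sudo crontab -e):
chmod +x /path/to/create_be.sh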
Deletion of Old Boot Environments
#!/bin/bash
# This script deletes boot environments that are more than 30 days old. It's good practice to free up space and keep the system tidy.
# It relies on the be_YYYY-MM-DD naming convention used by the creation script above,
# and assumes `zectl list` prints a header row followed by one BE per line, name first.
cutoff=$(date --date='-30 days' +%s)
for be in $(sudo zectl list | awk 'NR > 1 {print $1}' | grep '^be_'); do
    # Parse the creation date out of the BE name; skip names that don't parse
    be_date=$(date --date="${be#be_}" +%s 2>/dev/null) || continue
    if [ "$be_date" -lt "$cutoff" ]; then
        sudo zectl destroy "$be"
        echo "Boot environment deleted: $be"
    fi
done
Schedule with Cron
# To run this script on the first day of each month at 4:00 AM
0 4 1 * * /path/to/delete_old_bes.sh
Useful
View all ZFS properties
zfs get all
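The full listing is long; it can be narrowed to specific properties and datasets:
# Show selected properties for a single dataset
zfs get recordsize,compression,dedup,mountpoint EVA01/Series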