On Tuesday, 20 January 2026, 20:53:50 UTC+00:00:01, you wrote:
> Just out of curiosity, what are the hardware specifications of this
> machine? I’ve realized that I forgot to include two additional pieces of
> context in my original post. First (as mentioned in my reply to Hauke),
> I’m using WAPBL in the guest filesystems. Second, this setup runs on a
> low-spec Intel NUC7CJYH (8 GB RAM, dual-core Celeron J4005).

The bare metal of the oldest machine is a ProLiant DL360 G8 (now over 10 
years old), but the Dom0 (where all of the ZFS magic happens) currently has 8 
GB of dedicated RAM and 4 threads (i.e. 2 real CPU cores) for 10 DomUs under 
severe load - never exceeding 60% overall CPU load (peaks) under heavy 
parallel DomU I/O. Disks range from typical HPE SAS spindles (6 Gbit/s) up to 
modern SSDs.

As I wrote, we use single-device ZFS pools only (on a hardware array with 
e.g. RAID 5 or simple mirroring - here typically some HPE P4xx or newer LSI 
MegaRAID controller), which may have an impact here, because the RAID logic 
does not have to be handled by ZFS / the CPU.

I would recommend trying it - it is easy to create a single-device ZFS pool 
on an empty disk, an array, or even a partition.


> I’m wondering whether the relatively weak CPU might be a bottleneck. I
> seem to recall that during my load tests - especially with ZVOLs - I
> observed a significant number of TLB shootdowns, whereas this did not
> occur with raw/CCD devices. Could these observations be related?
Because of compatibility issues we use a Linux Dom0, but plan to test / 
migrate to NetBSD here as well. So I cannot vouch for the NetBSD version.

To me, ZFS is significantly faster than LVM - at least in my experience.


> That sounds interesting and reminds me of an experimental setup I once
> built with QEMU/nvmm. I mirrored ZVOL snapshots to a NAS and exposed
> them as iSCSI block devices for recovery purposes.
I don't do any iSCSI in my setup and would avoid it, because in my 
experience it introduced a significant bottleneck.



> Sure, I’d be happy to take a look. I always enjoy reviewing such tools
> for inspiration and to see how others solve problems similar to the ones
> I’m facing.

zfs_snapper
https://github.com/nielsd/zfs_snapper



# create single device zfs pool:

(e.g. I create two pools - "tank" (for domU usage) and "backup" (local 
replication).)

 cat /root/sbin/zfs_create_pool_backup
sudo zpool create backup \
    -f \
    -o ashift=12 \
    -O compression=lz4 \
    -O normalization=formD \
    -O atime=off \
    -O xattr=sa \
    -O canmount=off \
    -O mountpoint=none \
    /dev/disk_or_partition


# create zvols for domUs in dom0

#!/bin/bash
# create-zfs-zvol-domu.sh – size-limited ZVOL for Xen domU

NAME=$1
SIZE=$2
POOL=tank

[ -z "$NAME" ] && { echo "Usage: $0 <zvol-name> <size>"; exit 1; }
[ -z "$SIZE" ] && { echo "Size argument required (e.g. 20G)"; exit 1; }

# ZVOL with volblocksize 16K, lz4 compression, and a size "reservation"
# (if you do not use "refreservation" you can/may "overbook" your pool)

zfs create -V "$SIZE" \
  -p \
  -o volblocksize=16K \
  -o compression=lz4 \
  -o logbias=throughput \
  -o primarycache=metadata \
  -o secondarycache=none \
  -o refreservation="$SIZE" \
  "$POOL/$NAME"

echo "→ ZVOL $POOL/$NAME created, size is fixed to $SIZE"
echo "  Device path: /dev/zvol/$POOL/$NAME"
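

To attach such a ZVOL to a domU, the device path goes straight into the Xen 
guest config as a phy: backend - a minimal sketch (the guest name "vm1" and 
the rest of the config are assumptions, not from my setup):

```
# /etc/xen/vm1.cfg (hypothetical guest) - disk backend is the raw ZVOL device
disk = [ 'phy:/dev/zvol/tank/vm1,xvda,w' ]
```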



## example cron script
cat /etc/cron.daily/zpool_backup.sh
#!/bin/bash

# create and rotate snapshots on zpool tank
/root/sbin/zfs_snapper tank --list-new --dosnap --doprune >>/var/log/zfs_snapper.log

# sync / replicate from tank to backup
/root/sbin/syncoid_to_backup.sh

# prune outdated snaps on backup pool
/root/sbin/zfs_prune backup --do-it >>/var/log/zfs_prune.log
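

The retention idea behind such a pruner can be sketched in plain bash (this 
is not the actual zfs_prune logic - the snapshot names are made-up examples; 
a real script would list them via `zfs list -H -t snapshot -o name`):

```shell
#!/usr/bin/env bash
# Keep the newest $KEEP snapshots, mark the rest as prune-eligible.
# Date-stamped names sort correctly with a plain lexical sort.
KEEP=3
snapshots="tank/vm1@auto-2026-01-15
tank/vm1@auto-2026-01-16
tank/vm1@auto-2026-01-17
tank/vm1@auto-2026-01-18
tank/vm1@auto-2026-01-19"

# Everything except the newest $KEEP entries would be destroyed.
to_prune=$(echo "$snapshots" | sort | head -n -"$KEEP")
echo "$to_prune"
```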




cat /root/sbin/syncoid_to_backup.sh
#!/usr/bin/env bash
set -euo pipefail

SOURCE=tank
TARGET=backup/tank

echo "=== Syncoid tank → backup/tank – $(date) ==="

syncoid \
  --recursive \
  --no-sync-snap \
  --skip-parent \
  --compress=lz4 \
  --mbuffer-size=256M \
  "$SOURCE" "$TARGET"

# add --debug to the syncoid call above for debugging

echo "=== Syncoid done – $(date) ==="
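

For completeness: restoring a domU volume from the backup pool is just a 
zfs send/receive in the other direction (dataset and snapshot names below 
are hypothetical examples):

```
# Shut down the domU first, then roll the ZVOL back from the backup pool.
# -F forces the target to roll back to the received snapshot state.
zfs send backup/tank/vm1@auto-2026-01-19 | zfs receive -F tank/vm1
```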


Feel free to ask if you have any questions.



hth,

niels.


-- 
 ---
 Niels Dettenbach
 Syndicat IT & Internet
 https://www.syndicat.com
 PGP: https://syndicat.com/pub_key.asc
 ---
 





