Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-26 Thread Henryk Paluch

Hello!

> Previously, in essentially this configuration, I had run into
> reproducible freezes during dump -X. With the current setup, however, I
> can no longer reproduce this behaviour.

That "freeze" is more likely a kernel panic. fss(4) is known to corrupt 
the kernel memory pool under high I/O load. The two commands below 
reliably crash any NetBSD kernel (tested on 10.1+) within a few seconds:


fssconfig -cx fss0 / /root/backup
dd if=/dev/fss0 of=/dev/null bs=1024k

After a few seconds:

 uvm_fault(0x81976c20, 0x0, 1) -> e
[ 883.0589643] fatal page fault in supervisor mode
[ 883.0589643] trap type 6 code 0 rip 0x80e261f3 cs 0x8 rflags 
0x10202 cr2 0x9 ilevel 0 rsp 0x9d04c49d4c10
[ 883.0589643] curlwp 0x8b60ae5fe400 pid 0.1215 lowest kstack 
0x9d04c49d02c0

kernel: page fault trap, code=0
Stopped in pid 0.1215 (system) at   netbsd:pool_get+0x2b9:  cmpq %r15,8(%rax)
pool_get() at netbsd:pool_get+0x2b9
allocbuf() at netbsd:allocbuf+0x113
getblk() at netbsd:getblk+0x18c
bio_doread() at netbsd:bio_doread+0x1d
breadn() at netbsd:breadn+0x8b
ffs_snapshot_read() at netbsd:ffs_snapshot_read+0x1b2
VOP_READ() at netbsd:VOP_READ+0x42
vn_rdwr() at netbsd:vn_rdwr+0xf1
fss_bs_io() at netbsd:fss_bs_io+0x89
fss_bs_thread() at netbsd:fss_bs_thread+0x50f

It is clear from the stack trace that the trigger is fss(4).

I encounter the same crashes with "dump -X" - however, it usually takes 
around 20 minutes to crash the kernel under those conditions. I reported 
this as https://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=59663 
on 21 Sep 2025, but there has been no reply so far.


LVM snapshots use a different (and much easier to understand) code path 
than fss(4), which operates at the filesystem level, so they should work 
without problems.






Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-25 Thread Matthias Petermann

Hi all,

thanks a lot for the many thoughtful replies and perspectives in this 
thread - I found them genuinely helpful.


I wanted to add a short follow-up with a concrete data point, as I had 
the chance today to set up a clean test environment to re-check an issue 
I had observed some months ago.


Test setup:

- NetBSD 10.1_STABLE (built from 2026-01-08), Xen 4.18
- NetBSD Dom0 and identical NetBSD DomU
- DomU filesystem hosted on an LVM volume provided by the NetBSD Dom0
- FFSv2ea + WAPBL as DomU root filesystem
- Backups using dump -X (FSS)

Previously, in essentially this configuration, I had run into 
reproducible freezes during dump -X. With the current setup, however, I 
can no longer reproduce this behaviour.


For reference:

- DomU: 192.168.2.252
- Backup target: USB disk mounted at /mnt
- DomU filesystem ~6 GB used

Tests were run repeatedly and also under mixed load (a bonnie++ load 
test running in parallel inside the DomU)


I did not observe any stalls, hangs, or other instabilities.

The test loop used was:

```
for i in `seq 100 199`; do
  ssh [email protected] "/sbin/dump -X -h 0 -b 64 -0auf - /" \
> /mnt/backup$i.dump
done
```
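As a possible hardening of such a loop (a sketch, not part of the original test; it assumes restore(8) is available on the backup host and that listing a dump's table of contents is an adequate integrity check), each dump could be verified right after it is written:

```shell
for i in `seq 100 199`; do
  ssh [email protected] "/sbin/dump -X -h 0 -b 64 -0auf - /" \
    > /mnt/backup$i.dump || { echo "dump $i failed" >&2; break; }
  # restore -t lists the dump's table of contents without extracting;
  # a non-zero exit hints at a truncated or unreadable dump file.
  restore -tf /mnt/backup$i.dump > /dev/null 2>&1 \
    || { echo "dump $i failed verification" >&2; break; }
done
```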

Under these conditions, the LVM-based approach appears to work reliably 
again and is, at least for me, back to being a viable option.


Many thanks again to everyone for sharing their experiences and insights 
- they were the motivation to revisit and re-test this properly.


Best regards,
Matthias


Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-22 Thread Sad Clouds
On Thu, 22 Jan 2026 09:50:56 +
Sad Clouds  wrote:

> # zfs create -V 10G -o volblocksize=32K -o compression=off zroot/zvol_test

One thing I forgot to mention - after creating the zvol, remember to fill
it with data; otherwise ZFS will return zeros from memory without any
disk I/O.
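For example (a sketch, assuming the usual /dev/zvol/<pool>/<name> device path for the zvol created earlier in this thread, and FreeBSD-style dd block sizes):

```shell
# Fill the zvol once with incompressible data so that subsequent read
# benchmarks have to touch the disks instead of returning zeros for
# never-written blocks.
dd if=/dev/urandom of=/dev/zvol/zroot/zvol_test bs=1m
```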


Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-22 Thread Sad Clouds
On Mon, 19 Jan 2026 21:15:17 +0100
Matthias Petermann  wrote:

> Unfortunately, once LVM enters the picture, I have repeatedly run into 
> situations where triggering an FSS snapshot causes severe stalls or 
> complete lockups. ZFS zvols, while very attractive feature-wise, showed 
> significantly lower performance in my setup compared to raw partitions, 
> and also compared poorly to simpler approaches such as CCD or even 
> vnd-backed storage.

Maybe it's a NetBSD-specific ZFS issue, or the overhead of ZFS software
RAID?

As a point of reference, I'm using hardware RAID-5 with FreeBSD-15 and
multithreaded zvol read performance is very close to raw disk. I need
to use multiple concurrent threads to saturate 4 SSDs.

# mfiutil show config
/dev/mfi0 Configuration: 1 arrays, 1 volumes, 0 spares
array 0 of 4 drives:
drive 16 (  466G) ONLINE  SATA
drive 14 (  466G) ONLINE  SATA
drive 17 (  466G) ONLINE  SATA
drive 15 (  466G) ONLINE  SATA
volume mfid0 (1396G) RAID-5 32K OPTIMAL spans:
array 0

# geom disk list
Geom name: mfid0
Providers:
1. Name: mfid0
   Mediasize: 1498675150848 (1.4T)
   Sectorsize: 512
   Mode: r1w1e2
   descr: (null)
   ident: (null)
   rotationrate: unknown
   fwsectors: 63
   fwheads: 255

# zpool status
  pool: zroot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
config:

NAMESTATE READ WRITE CKSUM
zroot   ONLINE   0 0 0
  mfid0p2   ONLINE   0 0 0


As you can see below, sequential read performance is very close, at
around 1 GiB/sec.

ZVol test:
# zfs create -V 10G -o volblocksize=32K -o compression=off zroot/zvol_test
# sysperf_disk mode=rd size=10GiB threads=12 : /dev/zvol/zroot/zvol_test
...
Aggregate metrics:
  10.00 GiB, 163840.00 Block(s) @ 64.00 KiB/Block
  9920.44 msec, 1.01 GiB/sec, 16515.40 Blocks/sec, 728.17 usec/Block


RAID-5 virtual drive test:
# sysperf_disk mode=rd size=10GiB direct threads=12 : /dev/mfid0
...
Aggregate metrics:
  10.00 GiB, 163840.00 Block(s) @ 64.00 KiB/Block
  8711.16 msec, 1.15 GiB/sec, 18808.05 Blocks/sec, 637.41 usec/Block


Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-21 Thread Greg A. Woods
At Wed, 21 Jan 2026 15:45:40 -0800, "Greg A. Woods"  wrote:
Subject: Re: Xen storage for NetBSD guests: performance vs. consistent backups 
(sanity check)
>
> In any case I've been using LVM LVs in my dom0s to host raw partitions
> for domUs since day one of using Xen, so since NetBSD-5 I think.  I find
> them to be the most performant and, perhaps more importantly, the most
> flexible solution overall.  I can easily extend an LVM if more space is
> necessary somewhere and then just resize the FFS for the domU, and
> they're very easy to configure and manipulate at any time.

This isn't a real benchmark, but here's the results of a simple test:

Simple file I/O performance test from dom0

# dd if=/dev/zero of=testfile bs=1m count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 18.033 secs (238172644 bytes/sec)
# dd if=testfile of=/dev/zero bs=1m
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 13.247 secs (324221883 bytes/sec)


and from a domU with LVM LV back-end storage:

# dd if=/dev/zero of=testfile bs=1m count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 14.750 secs (291184223 bytes/sec)
# dd if=testfile of=/dev/zero bs=1m
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 13.381 secs (320975061 bytes/sec)

bonnie-2.06nb3 runs (with "-s 4000"):

                ---Sequential Output---- ---Sequential Input-- --Random--
                -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine      MB  K/sec %CPU  K/sec %CPU K/sec %CPU  K/sec %CPU  K/sec %CPU  /sec %CPU
dom0-raw   4000 156980 91.9 268640 87.8 90492 62.1 159257 73.1 265574 51.6 844.5 11.4
dom0-lvm   4000 148048 84.8 269226 89.9 87670 60.3 155711 72.1 258751 51.6 781.0 10.1
domU       4000 196809 95.6 352908 89.3 95027 48.6 183517 82.3 315754 64.2 695.3 10.0

I don't really understand the bonnie numbers vs. the dd numbers.

Both dom0 and domU systems are the same NetBSD/amd64 9.99.81 version.

Both have 4096M of RAM allocated, but the domU can balloon to 12000M.

The dom0 is more or less idle beyond running NTP and whatever demand the
domUs put on it for storage I/O.  That domU is running SMTP, IMAP, HTTP,
and nsd, etc.

So I don't think I'm seeing very much of a performance hit from LVM.

--
Greg A. Woods 

Kelowna, BC +1 250 762-7675   RoboHack 
Planix, Inc.  Avoncote Farms 


pgpkDr272CoVF.pgp
Description: OpenPGP Digital Signature


Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-21 Thread Greg A. Woods
At Tue, 20 Jan 2026 10:20:54 +0100, Hauke Fath  wrote:
Subject: Re: Xen storage for NetBSD guests: performance vs. consistent backups 
(sanity check)
> 
> On Mon, 19 Jan 2026 21:15:17 +0100, Matthias Petermann wrote:
> > Unfortunately, once LVM enters the picture, I have repeatedly run 
> > into situations where triggering an FSS snapshot causes severe stalls 
> > or complete lockups.
> 
> This is on which version?
> 
> I am asking because I have been running ffs snapshot backups (TSM here) 
> on netbsd-9 DomUs for years without ever running into any issues.

I'm very surprised that LVM could play any part in causing stalls or
lock-ups.

That being said, I've never tried making FFS snapshots in any of my
NetBSD domUs.  (I just use rsync from the live filesystem, as I don't
have any important applications that continue to modify files all night
long.)  (Also, the terminology used around fssconfig(8) makes me
uncomfortable and makes me think I don't actually understand what it
all means -- it all seems backwards to me.)

It would be really interesting, and perhaps even important for us LVM
users, to try to track this down.  If there's a simple recipe that can
reproduce such stalls or hangs that would be very useful!

In any case I've been using LVM LVs in my dom0s to host raw partitions
for domUs since day one of using Xen, so since NetBSD-5 I think.  I find
them to be the most performant and, perhaps more importantly, the most
flexible solution overall.  I can easily extend an LVM if more space is
necessary somewhere and then just resize the FFS for the domU, and
they're very easy to configure and manipulate at any time.

I too had once thought of using CCD volumes, but they have far too many
limitations vs LVM; and having had lots of previous experience using the
same kind of LVM on AIX in a past job I realized that in combination
with good hardware RAID, well LVM was just the most obvious possible
choice.  I can take one giant raw partition provided by the RAID
controller and easily divide it up into as many LVs as necessary,
keeping any left over space available for expansion of any LV as
necessary.

That said I've not used ZFS on NetBSD yet.  I find it too piggy for the
systems I currently have, especially since they already have high-end
hardware RAID controllers that do caching and proactive scanning for bad
sectors -- maybe I can try it on a next generation of machines, though
those too will have even better hardware RAID and it just doesn't make
sense to me to run ZFS on top of high-end hardware RAID -- a layer too
many (though the compression features of ZFS are intriguing -- that
would be a feature I would think to add to LVM though, along with
ZFS-like "dedup" and "copies" -- performance starts to get problematic
with any of those features though).

I presume ZFS "volumes" (zvols?) can be extended in size just as easily
as LVM LVs.
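For comparison, a sketch of growing each kind of volume (all pool, VG, volume, and device names below are hypothetical; on NetBSD, resize_ffs(8) traditionally wants the filesystem unmounted):

```shell
# Grow a ZFS zvol: a single property change on the dom0.
zfs set volsize=30G tank/domu-www

# Grow a NetBSD LVM LV on the dom0:
lvm lvextend -L 30G /dev/vg0/domu-www

# ...then, inside the domU, grow the FFS after unmounting it
# (hypothetical xbd device name):
resize_ffs /dev/rxbd1a
```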

-- 
Greg A. Woods 

Kelowna, BC +1 250 762-7675   RoboHack 
Planix, Inc.  Avoncote Farms 


pgpphHj13TJch.pgp
Description: OpenPGP Digital Signature


Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-21 Thread Matthias Petermann

Hi Niels,

On 1/21/26 10:33, Niels Dettenbach wrote:

I would recommend trying it - it is easy to create a single-device zfs
pool on an empty disk, array, or even a partition.



I’m wondering whether the relatively weak CPU might be a bottleneck. I
seem to recall that during my load tests - especially with ZVOLs - I
observed a significant number of TLB shootdowns, whereas this did not
occur with raw/CCD devices. Could these observations be related?

Because of compatibility issues we use a Linux dom0, but plan to test /
migrate to NetBSD here as well. So I cannot vouch for the NetBSD version.

To me, ZFS is significantly faster than LVM - at least


thanks for the interesting details - much appreciated.

Using a Linux Dom0 means that all ZFS-related machinery lives in the 
Linux kernel. I would assume that this is a considerably more recent ZFS 
implementation than what I currently have available in NetBSD 10.1, and 
likely also more optimized from a performance perspective. That could 
very well explain some of the differences I observed, compared to your 
experiences.


I would indeed be very interested to see how the same setup behaves on 
identical hardware with NetBSD as Dom0, just to get a more direct 
comparison.


In principle, I could also try to reproduce a similar setup myself on my 
low-spec hardware to get some hands-on data. On the other hand, for this 
small home-server appliance I am trying to avoid a mixed setup. I also 
want to use this project as an exercise in supply-chain self-control and 
reduction of complexity; introducing Linux solely as Dom0 would somewhat 
defeat that goal and make the overall system more complex than I’d like.


Thanks again for sharing your experience.

Best regards,
Matthias


--
Für alle, die digitale Systeme verstehen und gestalten wollen:
jede Woche neue Beiträge zu Architektur, Souveränität und Systemdesign.
👉 https://www.petermann-digital.de/blog



Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-21 Thread Niels Dettenbach
Am Dienstag, 20. Januar 2026, 20:53:50 UTC+00:00:01 schrieben Sie:
> Just out of curiosity, what are the hardware specifications of this
> machine? I’ve realized that I forgot to include two additional pieces of
> context in my original post. First (as mentioned in my reply to Hauke),
> I’m using WAPBL in the guest filesystems. Second, this setup runs on a
> low-spec Intel NUC7CJYH (8 GB RAM, dual-core Celeron J4005).

The bare metal of the oldest machine is a ProLiant DL360 G8 (now over 10 
years old), but the Dom0 (where all of the ZFS magic happens) currently has 8 
GB dedicated RAM and 4 threads (meaning 2 real CPU cores) for 10 DomUs under 
severe load - never getting over 60% of overall CPU load (peaks) under high 
parallel DomU I/O. Disks are typical HPE SAS spindles (6 Gbit/s) up to modern 
SSDs.

As I wrote, we use single-device zfs pools only (on a hardware array with 
e.g. RAID 5 or simple mirroring - here typically some HPE P4xx or newer LSI 
MegaRAID), which may have an impact here, because the RAID logic does not 
have to be handled by ZFS on the CPU.

I would recommend trying it - it is easy to create a single-device zfs 
pool on an empty disk, array, or even a partition.


> I’m wondering whether the relatively weak CPU might be a bottleneck. I
> seem to recall that during my load tests - especially with ZVOLs - I
> observed a significant number of TLB shootdowns, whereas this did not
> occur with raw/CCD devices. Could these observations be related?
Because of compatibility issues we use a Linux dom0, but plan to test / 
migrate to NetBSD here as well. So I cannot vouch for the NetBSD version.

To me, ZFS is significantly faster than LVM - at least 


> That sounds interesting and reminds me of an experimental setup I once
> built with QEMU/nvmm. I mirrored ZVOL snapshots to a NAS and exposed
> them as iSCSI block devices for recovery purposes.
I do no iSCSI stuff in my setup and would avoid it, because in my experience 
it introduced a significant bottleneck.



> Sure, I’d be happy to take a look. I always enjoy reviewing such tools
> for inspiration and to see how others solve problems similar to the ones
> I’m facing.

zfs_snapper
https://github.com/nielsd/zfs_snapper



# create single device zfs pool:

(e.g. I create 2 pools - "tank" (for domU usage) and "backup" (local 
replication)).

 cat /root/sbin/zfs_create_pool_backup
sudo zpool create backup \
-f \
-o ashift=12 \
-O compression=lz4 \
-O normalization=formD \
-O atime=off \
-O xattr=sa \
-O canmount=off \
-O mountpoint=none \
/dev/disk_or_partition


# create zvols for domUs in dom0

#!/bin/bash
# create-zfs-zvol-domu.sh – size-limited ZVOL for Xen domU

NAME=$1
SIZE=$2
POOL=tank

[ -z "$NAME" ] && { echo "Usage: $0 <name> <size>"; exit 1; }
[ -z "$SIZE" ] && { echo "Size argument required (e.g. 20G)"; exit 1; }

# ZVOL with volblocksize 16K, lz4, size "reservation" (if you do not use
# "refreservation" you can/may "overbook" your pool)

zfs create -V "$SIZE" \
  -p \
  -o volblocksize=16K \
  -o compression=lz4 \
  -o logbias=throughput \
  -o primarycache=metadata \
  -o secondarycache=none \
  -o refreservation="$SIZE" \
  "$POOL/$NAME"

echo "→ ZVOL $POOL/$NAME created, size is fixed to $SIZE"
echo "  Device path: /dev/zvol/$POOL/$NAME"



## example cron script
cat /etc/cron.daily/zpool_backup.sh
#!/bin/bash

# create and rotate snapshots on zpool tank
/root/sbin/zfs_snapper tank --list-new --dosnap --doprune >>/var/log/zfs_snapper.log

# sync / replicate from tank to backup
/root/sbin/syncoid_to_backup.sh

# prune outdated snaps on backup pool
/root/sbin/zfs_prune backup --do-it >>/var/log/zfs_prune.log




cat /root/sbin/syncoid_to_backup.sh
#!/usr/bin/env bash
set -euo pipefail

SOURCE=tank
TARGET=backup/tank

echo "=== Syncoid tank → backup/tank – $(date) ==="

syncoid \
  --recursive \
  --no-sync-snap \
  --skip-parent \
  --compress=lz4 \
  --mbuffer-size=256M \
  "$SOURCE" "$TARGET"

### --debug ### for debug

echo "=== Syncoid done – $(date) ==="


Feel free to ask me in case of questions.



hth,

niels.


-- 
 ---
 Niels Dettenbach
 Syndicat IT & Internet
 https://www.syndicat.com
 PGP: https://syndicat.com/pub_key.asc
 ---
 








Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-20 Thread Hauke Fath
On Tue, 20 Jan 2026 20:29:06 +0100, Matthias Petermann wrote:
>> 
>> This is on which version?
>> 
>> I am asking because I have been running ffs snapshot backups (TSM here)
> on netbsd-9 DomUs for years without ever running into any issues.
> 
> I am experiencing this since the NetBSD 8 days I guess. I should have 
> mentioned, that I use WAPBL at the same time. How does this map to 
> your setup?

The DomUs all use WAPBL.

Cheerio,
Hauke

-- 
 The ASCII Ribbon CampaignHauke Fath
() No HTML/RTF in email Institut für Nachrichtentechnik
/\ No Word docs in email TU Darmstadt
 Respect for open standards Ruf +49-6151-16-21344


Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-20 Thread Matthias Petermann

Hi,

On 1/19/26 22:57, Niels Dettenbach wrote:

Hi Matthias,


Am 19.01.26 um 21:15 schrieb Matthias Petermann :

- Reliable, consistent online backups from running guests
- Good read/write performance, ideally close to raw disk access
- Flexibility in terms of free space allocation (on demand)


after testing / experimenting a lot months ago, we switched from an LVM Dom0 
(linux dom0) to a dom0 with a zfs pool (single-device zpool on a hardware 
array in our case) providing zvols as block devices for the domU.


Even with lz4 compression it is much faster than LVM for us, while we 
save >50% of disk space. We run e.g. internet servers and databases on it.



Just out of curiosity, what are the hardware specifications of this 
machine? I’ve realized that I forgot to include two additional pieces of 
context in my original post. First (as mentioned in my reply to Hauke), 
I’m using WAPBL in the guest filesystems. Second, this setup runs on a 
low-spec Intel NUC7CJYH (8 GB RAM, dual-core Celeron J4005).


I’m wondering whether the relatively weak CPU might be a bottleneck. I 
seem to recall that during my load tests - especially with ZVOLs - I 
observed a significant number of TLB shootdowns, whereas this did not 
occur with raw/CCD devices. Could these observations be related?


Snapshots are done by a script (I call it xen snapper) that I can provide 
as open source - with e.g. (multiple) daily + weekly snaps (the number of 
days/weeks can be configured), while I do backups simply by zfs 
replication via "syncoid" (from sanoid) to another internal zpool + an 
external one over WAN. Only changed blocks are replicated daily, saving a 
lot of time and traffic even compared to former incremental backup 
solutions (see e.g. xen-backup, which I formerly wrote for lvm / tar). 
Once a month I do a third backup via syncoid to a MacBook with an external 
Thunderbolt SSD, which is one zfs pool as well (Crucial X10 6TB).


That sounds interesting and reminds me of an experimental setup I once 
built with QEMU/nvmm. I mirrored ZVOL snapshots to a NAS and exposed 
them as iSCSI block devices for recovery purposes.


ZFS would be my preferred setup as well, and it might be worth trying it 
on a secondary system to identify potential issues and possibly help 
with fixes. I don’t mind that NetBSD’s ZFS isn’t at the latest upstream 
level (still based on illumos, if I recall correctly), but I have run 
into some issues occasionally - possibly related to the combination with 
relatively weak hardware.



There is currently no more elegant setup in my experience.

If you are interested in my snapper script and other small maintenance 
tools, I'm happy to provide them as open source.


Sure, I’d be happy to take a look. I always enjoy reviewing such tools 
for inspiration and to see how others solve problems similar to the ones 
I’m facing. That was also the motivation to show my ccdtool as a kind of 
"poor man’s LVM".



Best regards
Matthias





--
Für alle, die digitale Systeme verstehen und gestalten wollen:
jede Woche neue Beiträge zu Architektur, Souveränität und Systemdesign.
👉 https://www.petermann-digital.de/blog



Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-20 Thread Matthias Petermann

Hi,

On 1/20/26 10:20, Hauke Fath wrote:

On Mon, 19 Jan 2026 21:15:17 +0100, Matthias Petermann wrote:

Unfortunately, once LVM enters the picture, I have repeatedly run
into situations where triggering an FSS snapshot causes severe stalls
or complete lockups.


This is on which version?

I am asking because I have been running ffs snapshot backups (TSM here)
on netbsd-9 DomUs for years without ever running into any issues.


I have been experiencing this since the NetBSD 8 days, I guess. I should 
have mentioned that I use WAPBL at the same time. How does this map to 
your setup?

Best regards
Matthias



--
Für alle, die digitale Systeme verstehen und gestalten wollen:
jede Woche neue Beiträge zu Architektur, Souveränität und Systemdesign.
👉 https://www.petermann-digital.de/blog



Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-20 Thread Hauke Fath
On Mon, 19 Jan 2026 21:15:17 +0100, Matthias Petermann wrote:
> Unfortunately, once LVM enters the picture, I have repeatedly run 
> into situations where triggering an FSS snapshot causes severe stalls 
> or complete lockups.

This is on which version?

I am asking because I have been running ffs snapshot backups (TSM here) 
on netbsd-9 DomUs for years without ever running into any issues.

Cheerio,
Hauke

-- 
 The ASCII Ribbon CampaignHauke Fath
() No HTML/RTF in emailInstitut für Nachrichtentechnik
/\ No Word docs in email TU Darmstadt
 Respect for open standards  Ruf +49-6151-16-21344

Re: Xen storage for NetBSD guests: performance vs. consistent backups (sanity check)

2026-01-19 Thread Niels Dettenbach
Hi Matthias,

> Am 19.01.26 um 21:15 schrieb Matthias Petermann :
> 
> - Reliable, consistent online backups from running guests
> - Good read/write performance, ideally close to raw disk access
> - Flexibility in terms of free space allocation (on demand)


after testing / experimenting a lot months ago, we switched from an LVM Dom0 
(linux dom0) to a dom0 with a zfs pool (single-device zpool on a hardware 
array in our case) providing zvols as block devices for the domU.

Even with lz4 compression it is much faster than LVM for us, while we save >50% 
of disk space. We run e.g. internet servers and databases on it.

With zfs I can easily "overbook" disk space - in sum, zvols can have more 
gross capacity than the zfs pool really has, and even more virtually 
thanks to compression.
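That overbooking can be sketched as follows (pool and zvol names are hypothetical; `zfs create -s` skips the refreservation, which is what allows the sum of volsizes to exceed the pool):

```shell
# Hypothetical 100G pool "tank": three sparse (-s) 50G zvols whose
# combined volsize (150G) deliberately exceeds the pool's capacity.
zfs create -s -V 50G tank/domu-a
zfs create -s -V 50G tank/domu-b
zfs create -s -V 50G tank/domu-c

# Space is only consumed as blocks are actually written; keep an eye
# on usage so the pool never fills up completely:
zfs list -o name,volsize,used,refreservation -r tank
```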

Snapshots are done by a script (I call it xen snapper) that I can provide 
as open source - with e.g. (multiple) daily + weekly snaps (the number of 
days/weeks can be configured), while I do backups simply by zfs replication 
via "syncoid" (from sanoid) to another internal zpool + an external one over 
WAN. Only changed blocks are replicated daily, saving a lot of time and 
traffic even compared to former incremental backup solutions (see e.g. 
xen-backup, which I formerly wrote for lvm / tar). Once a month I do a third 
backup via syncoid to a MacBook with an external Thunderbolt SSD, which is 
one zfs pool as well (Crucial X10 6TB).

There is currently no more elegant setup in my experience.

If you are interested in my snapper script and other small maintenance tools, 
I'm happy to provide them as open source.

hth,
cheers,

niels.
— 
Niels Dettenbach
https://www.syndicat.com
https://www.syndicat.com/pub_key.asc