On 23/11/23 19:06, Jonathan Chen wrote:
On 22/11/23 19:49, Jonathan Chen wrote:
Hi,
I'm running a somewhat recent version of STABLE-13/amd64:
stable/13-n256681-0b7939d725ba: Fri Nov 10 08:48:36 NZDT 2023, and I'm
seeing some unusual behaviour with ZFS.
To reproduce:
1. one big empty disk, GPT scheme, 1 freebsd-zfs partition.
2. create a zpool, eg: tank
3. create 2 sub-filesystems, eg: tank/one, tank/two
4. fill each sub-filesystem with large files until the pool is ~80%
full. In my case I had 200 10GB files in each.
5. in one session run 'md5 tank/one/*'
6. in another session run 'md5 tank/two/*'
For most of my runs, one of the two sessions is starved of I/O while
the other one performs normally.
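Roughly, the steps above look like the sketch below (ada0, the file
sizes and the file counts are only placeholders; adjust them to the
disk being tested):

    # steps 1-3: partition the disk, create the pool and two datasets
    gpart create -s gpt ada0
    gpart add -t freebsd-zfs ada0
    zpool create tank ada0p1
    zfs create tank/one
    zfs create tank/two

    # step 4: write 200 x 10GB files into each dataset (~80% of the pool
    # in my case); /dev/urandom is slow but avoids trivially
    # compressible data
    for fs in one two; do
        for i in $(seq 1 200); do
            dd if=/dev/urandom of=/tank/$fs/file$i bs=1m count=10240
        done
    done

    # steps 5-6: run these concurrently, one in each session
    md5 /tank/one/*
    md5 /tank/two/*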
I've run a few more tests, and the issue appears to be isolated to my
Alder Lake-based system. So it is more likely an issue with the
'Alder Lake-S PCH SATA Controller [AHCI Mode]', or perhaps with the
scheduler's handling of the P and E cores.
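For anyone wanting to compare hardware, something like the following
should show whether the controller is the same Alder Lake-S PCH part
(ahci0 is assumed to be the SATA controller here):

    pciconf -lv ahci0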
I've updated to FreeBSD 14/STABLE, and I'm glad to report that this bug
has gone away. I personally suspect that the issue described in:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=274698
was the cause.
Cheers.
--
Jonathan Chen <[email protected]>