On Tue, Oct 14, 2025 at 05:38:29PM +0530, Kundan Kumar wrote:
> Number of writeback contexts
> ============================
> We've implemented two interfaces to manage the number of writeback
> contexts:
> 1) Sysfs Interface: As suggested by Christoph, we've added a sysfs
>    interface to allow users to adjust the number of writeback contexts
>    dynamically.
> 2) Filesystem Superblock Interface: We've also introduced a filesystem
>    superblock interface to retrieve the filesystem-specific number of
>    writeback contexts. For XFS, this count is set equal to the
>    allocation group count. When mounting a filesystem, we automatically
>    increase the number of writeback threads to match this count.

This is dangerous. What happens when we mount a filesystem with
millions of AGs?
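As a rough back-of-the-envelope illustration (my numbers, assuming only
the 16MB minimum AG size; the device and mkfs line are hypothetical): a
64TB device made with something like

  # mkfs.xfs -d agsize=16m /dev/<bigdev>

ends up with 64TiB / 16MiB = ~4.2 million AGs. Unconditionally scaling
writeback contexts - let alone writeback threads - to the AG count at
mount time simply does not work for a geometry like that.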
> Resolving the Issue with Multiple Writebacks
> ============================================
> For XFS, affining inodes to writeback threads resulted in a decline
> in IOPS for certain devices. The issue was caused by AG lock contention
> in xfs_end_io, where multiple writeback threads competed for the same
> AG lock.
> To address this, we now affine writeback threads to the allocation
> group, resolving the contention issue. In the best case, allocation
> happens from the same AG where the inode metadata resides, avoiding
> lock contention.

Not necessarily. The allocator can (and will) select different AGs for
an inode as the file grows and the AGs run low on space. Once they
select a different AG for an inode, they don't tend to return to the
original AG because allocation targets are based on contiguous
allocation w.r.t. existing adjacent extents, not the AG the inode is
located in.

Indeed, if a user selects the inode32 mount option, there is absolutely
no relationship between the AG the inode is located in and the AG its
data extents are allocated in. In these cases, using the inode-resident
AG is guaranteed to end up with a random mix of target AGs for the
inodes queued in that AG. Worse yet, there may only be one AG that can
have inodes allocated in it, so all the writeback contexts for the
other hundreds of AGs in the filesystem go completely unused...

> Similar IOPS decline was observed with other filesystems under different
> workloads. To avoid similar issues, we have decided to limit
> parallelism to XFS only. Other filesystems can introduce parallelism
> and distribute inodes as per their geometry.

I suspect that the issues with XFS lock contention are related to the
fragmentation behaviour observed (see below) massively increasing the
frequency of allocation work for a given amount of data being written,
rather than to increased writeback concurrency...

>
> IOPS and throughput
> ===================
> With the affinity to allocation group we see significant improvement in
> XFS when we write to multiple files in different directories (AGs).
>
> Performance gains:
> A) Workload 12 files each of 1G in 12 directories (AGs) - numjobs = 12
> - NVMe device BM1743 SSD

So, 80-100k random 4kB write IOPS, ~2GB/s write bandwidth.

>   Base XFS               : 243 MiB/s
>   Parallel Writeback XFS : 759 MiB/s (+212%)

As such, the baseline result doesn't feel right - it doesn't match my
experience with concurrent sequential buffered write workloads on SSDs.
My expectation is that they'd get close to device bandwidth or run out
of copy-in CPU somewhere over 3GB/s.

So what are you actually doing to get these numbers? What is the
benchmark (CLI and config file details, please!), what is the mkfs.xfs
output, and how many CPUs and how much RAM do the test machines have?
i.e. please document them sufficiently so that other people can verify
your results.

Also, what is the raw device performance and how close to that are we
getting through the filesystem?

> - NVMe device PM9A3 SSD

130-180k random 4kB write IOPS, ~4GB/s write bandwidth. So roughly
double the physical throughput of the BM1743, and ....

>   Base XFS               : 368 MiB/s
>   Parallel Writeback XFS : 1634 MiB/s (+344%)

.... it gets roughly double the physical throughput of the BM1743.

This doesn't feel like a writeback concurrency limited workload - this
feels more like a device IOPS and IO depth limited workload.
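One way to sanity check that (a suggestion of mine using standard
tooling; the device name is a placeholder): run something like

  # iostat -x 1 /dev/nvme0n1

while the benchmark is running, and compare the average write request
size and queue depth columns (avgrq-sz/avgqu-sz, or wareq-sz/aqu-sz on
newer sysstat) between the baseline and the parallel writeback runs. If
the baseline is issuing small (100-200kB) writes at low queue depth and
the parallel case is simply driving more of the same small IOs, that
points at the fragmentation problem below rather than at writeback
concurrency itself.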
> B) Workload 6 files each of 20G in 6 directories (AGs) - numjobs = 6
> - NVMe device BM1743 SSD
>   Base XFS               : 305 MiB/s
>   Parallel Writeback XFS : 706 MiB/s (+131%)
>
> - NVMe device PM9A3 SSD
>   Base XFS               : 315 MiB/s
>   Parallel Writeback XFS : 990 MiB/s (+214%)
>
> Filesystem fragmentation
> ========================
> We also see that there is no increase in filesystem fragmentation.
> Number of extents per file:

Are these from running the workload on a freshly made (i.e. just run
mkfs.xfs, mount and run the benchmark) filesystem, or do you reuse the
same fs for all tests?

> A) Workload 6 files each 1G in a single directory (AG) - numjobs = 1
>    Base XFS               : 17
>    Parallel Writeback XFS : 17

Yup, this implies a sequential write workload....

> B) Workload 12 files each of 1G to 12 directories (AGs) - numjobs = 12
>    Base XFS               : 166593
>    Parallel Writeback XFS : 161554

Which implies 144 files, and so over 1000 extents per file. That means
about 1MB per extent, which is way, way worse than it should be for
sequential write workloads.

> C) Workload 6 files each of 20G to 6 directories (AGs) - numjobs = 6
>    Base XFS               : 3173716
>    Parallel Writeback XFS : 3364984

36 files, 720GB and 3.3m extents, which is about 100k extents per file
for an average extent size of 200kB. That would explain why it
performed roughly the same on both devices - they both have similar
random 128kB write IO performance...

But that fragmentation pattern is bad and shouldn't be occurring for
sequential writes. Speculative EOF preallocation should be almost
entirely preventing this sort of fragmentation for concurrent
sequential write IO, and so we should be seeing extent sizes of at
least hundreds of MBs for these file sizes.

i.e. this feels to me like your test is triggering some underlying
delayed allocation defeat mechanism that is causing physical writeback
IO sizes to collapse. This turns what should be a bandwidth limited
workload running at full device bandwidth into an IOPS and IO depth
limited workload.

Adding writeback concurrency to this situation enables writeback to
drive deeper IO queues and so extract more small IO performance from
the device, thereby showing better performance for the workload. The
issue is that baseline writeback performance is way below where I
think it should be for the given IO workload (IIUC the workload being
run, hence the questions about benchmarks, filesystem configs and test
hardware).

Hence, while I certainly agree that writeback concurrency is definitely
needed, I think that the results you are getting here are a result of
some other issue that writeback concurrency is mitigating. The
underlying fragmentation issue needs to be understood (and probably
solved) before we can draw any conclusions about the performance gains
that concurrent writeback actually provides on these workloads and
devices...

-Dave.

-- 
Dave Chinner
[email protected]
