On Wed, Jun 04, 2025 at 02:52:34PM +0530, Kundan Kumar wrote:
> > > > For xfs used this command:
> > > > xfs_io -c "stat" /mnt/testfile
> > > > And for ext4 used this:
> > > > filefrag /mnt/testfile
> > >
> > > filefrag merges contiguous extents, and only counts up for
> > > discontiguous mappings, while fsxattr.nextents counts all extents
> > > even if they are contiguous.  So you probably want to use filefrag
> > > for both cases.
> >
> > Got it -- thanks for the clarification. We'll switch to using filefrag
> > and will share updated extent count numbers accordingly.
>
> Using filefrag, we recorded extent counts on xfs and ext4 at three
> stages:
> a. Just after a 1G random write,
> b. After a 30-second wait,
> c. After unmounting and remounting the filesystem.
>
> xfs
> Base
> a. 6251   b. 2526   c. 2526
> Parallel writeback
> a. 6183   b. 2326   c. 2326
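For reference, the difference between the two counters can be seen directly
on the same file. A minimal sketch, reusing the path from the commands
quoted above:

  filefrag /mnt/testfile
      # merged count: a run of physically contiguous extents counts as one
  xfs_io -c "stat" /mnt/testfile | grep fsxattr.nextents
      # fsxattr.nextents: every mapping record, contiguous or not (xfs only)

The gap between the two numbers is roughly how many of the recorded extents
sit physically adjacent to their neighbours.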
Interesting that the mapping record count goes down...

I wonder, you said the xfs filesystem has 4 AGs and 12 cores, so I guess
wb_ctx_arr[] is 12?  I wonder, do you see a knee point in writeback
throughput when the # of wb contexts exceeds the AG count?

Though I guess for the (hopefully common) case of pure overwrites, we
don't have to do any metadata updates, so we wouldn't really hit a scaling
limit due to ag count or log contention or whatever.  Does that square
with what you see?

> ext4
> Base
> a. 7080   b. 7080   c. 11
> Parallel writeback
> a. 5961   b. 5961   c. 11

Hum, that's particularly ... interesting.  I wonder what the mapping count
behaviors are when you turn off delayed allocation?

--D

> Used the same fio commandline as earlier:
> fio --filename=/mnt/testfile --name=test --bs=4k --iodepth=1024
> --rw=randwrite --ioengine=io_uring --fallocate=none --numjobs=1
> --size=1G --direct=0 --eta-interval=1 --eta-newline=1
> --group_reporting
>
> filefrag command:
> filefrag /mnt/testfile
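For anyone wanting to reproduce the three-stage measurement above, a rough
sketch follows. The block device and mount point are placeholders, the fio
job is the one quoted in this thread, and the commented nodelalloc option
is one way to rerun the ext4 case with delayed allocation turned off, per
the question above.

  DEV=/dev/nvme0n1                  # placeholder device
  MNT=/mnt                          # placeholder mount point

  mount $DEV $MNT                   # ext4 without delalloc would instead be:
                                    #   mount -o nodelalloc $DEV $MNT
  xfs_info $MNT | grep agcount      # xfs only: confirm the AG count (4 here)

  fio --filename=$MNT/testfile --name=test --bs=4k --iodepth=1024 \
      --rw=randwrite --ioengine=io_uring --fallocate=none --numjobs=1 \
      --size=1G --direct=0 --eta-interval=1 --eta-newline=1 \
      --group_reporting

  filefrag $MNT/testfile            # a. just after the 1G random write
  sleep 30
  filefrag $MNT/testfile            # b. after a 30-second wait
  umount $MNT
  mount $DEV $MNT
  filefrag $MNT/testfile            # c. after unmount and remount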