On 6/2/2025 7:49 PM, Christoph Hellwig wrote:
> On Thu, May 29, 2025 at 04:44:51PM +0530, Kundan Kumar wrote:
> Well, the proper thing would be to figure out a good default and not
> just keep things as-is, no?

We observed that some filesystems, such as Btrfs, don't benefit from
this infrastructure because of their distinct writeback architecture.
To preserve current behavior and avoid unintended changes for such
filesystems, we kept nr_wb_ctx=1 as the default. Filesystems that can
take advantage of parallel writeback (xfs, ext4) can opt in via a mount
option. We also wanted to reduce risk during the initial integration,
hence the opt-in.
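
As an illustration (hypothetical values; the exact mount option name and
accepted range are whatever this series defines), opting in would look
something like:

# assumed example: enable 4 writeback contexts on an xfs mount
mount -t xfs -o nr_wb_ctx=4 /dev/pmem0 /mnt
# default, equivalent to today's behavior: a single writeback context
mount -t xfs /dev/pmem0 /mnt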

> 
>> IOPS and throughput
>> ===================
>> We see significant improvement in IOPS across several filesystems on both
>> PMEM and NVMe devices.
>>
>> Performance gains:
>>    - On PMEM:
>>      Base XFS                : 544 MiB/s
>>      Parallel Writeback XFS  : 1015 MiB/s  (+86%)
>>      Base EXT4               : 536 MiB/s
>>      Parallel Writeback EXT4 : 1047 MiB/s  (+95%)
>>
>>    - On NVMe:
>>      Base XFS                : 651 MiB/s
>>      Parallel Writeback XFS  : 808 MiB/s  (+24%)
>>      Base EXT4               : 494 MiB/s
>>      Parallel Writeback EXT4 : 797 MiB/s  (+61%)
> 
> What workload was this?

Number of CPUs = 12
System RAM = 16G
XFS AG count = 4
EXT4 BG count = 28616
Devices: 6G PMEM and a 3.84 TB NVMe SSD

fio command line:
fio --directory=/mnt --name=test --bs=4k --iodepth=1024 --rw=randwrite
--ioengine=io_uring --time_based=1 --runtime=60 --numjobs=12 --size=450M
--direct=0 --eta-interval=1 --eta-newline=1 --group_reporting

We will measure the write amplification and share the numbers.
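
A rough device-side sketch of that measurement (assuming nvme-cli and
/dev/nvme0n1; the field label varies between nvme-cli versions, and the
NVMe spec defines "Data Units Written" in units of 1000 * 512 bytes):

# read media writes before and after the fio run, then compare against
# the bytes fio reports as submitted
before=$(nvme smart-log /dev/nvme0n1 | awk -F: 'tolower($0) ~ /data.units.written/ {gsub(/[ ,]/, "", $2); print $2}')
# ... run the fio job ...
after=$(nvme smart-log /dev/nvme0n1 | awk -F: 'tolower($0) ~ /data.units.written/ {gsub(/[ ,]/, "", $2); print $2}')
echo "media bytes written: $(( (after - before) * 512000 ))"
# write amp ~= media bytes written / bytes written by fio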

> 
> How many CPU cores did the system have, how many AGs/BGs did the file
> systems have?   What SSD/Pmem was this?  Did this change the write
> amp as measure by the media writes on the NVMe SSD?
> 
> Also I'd be really curious to see numbers on hard drives.
> 
>> We also see that there is no increase in filesystem fragmentation
>> # of extents:
>>    - On XFS (on PMEM):
>>      Base XFS                : 1964
>>      Parallel Writeback XFS  : 1384
>>
>>    - On EXT4 (on PMEM):
>>      Base EXT4               : 21
>>      Parallel Writeback EXT4 : 11
> 
> How were the numbers of extents counted, given that they look so wildly
> different?
> 
> 

Issued a 1G random write using fio with fallocate=none, then measured
the number of extents after a delay of 30 secs:
fio --filename=/mnt/testfile --name=test --bs=4k --iodepth=1024
--rw=randwrite --ioengine=io_uring --fallocate=none --numjobs=1
--size=1G --direct=0 --eta-interval=1 --eta-newline=1 --group_reporting

For xfs we used this command:
xfs_io -c "stat" /mnt/testfile

And for ext4 we used this:
filefrag /mnt/testfile
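
The extent count is the fsxattr.nextents value in the xfs_io stat output
and the "N extents found" summary line from filefrag; one way to pull
just the numbers (same paths as above) is:

xfs_io -c "stat" /mnt/testfile | grep fsxattr.nextents   # fsxattr.nextents = N
filefrag /mnt/testfile | awk '{print $2}'                # N from "N extents found"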

