On 10/21/2025 5:41 PM, Jan Kara wrote:
> On Tue 21-10-25 16:06:22, Kundan Kumar wrote:
>> Previous fragmentation results were taken with randwrite. I took
>> fresh data for sequential IO; here are the results. The number of
>> extents is much lower for seq IO:
>>     A) Workload 6 files each 1G in single directory(AG)   - numjobs = 1
>>           Base XFS                : 1
>>           Parallel Writeback XFS  : 1
>>
>>     B) Workload 12 files each of 1G to 12 directories(AGs)- numjobs = 12
>>           Base XFS                : 4
>>           Parallel Writeback XFS  : 3
>>
>>     C) Workload 6 files each of 20G to 6 directories(AGs) - numjobs = 6
>>           Base XFS                : 4
>>           Parallel Writeback XFS  : 4
> 
> Thanks for sharing details! I'm curious: how big a difference in throughput
> did you see between normal and parallel writeback with sequential writes?
> 
>                                                               Honza

Thank you for the review, Jan.

I found that throughput for sequential writes on an NVMe SSD was
similar for normal and parallel writeback, because normal writeback
alone already saturates the device. To observe the impact of parallel
writeback on sequential writes, I conducted additional tests using a
PMEM device. The throughput and fragmentation results are as follows
(a rough fio sketch of workload C is included after the system
details):

A) Workload: 6 files of 1G each in a single directory (AG)   - numjobs = 1
     Base XFS           : num extents: 1, BW: 6606 MiB/s
     Parallel writeback : num extents: 1, BW: 6729 MiB/s (no change)

B) Workload: 12 files of 1G each, one per directory (AG)     - numjobs = 12
     Base XFS           : num extents: 4, BW: 4486 MiB/s
     Parallel writeback : num extents: 5, BW: 12.9 GiB/s (+187%)

C) Workload: 6 files of 20G each, one per directory (AG)     - numjobs = 6
     Base XFS           : num extents: 7, BW: 3518 MiB/s
     Parallel writeback : num extents: 6, BW: 6448 MiB/s (+83%)

Number of CPUs = 128
System RAM = 128G
PMEM device size = 170G
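
For anyone wanting to reproduce this, a minimal fio job along the
lines of workload C could look like the sketch below. The block size,
end_fsync setting and mount points are illustrative placeholders
rather than the exact settings from my runs; adjust them to your
setup.

; workload-C.fio - rough sketch, not the exact job file used
[global]
; sequential buffered writes, so the writeback path is exercised
rw=write
bs=1M
direct=0
; flush at the end so writeback cost is included in the numbers
end_fsync=1
; one 20G file per job, 6 jobs in total
size=20g
numjobs=6
; fio distributes colon-separated directories across the numjobs
; clones, so each job writes its file into its own directory, and
; XFS tends to place separate directories in separate AGs
directory=/mnt/xfs/d1:/mnt/xfs/d2:/mnt/xfs/d3:/mnt/xfs/d4:/mnt/xfs/d5:/mnt/xfs/d6

[seq-write-per-dir]

Per-file extent counts can then be read with xfs_bmap -v or filefrag
on the resulting files.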

