On Thu, May 29, 2025 at 04:44:51PM +0530, Kundan Kumar wrote:
> Number of writeback contexts
> ===========================
> The plan is to keep nr_wb_ctx at 1, ensuring the default single-threaded
> behavior. However, the current version sets the number of writeback
> contexts equal to the number of CPUs. Later we will make it configurable
> via a mount option, allowing filesystems to choose the optimal number of
> writeback contexts.

Well, the proper thing would be to figure out a good default and not
just keep things as-is, no?
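
E.g. something that scales with the number of online CPUs but is capped,
so that small systems keep the current single writeback thread and big
systems don't spin up hundreds of flusher threads.  Purely as an
illustration (user space, untested, and the cap of 8 is a number I made
up), such a heuristic could look like:

/* Sketch only: derive a default nr_wb_ctx from the online CPU count. */
#include <stdio.h>
#include <unistd.h>

#define WB_CTX_CAP	8	/* assumed cap, purely for illustration */

static long default_nr_wb_ctx(void)
{
	long cpus = sysconf(_SC_NPROCESSORS_ONLN);

	if (cpus < 1)
		cpus = 1;
	return cpus < WB_CTX_CAP ? cpus : WB_CTX_CAP;
}

int main(void)
{
	printf("default nr_wb_ctx: %ld\n", default_nr_wb_ctx());
	return 0;
}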

> IOPS and throughput
> ===================
> We see significant improvement in IOPS across several filesystems on both
> PMEM and NVMe devices.
> 
> Performance gains:
>   - On PMEM:
>       Base XFS                : 544 MiB/s
>       Parallel Writeback XFS  : 1015 MiB/s  (+86%)
>       Base EXT4               : 536 MiB/s
>       Parallel Writeback EXT4 : 1047 MiB/s  (+95%)
> 
>   - On NVMe:
>       Base XFS                : 651 MiB/s
>       Parallel Writeback XFS  : 808 MiB/s  (+24%)
>       Base EXT4               : 494 MiB/s
>       Parallel Writeback EXT4 : 797 MiB/s  (+61%)

What workload was this?

How many CPU cores did the system have, and how many AGs/BGs did the file
systems have?  What SSD/PMEM was this?  Did this change the write amp as
measured by the media writes on the NVMe SSD?
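
For the write amp I mean something like sampling the drive's media-write
counter before and after the run and dividing by the bytes the workload
wrote from the host side.  Where that counter lives is vendor specific
(an OCP or vendor extended SMART log page), so the toy sketch below just
takes both samples as plain inputs, and the unit size is an assumption
that has to match the actual drive:

/*
 * Toy calculation of write amplification from two samples of a drive's
 * media-write counter taken around a benchmark run.  All values and the
 * unit size are placeholders, not measurements.
 */
#include <stdio.h>

int main(void)
{
	/* media-write counter before/after the run, in drive units */
	unsigned long long media_before = 1000000ULL;
	unsigned long long media_after  = 1010000ULL;
	/* assumed size of one drive unit in bytes; must match the drive */
	unsigned long long unit_bytes   = 512000ULL;
	/* bytes the benchmark actually wrote from the host side */
	unsigned long long host_bytes   = 4ULL << 30;	/* 4 GiB */

	double media_bytes = (double)(media_after - media_before) * unit_bytes;

	printf("write amplification: %.2f\n", media_bytes / host_bytes);
	return 0;
}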

Also I'd be really curious to see numbers on hard drives.

> We also see that there is no increase in filesystem fragmentation.
> # of extents:
>   - On XFS (on PMEM):
>       Base XFS                : 1964
>       Parallel Writeback XFS  : 1384
> 
>   - On EXT4 (on PMEM):
>       Base EXT4               : 21
>       Parallel Writeback EXT4 : 11

How were the numbers of extents counted, given that they look so wildly
different?
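
FWIW, the way I'd expect these to be gathered is by asking the kernel for
the file's extent mapping through the FIEMAP ioctl, which is also what
filefrag(1) and "xfs_io -c fiemap" report.  A minimal sketch (untested,
and the file argument is whatever file the benchmark wrote):

/*
 * Count the extents of a file via FS_IOC_FIEMAP.  Calling it with
 * fm_extent_count == 0 makes the kernel return only the total number of
 * mapped extents in fm_mapped_extents, without copying any extent data.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>

int main(int argc, char **argv)
{
	struct fiemap fm;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	memset(&fm, 0, sizeof(fm));
	fm.fm_start = 0;
	fm.fm_length = FIEMAP_MAX_OFFSET;
	fm.fm_flags = FIEMAP_FLAG_SYNC;	/* flush dirty data first */
	fm.fm_extent_count = 0;		/* only ask for the count */

	if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
		perror("FS_IOC_FIEMAP");
		return 1;
	}

	printf("%s: %u extents\n", argv[1], fm.fm_mapped_extents);
	return 0;
}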


