On 2020/7/6 15:34, lampahome wrote:
> On Mon, Jul 6, 2020 at 3:29 PM, Chao Yu <[email protected]> wrote:
>>
>> On 2020/7/6 15:10, lampahome wrote:
>>> I tried to test performance with f2fs and created many fio jobs to test it.
>>>
>>> I found that when the number of fio jobs reaches a certain point (e.g. 25),
>>> the performance degrades out of proportion compared to a small number of jobs.
>>>
>>> EX:
>>> 5 fio: bandwidth 300MB/s
>>> 10 fio: bandwidth 150MB/s
>>> 25 fio: bandwidth 30MB/s
>>
>> What's your buffer size for each flush? Could you share the whole command?
>>
> Each fio submits blocksize=4k, direct=0, 1GB file.

So the buffer size is 4k? What I meant is: how much data does fio write
before triggering fsync?

I suspect __should_serialize_io() may serialize all fio threads if your
buffer size is larger than the size of one section (2MB by default).
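For example (just a sketch, not a recommendation for your exact setup; the
mount point and job count are placeholders), with --fsync=256 each thread
syncs after every 256 x 4k writes, i.e. 1MB of dirty data per flush, which
stays below one 2MB section:

    fio --name=f2fs-test --directory=/mnt/f2fs --rw=write --bs=4k \
        --size=1g --direct=0 --numjobs=25 --fsync=256

    # --fsync=256: fsync after every 256 x 4k buffered writes, so about
    # 1MB of dirty data per flush, below the default 2MB section size.

If instead each job only syncs at the end of its 1GB file, an inode can
accumulate far more than one section of dirty pages by the time writeback
runs.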
> When I grep GC and CP in the f2fs status, it shows GC and CP ran some times.
> But my disk has 128GB and each fio only writes a 1GB file.
> Why does this behavior trigger GC and CP?

Can you share the status output from before and after the test?

There are two kinds of GC, BGGC and FGGC: BGGC runs periodically, while FGGC
runs when there are almost no free segments. The CP trigger condition is
complicated; commonly it is triggered via syncfs.
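Something like this would do (a rough sketch, assuming debugfs is mounted at
/sys/kernel/debug; the file names are arbitrary):

    cat /sys/kernel/debug/f2fs/status > status_before.txt
    # ... run the fio workload ...
    cat /sys/kernel/debug/f2fs/status > status_after.txt
    grep -E "GC|CP" status_before.txt status_after.txt

Comparing the two snapshots shows how many GC and CP passes the test itself
triggered, rather than whatever ran before it.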
