On 2021/10/11 20:03, Fengnan Chang wrote:
When overwriting only the first block of a cluster, since the cluster is not
full, f2fs_write_multi_pages() falls back to f2fs_write_raw_pages(), and the
whole cluster becomes uncompressed even though the data is compressible.
This can reduce random write benchmark scores considerably.
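
For context, a simplified paraphrase of how that fallback looks in
f2fs_write_multi_pages() (fs/f2fs/compress.c; error handling, locking and
accounting trimmed, not the exact source). cluster_may_compress() requires,
among other things, f2fs_cluster_is_full(), so a partially dirtied cluster
takes the raw path:

/* simplified paraphrase, not the exact kernel code */
int f2fs_write_multi_pages(struct compress_ctx *cc, int *submitted,
			struct writeback_control *wbc,
			enum iostat_type io_type)
{
	*submitted = 0;
	if (cluster_may_compress(cc)) {
		if (!f2fs_compress_pages(cc) &&
		    !f2fs_write_compressed_pages(cc, submitted, wbc, io_type))
			return 0;	/* cluster written compressed */
	}
	/* not a full cluster (or compression failed): write it uncompressed */
	return f2fs_write_raw_pages(cc, submitted, wbc, io_type);
}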

root# dd if=/dev/zero of=./fio-test bs=1M count=1

root# sync

root# echo 3 > /proc/sys/vm/drop_caches

root# f2fs_io get_cblocks ./fio-test

root# dd if=/dev/zero of=./fio-test bs=4K count=1 oflag=direct conv=notrunc

w/o patch:
root# f2fs_io get_cblocks ./fio-test
189

w/ patch:
root# f2fs_io get_cblocks ./fio-test
192
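
For reference, those numbers appear consistent with get_cblocks reporting the
blocks saved by compression for the inode, assuming the default cluster size
of 4 pages: a zero-filled 1MiB file is 64 clusters, each compressing to a
single block and saving 3, and without the patch the overwritten cluster is
left uncompressed. A back-of-the-envelope check (illustrative only, not part
of the patch):

#include <stdio.h>

int main(void)
{
	const int blocks   = (1024 * 1024) / 4096;	/* 1MiB file, 4KiB blocks: 256 */
	const int cluster  = 4;				/* assumed default cluster size */
	const int clusters = blocks / cluster;		/* 64 clusters */
	const int saved    = cluster - 1;		/* all-zero cluster -> 1 block */

	printf("every cluster compressed: %d\n", clusters * saved);        /* 192 */
	printf("one cluster written raw:  %d\n", (clusters - 1) * saved);  /* 189 */
	return 0;
}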

Signed-off-by: Fengnan Chang <[email protected]>
---
  fs/f2fs/data.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index f4fd6c246c9a..267db5d3993e 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3025,6 +3025,9 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
                                                                1)) {
                                                retry = 1;
                                                break;
+                                       } else if (ret2 && nr_pages - i < cc.cluster_size) {

What about:
i = 0, nr_pages = 4,
pvec.pages[0].index = 0
pvec.pages[1].index = 4
pvec.pages[2].index = 5
pvec.pages[3].index = 6
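
Here nr_pages - i equals cc.cluster_size (assuming the default cluster size
of 4), so the proposed check would not retry, yet the four pages span two
clusters and neither one is completely present in the pvec: page 0 is alone
in its cluster, and page 7 is missing from the next. Counting the remaining
pages alone is not enough; the indices also need to be contiguous across the
cluster. A minimal userspace sketch of that idea (hypothetical helper, not
the kernel code):

#include <stdbool.h>
#include <stdio.h>

#define CLUSTER_SIZE 4	/* assumed default f2fs cluster size: 2^2 pages */

/* does pvec[i..] hold every page of the cluster that starts at pvec[i]? */
static bool cluster_fully_in_pvec(const unsigned long *idx, int i, int nr_pages)
{
	if (nr_pages - i < CLUSTER_SIZE)
		return false;
	for (int k = 1; k < CLUSTER_SIZE; k++)
		if (idx[i + k] != idx[i] + k)
			return false;
	return true;
}

int main(void)
{
	unsigned long idx[] = { 0, 4, 5, 6 };	/* the case above */
	int i = 0, nr_pages = 4;

	printf("count-only check says enough pages:      %d\n",
	       !(nr_pages - i < CLUSTER_SIZE));			/* prints 1 */
	printf("contiguity check says cluster complete:  %d\n",
	       cluster_fully_in_pvec(idx, i, nr_pages));	/* prints 0 */
	return 0;
}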

Thanks,

+                                               retry = 1;
+                                               break;
                                        }
                                } else {
                                        goto lock_page;


