On 2021/6/21 14:49, Fengnan Chang wrote:
Hi Chao & Jaegeuk,
Any comments about this?
Thanks.
On 2021/6/10 11:28, Fengnan Chang wrote:
Currently, when overwriting a compressed file, we need to read the old
data into the page cache first and then update the pages.
But when we overwrite the whole cluster, we don't need the old data
anymore.
Commit message needs to be updated as:
When we overwrite a whole page in the cluster, we don't need to read the
original data before the write, because after write_end(), writepages()
can help to load the remaining data in that cluster.
Acked-by: Chao Yu <[email protected]>
Thanks,
So, remove the read-data step in this case. I have made some simple
changes and run tests; the results show that this can lead to
significant performance improvements, with sequential write speed up
to 2x.
This modification just checks whether the whole page is dirty, because
f2fs_prepare_compress_overwrite() will be called again when writing back
the cache.
When updating the whole cluster, cc in f2fs_prepare_compress_overwrite()
will be empty, so old data will not be read.
When updating only one page in the cluster, cc in
f2fs_prepare_compress_overwrite() will not be empty, so old data will be
read.
Signed-off-by: Fengnan Chang <[email protected]>
Signed-off-by: Chao Yu <[email protected]>
---
fs/f2fs/data.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index d4795eda12fa..9376c62e0ecc 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3307,6 +3307,9 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
 		*fsdata = NULL;
 
+		if (len == PAGE_SIZE)
+			goto repeat;
+
 		ret = f2fs_prepare_compress_overwrite(inode, pagep,
 							index, fsdata);
 		if (ret < 0) {
_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel