On 2021/5/25 20:05, [email protected] wrote:
Yes, I only check whether the whole page is dirty, because on a write-cache
write f2fs_prepare_compress_overwrite will be called again.
When the whole cluster is updated, cc in prepare_compress_overwrite will be
empty, so no bio will be submitted.
When only one page in the cluster is updated, cc in prepare_compress_overwrite
will not be empty, so a bio will be submitted.
This is my thinking; not sure if I've missed anything.
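The distinction above (rewriting a whole compressed cluster vs. updating a single page inside it) can be expressed as a range check at cluster granularity rather than the per-page `len == PAGE_SIZE` test. This is only a standalone sketch, not the actual f2fs code: the helper name write_covers_cluster is hypothetical, and PAGE_SIZE / the 4-pages-per-cluster figure are assumptions for illustration.

```c
#include <stdbool.h>

#define PAGE_SIZE 4096ULL	/* assumed page size for this sketch */

/*
 * Hypothetical helper, not actual f2fs code: a write of [pos, pos + len)
 * may skip reading the old compressed data only when it overwrites the
 * cluster completely. cluster_size is in pages, as in f2fs.
 */
static bool write_covers_cluster(unsigned long long pos,
				 unsigned long long len,
				 unsigned int cluster_size)
{
	unsigned long long cluster_bytes =
		(unsigned long long)cluster_size * PAGE_SIZE;

	/* must start on a cluster boundary and span whole clusters */
	return len > 0 &&
	       pos % cluster_bytes == 0 &&
	       len % cluster_bytes == 0;
}
```

With a 4-page (16 KiB) cluster, a 16 KiB write at offset 0 passes, while a single-page write or a misaligned 16 KiB write does not, which is exactly the case the `len == PAGE_SIZE` check cannot distinguish.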
Well, it looks more like what we did for the mmap() write case.
So I guess we can change it as below:
To Jaegeuk, comments?
---
fs/f2fs/data.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 2ea887a114c8..723c59df51b7 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3323,7 +3323,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
}
#ifdef CONFIG_F2FS_FS_COMPRESSION
- if (f2fs_compressed_file(inode)) {
+ if (f2fs_compressed_file(inode) && len != PAGE_SIZE) {
int ret;
*fsdata = NULL;
--
2.29.2
Thanks,
-----Original Message-----
From: Chao Yu <[email protected]>
Sent: 2021-05-24 19:39
To: Fengnan Chang <[email protected]>; [email protected];
[email protected]
主题: Re: [f2fs-dev] [RFC PATCH] f2fs: compress: remove unneeded read when
rewrite whole cluster
On 2021/5/18 20:51, Fengnan Chang wrote:
For now, when overwriting a compressed file, we need to read the old
data into the page cache first and then update the pages.
But when we overwrite the whole cluster, we don't need the old data
anymore.
I only see you checking that the whole page is dirty, as below, rather than
checking that the whole cluster is dirty during write().
Thanks,
+ if (len == PAGE_SIZE)
+ return 0;
/* compressed case */
prealloc = (ret < cc->cluster_size);
_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel