Hi Chao,
Since cc.cluster_idx will only be set in f2fs_compress_ctx_add_page(),
for a non-compressed cluster cc.cluster_idx should always stay
NULL_CLUSTER. That means the handling of non-compressed clusters is the
same as before.
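
To illustrate, here is a minimal userspace model of that lifecycle
(illustrative only, not the real kernel code; the names mirror
fs/f2fs/compress.c but everything is simplified):

#include <stdio.h>

#define NULL_CLUSTER ((unsigned long)-1)

struct compress_ctx {
        int log_cluster_size;
        unsigned long cluster_idx;      /* stays NULL_CLUSTER until a page is added */
};

/* Modeled after f2fs_compress_ctx_add_page(): the only place that
 * assigns a real cluster index. */
static void compress_ctx_add_page(struct compress_ctx *cc, unsigned long index)
{
        cc->cluster_idx = index >> cc->log_cluster_size;
}

int main(void)
{
        struct compress_ctx cc = {
                .log_cluster_size = 2,
                .cluster_idx = NULL_CLUSTER,
        };

        /* Non-compressed cluster: add_page() is never called, so the
         * cc.cluster_idx == NULL_CLUSTER check fires for every page and
         * we take the old per-page path. */
        if (cc.cluster_idx == NULL_CLUSTER)
                printf("non-compressed: call f2fs_is_compressed_cluster()\n");

        compress_ctx_add_page(&cc, 4);  /* first page of cluster 1 */
        if (cc.cluster_idx != NULL_CLUSTER)
                printf("compressed: in cluster %lu, skip the re-check\n",
                       cc.cluster_idx);
        return 0;
}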
On 2021/8/6 8:57, Chao Yu wrote:
On 2021/7/23 11:18, Fengnan Chang wrote:
f2fs_read_multi_pages() will handle it; every truncated page will be
zeroed out, whether the truncation covers part of the cluster or all of
it.
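
Roughly what I mean, as a simplified standalone model of the zero-out
in f2fs_read_multi_pages() (the PAGE_SIZE/CLUSTER_SIZE values and the
page array here are stand-ins, not the kernel types):

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE       4096
#define CLUSTER_SIZE    4

/* Any cluster page at or beyond EOF is zeroed, whether truncation cut
 * the cluster partially or entirely (zero_user_segment() in the real
 * code). */
static void read_multi_pages(char pages[][PAGE_SIZE],
                             unsigned long first_index,
                             unsigned long last_block_in_file)
{
        for (int i = 0; i < CLUSTER_SIZE; i++) {
                unsigned long index = first_index + i;

                if (index >= last_block_in_file) {
                        memset(pages[i], 0, PAGE_SIZE);
                        printf("page %lu: zeroed (truncated)\n", index);
                } else {
                        printf("page %lu: decompress into it\n", index);
                }
        }
}

int main(void)
{
        static char pages[CLUSTER_SIZE][PAGE_SIZE];

        /* Cluster covers pages 4..7, file now ends at block 6: pages 6
         * and 7 get zeroed, 4 and 5 are read normally. */
        read_multi_pages(pages, 4, 6);
        return 0;
}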
On 2021/7/22 21:47, Chao Yu wrote:
On 2021/7/22 11:25, Fengnan Chang wrote:
Since a cluster is the basic unit of compression, the whole cluster is
either compressed or not, so we only need to check whether the cluster
is compressed for its first page; the remaining pages in the cluster
can skip the check (see the sketch after the diff below).
Signed-off-by: Fengnan Chang <[email protected]>
---
fs/f2fs/data.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index d2cf48c5a2e4..a0099d8329f0 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2304,12 +2304,13 @@ static int f2fs_mpage_readpages(struct inode *inode,
if (ret)
goto set_error_page;
}
- ret = f2fs_is_compressed_cluster(inode, page->index);
- if (ret < 0)
- goto set_error_page;
- else if (!ret)
- goto read_single_page;
How about truncation races with read?
Looking into this again, it looks fine: truncation tries to grab all of
the cluster's page locks, but readahead already holds some/all of them,
so there is no such race condition.
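
To spell out the lock interplay, a small pthread model (one mutex per
page lock; purely illustrative, compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define CLUSTER_SIZE 4

static pthread_mutex_t page_lock[CLUSTER_SIZE] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/* Truncation must take every page lock in the cluster, so it blocks
 * until readahead drops the locks it already holds. */
static void *truncate_thread(void *arg)
{
        (void)arg;
        for (int i = 0; i < CLUSTER_SIZE; i++)
                pthread_mutex_lock(&page_lock[i]);
        printf("truncate: got all cluster page locks\n");
        for (int i = 0; i < CLUSTER_SIZE; i++)
                pthread_mutex_unlock(&page_lock[i]);
        return NULL;
}

int main(void)
{
        pthread_t t;

        /* Readahead already holds some of the cluster's pages locked. */
        pthread_mutex_lock(&page_lock[0]);
        pthread_mutex_lock(&page_lock[1]);

        pthread_create(&t, NULL, truncate_thread, NULL);
        sleep(1);       /* truncate is blocked on page 0 by now */

        printf("readahead: done, unlocking\n");
        pthread_mutex_unlock(&page_lock[0]);
        pthread_mutex_unlock(&page_lock[1]);

        pthread_join(t, NULL);
        return 0;
}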
So the compressed cluster case looks fine to me, but we still need to
call f2fs_is_compressed_cluster() every time for a non-compressed
cluster; could you please check that as well?
Thanks,
-
+ if (cc.cluster_idx == NULL_CLUSTER) {
+ ret = f2fs_is_compressed_cluster(inode, page->index);
+ if (ret < 0)
+ goto set_error_page;
+ else if (!ret)
+ goto read_single_page;
+ }
ret = f2fs_init_compress_ctx(&cc);
if (ret)
goto set_error_page;
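
For reference, the per-cluster idea from the commit message above,
restated as a tiny userspace model (is_compressed_cluster() is a
stand-in for f2fs_is_compressed_cluster(); the reset at the end of each
cluster loosely mimics f2fs_destroy_compress_ctx() clearing
cc.cluster_idx):

#include <stdbool.h>
#include <stdio.h>

#define CLUSTER_SIZE    4
#define NULL_CLUSTER    ((unsigned long)-1)

/* Stand-in: pretend only cluster 1 is compressed. */
static bool is_compressed_cluster(unsigned long index)
{
        return index / CLUSTER_SIZE == 1;
}

int main(void)
{
        unsigned long cluster_idx = NULL_CLUSTER;

        for (unsigned long index = 0; index < 12; index++) {
                /* As in the patch: only re-check while no compressed
                 * cluster is in flight. */
                if (cluster_idx == NULL_CLUSTER) {
                        if (!is_compressed_cluster(index)) {
                                printf("page %lu: read_single_page\n", index);
                                continue;
                        }
                        cluster_idx = index / CLUSTER_SIZE;
                }
                printf("page %lu: add to compress ctx (cluster %lu)\n",
                       index, cluster_idx);

                /* Cluster complete: ctx is destroyed, index resets. */
                if ((index + 1) % CLUSTER_SIZE == 0)
                        cluster_idx = NULL_CLUSTER;
        }
        return 0;
}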