On 2021/7/22 11:25, Fengnan Chang wrote:
Since a cluster is the basic unit of compression, a cluster is either
compressed as a whole or not at all. Therefore we only need to check
whether the cluster is compressed at its first page; the remaining
pages in the cluster can skip the check.
Signed-off-by: Fengnan Chang <[email protected]>
---
fs/f2fs/data.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index d2cf48c5a2e4..a0099d8329f0 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2304,12 +2304,13 @@ static int f2fs_mpage_readpages(struct inode *inode,
 				if (ret)
 					goto set_error_page;
 			}
-			ret = f2fs_is_compressed_cluster(inode, page->index);
-			if (ret < 0)
-				goto set_error_page;
-			else if (!ret)
-				goto read_single_page;
-

How about truncation races with read?

Thanks,

+			if (cc.cluster_idx == NULL_CLUSTER) {
+				ret = f2fs_is_compressed_cluster(inode,
+								page->index);
+				if (ret < 0)
+					goto set_error_page;
+				else if (!ret)
+					goto read_single_page;
+			}
 			ret = f2fs_init_compress_ctx(&cc);
 			if (ret)
 				goto set_error_page;
_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel