On 2022/3/7 19:07, 常凤楠 wrote:
-----Original Message-----
From: Chao Yu <[email protected]>
Sent: Monday, March 7, 2022 6:30 PM
To: 常凤楠 <[email protected]>; Fengnan Chang
<[email protected]>; [email protected]
Cc: [email protected]
Subject: Re: [RFC PATCH] f2fs: fix compress file start atomic write may cause
data corruption
On 2022/3/7 15:54, 常凤楠 wrote:
-----Original Message-----
From: Chao Yu <[email protected]>
Sent: Monday, March 7, 2022 9:43 AM
To: Fengnan Chang <[email protected]>; [email protected]
Cc: [email protected]; 常凤楠
<[email protected]>
Subject: Re: [RFC PATCH] f2fs: fix compress file start atomic write
may cause data corruption
On 2022/3/5 12:00, Fengnan Chang wrote:
When a compressed file has blocks, start atomic write should fail; instead it
You mean f2fs_ioc_start_atomic_write() will succeed, but the compressed
flag will remain set on the inode?
Yes.
still sets the FI_ATOMIC_FILE flag, and writing a partial cluster and then
committing the atomic write will cause data corruption.
Could you please explain more about why data will be corrupted?
Step 1:
Create a compressed file, write 64K of data, and call fsync(); the blocks are
then written out as a compressed cluster.
Step 2:
ioctl(F2FS_IOC_START_ATOMIC_WRITE) --- this should fail, but does not.
Write page 0 and page 3.
ioctl(F2FS_IOC_COMMIT_ATOMIC_WRITE) --- pages 0 and 3 are written as in a
normal file.
Step 3:
Drop caches.
Read pages 0-4 --- since page 0 has a valid block address, it is read as
non-compressed, so pages 1 and 2 will come back as compressed data or zeroes.
And before commit 7eab7a696827 ("f2fs: compress: remove unneeded read when
rewrite whole cluster"), even though Step 2 was not right, the whole cluster
would be marked dirty in write_begin and rewritten as a non-compressed
cluster, so it was OK.
Ah, after 7eab7a696827 ("f2fs: compress: remove unneeded read when rewrite
whole cluster"), we expect that f2fs_write_cache_pages() will be called, and
f2fs_prepare_compress_overwrite() can read and dirty the remaining pages of
the compressed cluster.
However, the atomic commit flow won't call f2fs_write_cache_pages()...
Anyway, thanks for the explanation. Also, could you please add the above
analysis to the commit description of this patch?
Yes, I'll add more comments in next version.
And I want to know why atomic write needs to disable compressed files; I
didn't see anything special that is incompatible with compression.
I guess there is no incompatibility issue; however, since most database files
are updated frequently, write amplification due to the compression cluster
design may hurt database update performance. Is there any particular scenario
for using a compressed db file?
Thanks,
Fixes: 4c8ff7095bef ("f2fs: support data compression")
Fixes: 7eab7a696827 ("f2fs: compress: remove unneeded read when rewrite whole cluster")
Signed-off-by: Fengnan Chang <[email protected]>
---
fs/f2fs/data.c | 4 +---
fs/f2fs/file.c | 3 ++-
2 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 6b5f389ba998..5cbee4ed0982 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3358,8 +3358,7 @@ static int f2fs_write_begin(struct file *file, struct address_space *mapping,
 		int ret;
 
 		*fsdata = NULL;
-
-		if (len == PAGE_SIZE)
+		if (len == PAGE_SIZE && !(f2fs_is_atomic_file(inode)))
 			goto repeat;
 
 		ret = f2fs_prepare_compress_overwrite(inode, pagep,
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index cfdc41f87f5d..26a648ef9e1f 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -2009,7 +2009,8 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
 
 	inode_lock(inode);
 
-	f2fs_disable_compressed_file(inode);
+	if (!f2fs_disable_compressed_file(inode))
+		return -EINVAL;
 
 	if (f2fs_is_atomic_file(inode)) {
 		if (is_inode_flag_set(inode, FI_ATOMIC_REVOKE_REQUEST))
_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel