On 2021/11/9 9:59, Fengnan Chang wrote:
> -----Original Message-----
> From: changfeng...@vivo.com <changfeng...@vivo.com> On Behalf Of Chao Yu
> Sent: Monday, November 8, 2021 10:21 PM
> To: Fengnan Chang <changfeng...@vivo.com>; jaeg...@kernel.org
> Cc: linux-f2fs-devel@lists.sourceforge.net
> Subject: Re: Do we need serial io for compress file?
>> On 2021/11/8 11:54, Fengnan Chang wrote:
>>> In my test, serialized IO for compressed files makes multithreaded
>>> small-write performance drop a lot.
>>> I'm trying to figure out why we need __should_serialize_io. IMO, we use
>>> __should_serialize_io to avoid deadlock or to improve sequential
>>> performance, but I don't understand why we should do this for
>> It was introduced to avoid fragmentation of file blocks.
> So, for small writes on a compressed file, is this still necessary? I think
> we should treat compressed files as regular files.
Any real scenario there? Let me know if I missed any cases; as far as I can
see, most compressible files are not small...
>>> compressed file. In my test, if we just remove this, write same file
>>> in multithread will have problem, but parallel write different files
>>> in multithread
>> What do you mean by "write same file in multithread will have problem"?
> If I just remove the compress-file check in __should_serialize_io():
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index f4fd6c246c9a..7bd429b46429 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -3165,8 +3165,8 @@ static inline bool __should_serialize_io(struct inode *inode,
>  	if (IS_NOQUOTA(inode))
>  		return false;
> -	if (f2fs_need_compress_data(inode))
> -		return true;
> +	//if (f2fs_need_compress_data(inode))
> +	//	return true;
>  	if (wbc->sync_mode != WB_SYNC_ALL)
>  		return true;
>  	if (get_dirty_pages(inode) >= SM_I(F2FS_I_SB(inode))->min_seq_blocks)
>
> and use fio to start multiple threads writing to the same file, fio will hang.
Is there a potential hung-task issue? Did you get a stack backtrace log?
If there is one, we need to figure out the root cause.

Thanks,
> fio.conf:
>
> [global]
> direct=1
> numjobs=8
> time_based
> runtime=30
> ioengine=sync
> iodepth=16
> buffer_pattern="ZZZZ"
> fsync=1
>
> [file0]
> name=fio-rand-RW
> filename=fio-rand-RW
> rw=rw
> rwmixread=60
> rwmixwrite=40
> bs=1M
> size=64M
>
> [file1]
> name=fio-rand-RW
> filename=fio-rand-RW
> rw=randrw
> rwmixread=60
> rwmixwrite=40
> bs=4K
> size=64M
>
> Thanks,
>>> is ok. So I think maybe we should use another lock to allow write
>>> different files in multithread.
_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel