Deleting large files is time-consuming, and much of that time
is spent in f2fs_invalidate_blocks(), in the
down_write(&sit_info->sentry_lock) / up_write() pair.

If some blocks are contiguous, we can process them together.
This reduces the number of down_write() and up_write() calls,
thereby speeding up truncation overall.

Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt

Time comparison of rm:
original        optimized               time reduced
7.17s           3.27s                   54.39%

----
v4:
- introduce update_sit_entry_for_alloc().
- [patch 2,3,4 / 4] have no changes compared to v3.

Yi Sun (4):
  f2fs: introduce update_sit_entry_for_release/alloc()
  f2fs: update_sit_entry_for_release() supports consecutive blocks.
  f2fs: add parameter @len to f2fs_invalidate_blocks()
  f2fs: Optimize f2fs_truncate_data_blocks_range()

 fs/f2fs/compress.c |   4 +-
 fs/f2fs/f2fs.h     |   3 +-
 fs/f2fs/file.c     |  78 +++++++++++++++++--
 fs/f2fs/node.c     |   4 +-
 fs/f2fs/segment.c  | 185 +++++++++++++++++++++++++++++----------------
 5 files changed, 198 insertions(+), 76 deletions(-)

-- 
2.25.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel