This reverts commit c7f114d864ac91515bb07ac271e9824a20f5ed95.
Concurrent f2fs_stop_gc_thread() calls are now serialized by a
dedicated lock, making the additional s_umount lock protection
unnecessary. Therefore, revert this patch.
Signed-off-by: Long Li
---
fs/f2fs/f2
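The serialization described above can be sketched in userspace as follows. This is a minimal model, not the actual f2fs code: the struct, the lock name, and the use of a heap pointer to stand in for the kthread are all illustrative.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Model of the fix: a dedicated lock serializes stops, so a second
 * concurrent caller observes gc_thread == NULL instead of operating
 * on an already-freed thread. */
struct sbi_model {
	pthread_mutex_t gc_stop_lock;	/* the dedicated lock */
	void *gc_thread;		/* NULL once stopped */
};

static int stop_gc_thread(struct sbi_model *sbi)
{
	int stopped = 0;

	pthread_mutex_lock(&sbi->gc_stop_lock);
	if (sbi->gc_thread) {		/* only the first caller wins */
		free(sbi->gc_thread);
		sbi->gc_thread = NULL;
		stopped = 1;
	}
	pthread_mutex_unlock(&sbi->gc_stop_lock);
	return stopped;
}
```

Without the lock, two shutdown paths could both see a non-NULL gc_thread and race into the teardown, which matches the fault reported below.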
In my test case, concurrent calls to f2fs shutdown report the following
stack trace:
Oops: general protection fault, probably for non-canonical address
0xc6cfff63bb5513fc: [#1] PREEMPT SMP PTI
CPU: 0 UID: 0 PID: 678 Comm: f2fs_rep_shutdo Not tainted
6.12.0-rc5-next-20241029-g6fb2fa9805c5-
If a user passes the file size as the "length" parameter to fiemap,
but that size is not block-size aligned, fiemap reports 2 extents
even though the whole file is contiguous on disk, as in the
following results. Please note that this f2fs_io has been modified
for testing.
./f2fs_io f
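The block-coverage arithmetic behind the fix can be sketched like this. It is a standalone model, not the f2fs implementation; the 4 KiB block size is an assumption.

```c
#include <assert.h>
#include <stdint.h>

#define F2FS_BLKSIZE 4096ULL	/* assumed block size */

/* A fiemap request must cover every block touched by [start, start+len),
 * including a final partial block; splitting the partial block off is
 * what produced the spurious second extent. */
static uint64_t blks_covered(uint64_t start, uint64_t len)
{
	uint64_t first = start / F2FS_BLKSIZE;
	uint64_t last = (start + len - 1) / F2FS_BLKSIZE;

	return last - first + 1;
}
```

With an unaligned length such as 4097 bytes, the range spans 2 blocks, and handling the tail block separately is what split a contiguous file into 2 reported segments.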
f2fs_is_atomic_file(inode) is checked in f2fs_defragment_range,
so remove the redundant checking in f2fs_ioc_defragment.
Signed-off-by: Zhiguo Niu
---
fs/f2fs/file.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 75a8b22..3e22f6e 100644
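The shape of the cleanup can be modeled as below. The function names mirror the patch, but the bodies, the is_atomic flag, and the -1 return standing in for -EINVAL are all illustrative.

```c
#include <assert.h>

static int checks_run;

/* The callee validates, so it keeps the only atomic-file check. */
static int defragment_range(int is_atomic)
{
	checks_run++;			/* f2fs_is_atomic_file() check */
	return is_atomic ? -1 : 0;	/* stand-in for -EINVAL */
}

static int ioc_defragment(int is_atomic)
{
	/* redundant f2fs_is_atomic_file() check removed from here */
	return defragment_range(is_atomic);
}
```

Each call path now runs the check exactly once, with no behavior change for atomic files.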
On Thu, Oct 31, 2024 at 1:00 AM Daeho Jeong wrote:
>
> On Wed, Oct 30, 2024 at 3:35 AM Yi Sun wrote:
> >
> > New function can process some consecutive blocks at a time.
> >
> > Function f2fs_invalidate_blocks()->down_write() and up_write()
> > are very time-consuming, so if f2fs_invalidate_blocks
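The lock-amortization argument in the thread above can be sketched as follows. This userspace model only counts lock round-trips; the counter and function names are illustrative, not the f2fs code.

```c
#include <assert.h>
#include <stddef.h>

static int lock_trips;

/* Old path: one down_write()/up_write() of sentry_lock per block. */
static void invalidate_one(void)
{
	lock_trips++;
}

/* New path: one lock round-trip covers a whole consecutive run. */
static void invalidate_range(size_t len)
{
	lock_trips++;
	(void)len;	/* ...update len sit entries under the lock... */
}

static int trips_old(size_t n)
{
	lock_trips = 0;
	for (size_t i = 0; i < n; i++)
		invalidate_one();
	return lock_trips;
}

static int trips_new(size_t n)
{
	lock_trips = 0;
	invalidate_range(n);
	return lock_trips;
}
```

For a run of 1024 consecutive blocks the old path takes 1024 lock round-trips and the new path takes 1, which is where the time saving for large deletions comes from.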
When using update_sit_entry() to release consecutive blocks,
ensure that all of the consecutive blocks belong to the same
segment, because after update_sit_entry_for_realese() returns,
@segno is still used in update_sit_entry().
Signed-off-by: Yi Sun
---
fs/f2fs/segment.c | 11 +++
1 file changed, 11
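The same-segment constraint can be expressed as a clamp on the run length. This is a standalone sketch; the 512 blocks-per-segment value is the usual f2fs default, assumed here, and the helper name is hypothetical.

```c
#include <assert.h>
#include <stdint.h>

#define BLKS_PER_SEG 512U	/* assumed f2fs default segment size */

/* A run handed to the range helper must stay inside one segment,
 * since the sit entry update is keyed by @segno: clamp the length
 * at the segment boundary. */
static uint32_t same_seg_len(uint32_t blkaddr, uint32_t len)
{
	uint32_t seg_end = (blkaddr / BLKS_PER_SEG + 1) * BLKS_PER_SEG;

	return (blkaddr + len <= seg_end) ? len : seg_end - blkaddr;
}
```

A run that would cross a segment boundary gets truncated there, and the caller issues the remainder as a second call with the next segment's @segno.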
New function f2fs_invalidate_compress_pages_range() adds the @len
parameter. So it can process some consecutive blocks at a time.
Signed-off-by: Yi Sun
---
fs/f2fs/compress.c | 7 ---
fs/f2fs/f2fs.h | 9 +
2 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/fs/f2fs/com
Function f2fs_invalidate_blocks() can process consecutive
blocks at a time, so f2fs_truncate_data_blocks_range() is
optimized to use the new functionality of
f2fs_invalidate_blocks().
Signed-off-by: Yi Sun
---
fs/f2fs/file.c | 72 +++---
1 file changed,
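The truncate-side batching amounts to coalescing contiguous block addresses into runs. A minimal model, assuming 0 denotes a hole as in f2fs's NULL_ADDR; the function name is illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NULL_ADDR 0U	/* f2fs uses 0 for a hole */

/* Count how many range invalidations a batched truncate would issue:
 * each maximal run of consecutive, non-hole addresses becomes one
 * f2fs_invalidate_blocks() call instead of one call per block. */
static size_t count_runs(const uint32_t *addrs, size_t n)
{
	size_t runs = 0;

	for (size_t i = 0; i < n; i++) {
		if (addrs[i] == NULL_ADDR)
			continue;
		if (i == 0 || addrs[i - 1] == NULL_ADDR ||
		    addrs[i] != addrs[i - 1] + 1)
			runs++;		/* start of a new consecutive run */
	}
	return runs;
}
```

For a fully contiguous large file this collapses thousands of per-block calls into a handful of range calls, matching the motivation given below.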
New function can process some consecutive blocks at a time.
Signed-off-by: Yi Sun
---
fs/f2fs/data.c| 2 +-
fs/f2fs/f2fs.h| 6 +++---
fs/f2fs/gc.c | 2 +-
fs/f2fs/segment.c | 6 +++---
4 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
ind
Deleting large files is time-consuming, and a large part
of the time is spent in f2fs_invalidate_blocks()
->down_write(sit_info->sentry_lock) and up_write().
If some blocks are continuous, we can process these blocks
at the same time. This can reduce the number of calls to
the down_write() and the
New function can process some consecutive blocks at a time.
Function f2fs_invalidate_blocks()->down_write() and up_write()
are very time-consuming, so if f2fs_invalidate_blocks() can
process consecutive blocks at one time, it will save a lot of time.
Signed-off-by: Yi Sun
---
fs/f2fs/compress.c
This patch introduces a new helper log_type_to_seg_type() to convert
log type to segment data type, and uses it to clean up open-coded
conversions in build_curseg(), and it also fixes do_write_page() to
convert the log type before use.
Signed-off-by: Chao Yu
---
v2:
- no logic change, just rebase to last de
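The helper's shape can be sketched as below. The enum values and the data/node split are assumptions mirroring f2fs's CURSEG_* layout, not the actual patch body.

```c
#include <assert.h>

/* Hypothetical sketch: the six curseg log types (hot/warm/cold for
 * data and node) map onto a segment data type, replacing open-coded
 * conversions at each call site. */
enum log_type {
	CURSEG_HOT_DATA, CURSEG_WARM_DATA, CURSEG_COLD_DATA,
	CURSEG_HOT_NODE, CURSEG_WARM_NODE, CURSEG_COLD_NODE,
};

enum seg_type { SEG_TYPE_DATA, SEG_TYPE_NODE };

static enum seg_type log_type_to_seg_type(enum log_type type)
{
	return type <= CURSEG_COLD_DATA ? SEG_TYPE_DATA : SEG_TYPE_NODE;
}
```

Centralizing the conversion also makes the do_write_page() fix a one-liner: convert once via the helper before the type is used.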