When active_logs == 6, dentry blocks can be allocated to HOT, WARM, or
COLD segments based on various conditions in __get_segment_type_6():
- age extent cache (if enabled)
- FI_HOT_DATA flag (set when dirty_pages <= min_hot_blocks)
- rw_hint (defaults to WARM via f2fs_rw_hint_to_seg_type)
- file_is
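To make the decision chain concrete, here is a userspace-compilable
sketch; pick_dentry_temp() and the local enums are simplified stand-ins
for the real logic in fs/f2fs/segment.c, not the kernel code itself:

enum temp { TEMP_HOT, TEMP_WARM, TEMP_COLD };
enum hint { HINT_NONE, HINT_SHORT, HINT_EXTREME };	/* stand-in for rw_hint */

static enum temp pick_dentry_temp(int age_valid, int age_says_cold,
				  int fi_hot_data, enum hint h)
{
	if (age_valid)			/* age extent cache enabled and hit */
		return age_says_cold ? TEMP_COLD : TEMP_HOT;
	if (fi_hot_data)		/* dirty_pages <= min_hot_blocks */
		return TEMP_HOT;
	switch (h) {			/* rw_hint mapping, WARM by default */
	case HINT_EXTREME: return TEMP_COLD;
	case HINT_SHORT:   return TEMP_HOT;
	default:           return TEMP_WARM;
	}
}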
Hi Yongpeng,
Thanks for your feedback! I've updated the patch per your suggestions:
- Merged the dentry block check into the main loop to avoid duplication
- Checked data_blocks + dent_blocks for data segments, since both block
types can be written to the same segment (see the sketch below)
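For illustration, a minimal sketch of the merged accounting;
needed_data_segs() is a hypothetical helper, not the actual patch code:

static unsigned int needed_data_segs(unsigned int data_blocks,
				     unsigned int dent_blocks,
				     unsigned int blocks_per_seg)
{
	/* Both block types consume space in the same data segment, so
	 * size the requirement from their sum, rounded up. */
	return (data_blocks + dent_blocks + blocks_per_seg - 1) /
		blocks_per_seg;
}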
Please see the v2 patch.
Best regards,
Xiaole
On 11/7/2025 7:07 PM, Christoph Hellwig wrote:
> On Fri, Nov 07, 2025 at 02:54:42PM +0530, Kundan Kumar wrote:
>> Predicting the Allocation Group (AG) for aged filesystems and passing
>> this information to per-AG writeback threads appears to be a complex
>> task.
>
> Yes. But in the end aged fil
Hi again,
On Sun, 9 Nov 2025 18:54:16 +0900, Masaharu Noguchi wrote:
> Sphinx LaTeX builder fails with the following error when it tries to
> turn the ASCII tables in f2fs.rst into nested longtables:
>
> Markup is unsupported in LaTeX:
> filesystems/f2fs:: longtable does not support nesting
Daeho,
Per Zhiguo's reminder, I missed that f2fs_update_meta_page() has the same
issue, so we need to use f2fs_grab_meta_folio() there as well.
Thanks, Zhiguo, for the code review and the reminder.
Let me know if I missed something.
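Roughly what I have in mind, as a sketch only; the helper names
approximate the ongoing folio conversion and the final patch may differ:

void f2fs_update_meta_page(struct f2fs_sb_info *sbi, void *src,
			   block_t blk_addr)
{
	/* Grab (allocate) the meta folio instead of reading it from
	 * disk, since the whole block is overwritten below anyway. */
	struct folio *folio = f2fs_grab_meta_folio(sbi, blk_addr);

	memcpy(folio_address(folio), src, PAGE_SIZE);
	folio_mark_dirty(folio);
	f2fs_folio_put(folio, true);
}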
On 11/10/25 23:35, Daeho Jeong wrote:
> vo
From: Daeho Jeong
The recent increase in the number of Segment Summary Area (SSA) entries
from 512 to 2048 was an unintentional logic change in the 16KB block
support. This commit corrects the issue.
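For context, the 4x jump matches the block size ratio; a back-of-envelope
check, assuming (my assumption, not stated in the patch) that the entry
count was accidentally scaled with block size instead of being kept fixed:

#include <stdio.h>

int main(void)
{
	unsigned int blksz_old = 4096, blksz_new = 16384;
	unsigned int entries_old = 512;

	/* 512 * (16384 / 4096) = 2048, the erroneous count */
	printf("%u\n", entries_old * (blksz_new / blksz_old));
	return 0;
}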
To better utilize the space made available by the erroneous 2048-entry
calculation, we are implement
With three exceptions, ->create() methods provided by filesystems ignore
the "excl" flag. Those exception are NFS, GFS2 and vboxsf which all also
provide ->atomic_open.
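For reference, the shape of a typical ->create() that ignores "excl"; the
foofs_* names are hypothetical, though the signature matches recent
kernels:

static int foofs_create(struct mnt_idmap *idmap, struct inode *dir,
			struct dentry *dentry, umode_t mode, bool excl)
{
	/* "excl" is never examined: by the time ->create() runs, the
	 * VFS lookup path has already handled O_EXCL semantics. */
	return foofs_mknod(idmap, dir, dentry, mode | S_IFREG, 0);
}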
Since ce8644fcadc5 ("lookup_open(): expand the call of vfs_create()"),
the "excl" argument to the ->create() inode_operation is
On Fri 2025-11-07 20:47:18, Petr Mladek wrote:
> This is outcome of the long discussion about the regression caused
> by 67e1b0052f6bb82 ("printk_ringbuffer: don't needlessly wrap data blocks
> around"),
> see https://lore.kernel.org/all/[email protected]/
>
> The 1st pa
Hi,
On Sun, 9 Nov 2025 18:54:16 +0900, Masaharu Noguchi wrote:
> Sphinx LaTeX builder fails with the following error when it tries to
> turn the ASCII tables in f2fs.rst into nested longtables:
>
> Markup is unsupported in LaTeX:
> filesystems/f2fs:: longtable does not support nesting a tabl
On 11/10/25 16:22, Yongpeng Yang wrote:
> From: Yongpeng Yang
>
> This patch adds a sysfs entry showing the max zones that F2FS can write
> concurrently.
>
> Signed-off-by: Yongpeng Yang
Reviewed-by: Chao Yu
Thanks,
On 11/10/25 16:22, Yongpeng Yang wrote:
> From: Yongpeng Yang
>
> When emulating a ZNS SSD on qemu with zoned.max_open set to 0, F2FS
> can still be mounted successfully. The sysfs entry shows
> sbi->max_open_zones as UINT_MAX.
>
> root@fedora-vm:~# cat /sys/block/nvme0n1/queue/zoned
> host-
On 11/10/25 16:22, Yongpeng Yang wrote:
> From: Yongpeng Yang
>
> The usage of unusable_blocks_per_sec is already wrapped by
> CONFIG_BLK_DEV_ZONED, except for its declaration and the definitions of
> CAP_BLKS_PER_SEC and CAP_SEGS_PER_SEC. This patch ensures that all code
> related to unusable_bl
On 11/10/25 17:20, Yongpeng Yang wrote:
> On 11/8/25 11:11, Chao Yu via Linux-f2fs-devel wrote:
>> Yunlei,
>>
>> On 2025/11/7 14:29, Yunlei He wrote:
>>> From: Yunlei He
>>>
>>> GC moving an fbe data block can add some non-uptodate pages; we'd
>>> better release them at the end.
>>
>> This is just for sa
On 11/5/25 00:24, Daeho Jeong wrote:
> static void write_sum_page(struct f2fs_sb_info *sbi,
> -			struct f2fs_summary_block *sum_blk, block_t blk_addr)
> +			struct f2fs_summary_block *sum_blk, unsigned int segno)
> {
> -	f2fs_update_meta_page(sbi, (void *)sum_blk,
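My guess at where the diff above is heading, as a sketch only (the actual
patch may differ): derive the summary block address from the segment
number inside the helper, so callers stop passing a precomputed blk_addr:

static void write_sum_page(struct f2fs_sb_info *sbi,
			   struct f2fs_summary_block *sum_blk,
			   unsigned int segno)
{
	f2fs_update_meta_page(sbi, (void *)sum_blk,
			      GET_SUM_BLOCK(sbi, segno));
}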
Hi Petr,
Nit: For the patch subject, remove the word "a":
"Create a helper function to decide whether more space is needed"
More below...
On 2025-11-07, Petr Mladek wrote:
> The decision whether some more space is needed is tricky in the printk
> ring buffer code:
>
> 1. The given lpos value
On 11/8/25 11:11, Chao Yu via Linux-f2fs-devel wrote:
> Yunlei,
>
> On 2025/11/7 14:29, Yunlei He wrote:
>> From: Yunlei He
>>
>> GC moving an fbe data block can add some non-uptodate pages; we'd
>> better release them at the end.
>
> This is just for saving memory, right?
Yes, move_data_block() doesn't read any dat
On 2025-11-07, Petr Mladek wrote:
> The commit 67e1b0052f6bb8 ("printk_ringbuffer: don't needlessly wrap
> data blocks around") allows using the last 4 bytes of the ring buffer.
>
> But the check for the @data_size was not properly updated in get_data().
> It fails when "blk_lpos->next" overflows
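A simplified, userspace-compilable reduction of the failure mode (my own
illustration, not the ringbuffer code): when a block ends exactly at the
data-area boundary, index-based size math wraps while raw-lpos math stays
correct:

#include <stdio.h>

#define DATA_AREA_SZ	(1u << 12)		/* 4 KiB data area */
#define DATA_INDEX(l)	((l) & (DATA_AREA_SZ - 1))

int main(void)
{
	unsigned int begin = DATA_AREA_SZ - 8;	/* last 8 bytes in use */
	unsigned int next  = DATA_AREA_SZ;	/* ends right at the boundary */

	/* DATA_INDEX(next) is 0, so the subtraction underflows */
	printf("index math: %u\n", DATA_INDEX(next) - DATA_INDEX(begin));
	/* raw logical positions give the real size, 8 */
	printf("lpos math:  %u\n", next - begin);
	return 0;
}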
Some scripts rely on the output order of do_read(), so let's append the
new logs to keep forward compatibility.
e.g.
f2fs_io read 128 0 $((2*1024)) buffered 1 0 /mnt/f2fs/file
Before:
Read 1073741824 bytes IO time = 153715 us mlock time = 0 us, BW = 6985 MB/s
print 0 bytes:
:
After:
Read 10
From: Yongpeng Yang
When emulating a ZNS SSD on qemu with zoned.max_open set to 0, F2FS
can still be mounted successfully. The sysfs entry shows
sbi->max_open_zones as UINT_MAX.
root@fedora-vm:~# cat /sys/block/nvme0n1/queue/zoned
host-managed
root@fedora-vm:~# cat /sys/block/nvme0n1/queue/m
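For context, the UINT_MAX comes from the usual "zero means unlimited"
convention; a minimal sketch with a hypothetical helper, not the f2fs
code itself:

#include <limits.h>

static unsigned int effective_max_open_zones(unsigned int reported)
{
	/* The block layer reports 0 when the device imposes no
	 * open-zone limit; treating that as "unlimited" yields
	 * UINT_MAX. */
	return reported ? reported : UINT_MAX;
}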
From: Yongpeng Yang
This patch adds a sysfs entry showing the max zones that F2FS can write
concurrently.
Signed-off-by: Yongpeng Yang
---
Documentation/ABI/testing/sysfs-fs-f2fs | 6 ++
fs/f2fs/sysfs.c                         | 2 ++
2 files changed, 8 insertions(+)
diff --git a/Document
From: Yongpeng Yang
The usage of unusable_blocks_per_sec is already wrapped by
CONFIG_BLK_DEV_ZONED, except for its declaration and the definitions of
CAP_BLKS_PER_SEC and CAP_SEGS_PER_SEC. This patch ensures that all code
related to unusable_blocks_per_sec is properly wrapped under the
CONFIG_BL
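Illustrative shape of the change, not the actual diff (field placement
abbreviated):

struct f2fs_sb_info {
	/* ... */
#ifdef CONFIG_BLK_DEV_ZONED
	unsigned int unusable_blocks_per_sec;
#endif
	/* ... */
};

#ifdef CONFIG_BLK_DEV_ZONED
#define CAP_BLKS_PER_SEC(sbi)	\
	(BLKS_PER_SEC(sbi) - (sbi)->unusable_blocks_per_sec)
#else
#define CAP_BLKS_PER_SEC(sbi)	BLKS_PER_SEC(sbi)
#endif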