On 11/17/25 00:31, Yongpeng Yang wrote:
>
> On 11/15/2025 7:36 PM, Chao Yu via Linux-f2fs-devel wrote:
>> On 11/14/2025 3:37 PM, Yongpeng Yang wrote:
>>> On 11/14/25 08:51, Chao Yu via Linux-f2fs-devel wrote:
>>>> On 11/13/2025 5:42 AM, Jaegeuk Kim wrote:
>>>>> This breaks the device giving 0 open zone which was working. Hence, I
>>>>> dropped
>>>>> the change.
>>>>>
>>>>> On 11/10, Yongpeng Yang wrote:
>>>>>> From: Yongpeng Yang <[email protected]>
>>>>>>
>>>>>> When emulating a ZNS SSD on qemu with zoned.max_open set to 0, an F2FS
>>>>>> filesystem can still be mounted successfully. The sysfs entry shows
>>>>>> sbi->max_open_zones as UINT_MAX.
>>>>>>
>>>>>> root@fedora-vm:~# cat /sys/block/nvme0n1/queue/zoned
>>>>>> host-managed
>>>>>> root@fedora-vm:~# cat /sys/block/nvme0n1/queue/max_open_zones
>>>>>> 0
>>>>>> root@fedora-vm:~# mkfs.f2fs -m -c /dev/nvme0n1 /dev/vda
>>>>>> root@fedora-vm:~# mount /dev/vda /mnt/f2fs/
>>>>>> root@fedora-vm:~# cat /sys/fs/f2fs/vda/max_open_zones
>>>>>> 4294967295
>>>>>>
>>>>>> The root cause is that sbi->max_open_zones is initialized to UINT_MAX
>>>>>> and only updated when the device’s max_open_zones is greater than 0.
>>>>>> However, both the scsi driver (sd_zbc_read_zones may assign 0 to the
>>>>>> device's max_open_zones) and the nvme driver (nvme_query_zone_info doesn't
>>>>>> check max_open_zones) allow max_open_zones to be 0.
>>>>>>
>>>>>> This patch fixes the issue by preventing mounting on zoned SSDs when
>>>>>> max_open_zones is 0, while still allowing SMR HDDs to be mounted.
>>>>>> init_blkz_info() is only called by f2fs_scan_devices(), and the
>>>>>> blkzoned feature has already been checked there. So, this patch also
>>>>>> removes the redundant zoned device checks.
>>>>>>
>>>>>> Signed-off-by: Yongpeng Yang <[email protected]>
>>>>>> ---
>>>>>> fs/f2fs/super.c | 36 +++++++++++++++++++++---------------
>>>>>> 1 file changed, 21 insertions(+), 15 deletions(-)
>>>>>>
>>>>>> diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
>>>>>> index db7afb806411..6dc8945e24af 100644
>>>>>> --- a/fs/f2fs/super.c
>>>>>> +++ b/fs/f2fs/super.c
>>>>>> @@ -4353,21 +4353,6 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
>>>>>> unsigned int max_open_zones;
>>>>>> int ret;
>>>>>> - if (!f2fs_sb_has_blkzoned(sbi))
>>>>>> - return 0;
>>>>>> -
>>>>>> - if (bdev_is_zoned(FDEV(devi).bdev)) {
>>>>>> - max_open_zones = bdev_max_open_zones(bdev);
>>>>>> - if (max_open_zones && (max_open_zones < sbi->max_open_zones))
>>>>>> - sbi->max_open_zones = max_open_zones;
>>>>>> - if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
>>>>>> - f2fs_err(sbi,
>>>>>> - "zoned: max open zones %u is too small, need at
>>>>>> least %u open zones",
>>>>>> - sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
>>>>>> - return -EINVAL;
>>>>>> - }
>>>>>> - }
>>>>>> -
>>>>>> zone_sectors = bdev_zone_sectors(bdev);
>>>>>> if (sbi->blocks_per_blkz && sbi->blocks_per_blkz !=
>>>>>> SECTOR_TO_BLOCK(zone_sectors))
>>>>>> @@ -4378,6 +4363,27 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
>>>>>> if (nr_sectors & (zone_sectors - 1))
>>>>>> FDEV(devi).nr_blkz++;
>>>>>> + max_open_zones = bdev_max_open_zones(bdev);
>>>>>> + if (!max_open_zones) {
>>>>>> + /*
>>>>>> + * SSDs require max_open_zones > 0 to be mountable.
>>>>>> + * For HDDs, if max_open_zones is reported as 0, it doesn't matter,
>>>>>> + * set it to FDEV(devi).nr_blkz.
>>>>>> + */
>>>>>> + if (bdev_nonrot(bdev)) {
>>>>>> + f2fs_err(sbi, "zoned: SSD device %s without open zones", FDEV(devi).path);
>>>>>> + return -EINVAL;
>>>>
>>>> Oh, so, for conventional UFS, it will go into this path as an SSD w/ zero
>>>> open zones?
>>>>
>>>> Any way to distinguish that?
>>>>
>>>> Thanks,
>>>>
>>>
>>> sbi->max_open_zones might be classified into 4 cases:
>>>
>>> 1. For non-rotational devices that have both conventional zones and
>>> sequential zones, we should still ensure that max_open_zones > 0. If the
>>> # of sequential zones exceeds max_open_zones, we still need to guarantee
>>> that max_open_zones >= F2FS_OPTION(sbi).active_logs.
>>>
>>> I tested this with null_blk by emulating a device that has 10
>>> conventional zones and 4 sequential zones, and the filesystem can be
>>> formatted successfully. In this case, the filesystem should also be
>>> mountable, and sbi->max_open_zones should be 14 (zone_max_open covers all
>>> 4 sequential zones, so all 10 + 4 = 14 zones can effectively be open).
>>> However, if zone_max_open is set to 3, the filesystem cannot be mounted.
>>>
>>> #modprobe null_blk nr_devices=1 zoned=1 zone_nr_conv=10 zone_size=1024
>>> gb=14 bs=4096 rotational=0 zone_max_open=4
>>> #mkfs.f2fs -m -c /dev/nullb0 /dev/vda -f
>>>
>>> So, sbi->max_open_zones might be the device's max_open_zones, or '# of sequential
>>> zones' + '# of conventional zones'.
>>>
>>> 2. For non-rotational devices which only have conventional zones, I'm
>>> not sure whether there are zoned flash devices that provide only
>>
>> I guess this is a similar case; we should not let mount() fail in such a
>> case, right?
>
> Yes, it should be mountable. I'll take all these cases into account in
> the v3 patch.
>
>>
>> - modprobe null_blk nr_devices=1 zoned=1 zone_nr_conv=512 zone_size=2 \
>> gb=1 bs=4096 rotational=0 zone_max_open=6
>
> This scenario cannot be emulated with null_blk. There must be at least 1
> sequential zone, and zone_max_open is greater than the # of sequential
> zones, whereas in reality max_open_zones is 0.
>
> Yongpeng,

Oh, I see.

root@localhost:~# dump.f2fs -d 3 /dev/nullb0
Info: Debug level = 3
[f2fs_check_zones: 355] Zone 00000: Conventional, cond 0x0 (Not-write-pointer),
sector 0, 4096 sectors
...
[f2fs_check_zones: 355] Zone 00510: Conventional, cond 0x0 (Not-write-pointer),
sector 2088960, 4096 sectors
[f2fs_check_zones: 366] Zone 00511: type 0x2 (Sequential-write-required), cond
0x1 (Empty), need_reset 0, non_seq 0, sector 2093056, 4096 sectors, capacity
4096, wp sector 2093056

Also, I hit a different failure while mounting the successfully formatted
null_blk device:

[  346.143520] F2FS-fs (nullb0): Magic Mismatch, valid(0xf2f52010) - read(0x0)
[  346.143526] F2FS-fs (nullb0): Can't find valid F2FS filesystem in 1th
superblock
[  346.146159] F2FS-fs (nullb0): Magic Mismatch, valid(0xf2f52010) - read(0x0)
[  346.146162] F2FS-fs (nullb0): Can't find valid F2FS filesystem in 2th
superblock

Thanks,
>> - mkfs.f2fs -m /dev/nullb0
>> - mount /dev/nullb0 /mnt/f2fs
>>
>> Thanks,
>>
>>> conventional zones. If such devices do exist, then returning -EINVAL is
>>> indeed not appropriate. sbi->max_open_zones should be # of conventional
>>> zones.
>>>
>>> 3. For non-rotational devices which only have sequential zones,
>>> sbi->max_open_zones should be the device's max_open_zones.
>>>
>>> 4. For rotational devices, sbi->max_open_zones should be the # of zones or
>>> max_open_zones.
>>>
>>> Am I missing any other cases?
>>>
>>> Yongpeng,
>>>
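Regarding the four cases above, maybe something like the below could work for
v3? This is only an untested sketch on top of fs/f2fs/super.c, not a real
patch: f2fs_count_conv_zones_cb() and init_blkz_open_zones() are made-up
names, and counting conventional zones via blkdev_report_zones() at mount
time is just one possible way to tell the cases apart.

/*
 * Untested sketch only; the names below are hypothetical, not existing
 * f2fs code.
 */
static int f2fs_count_conv_zones_cb(struct blk_zone *zone,
                                    unsigned int idx, void *data)
{
        unsigned int *nr_conv = data;

        if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
                (*nr_conv)++;
        return 0;
}

static int init_blkz_open_zones(struct f2fs_sb_info *sbi, int devi)
{
        struct block_device *bdev = FDEV(devi).bdev;
        unsigned int max_open = bdev_max_open_zones(bdev);
        unsigned int nr_conv = 0;
        int ret;

        /* Count conventional zones; they never consume open-zone resources. */
        ret = blkdev_report_zones(bdev, 0, BLK_ALL_ZONES,
                                  f2fs_count_conv_zones_cb, &nr_conv);
        if (ret < 0)
                return ret;

        if (!max_open) {
                /*
                 * Case 2 (conventional-only flash) and case 4 (SMR HDD):
                 * treat "no reported limit" as "all zones can be open".
                 * Cases 1/3: a non-rotational device that has sequential
                 * zones must report a limit, otherwise refuse the mount.
                 */
                if (bdev_nonrot(bdev) && nr_conv < FDEV(devi).nr_blkz) {
                        f2fs_err(sbi, "zoned: SSD device %s reports no open zones",
                                 FDEV(devi).path);
                        return -EINVAL;
                }
                max_open = FDEV(devi).nr_blkz;
        } else {
                /* Case 1: e.g. 10 conv zones + zone_max_open=4 gives 14. */
                max_open += nr_conv;
        }

        sbi->max_open_zones = min(sbi->max_open_zones, max_open);
        if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
                f2fs_err(sbi, "zoned: max open zones %u is too small, need at least %u open zones",
                         sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
                return -EINVAL;
        }
        return 0;
}

With the case-1 example (10 conventional zones, zone_max_open=4) this would
give sbi->max_open_zones = 14, and a 0 max_open_zones on a rotational or
conventional-only device would no longer fail the mount.
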
>>>>>> + }
>>>>>> + max_open_zones = FDEV(devi).nr_blkz;
>>>>>> + }
>>>>>> + sbi->max_open_zones = min_t(unsigned int, max_open_zones, sbi->max_open_zones);
>>>>>> + if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
>>>>>> + f2fs_err(sbi,
>>>>>> + "zoned: max open zones %u is too small, need at least %u
>>>>>> open zones",
>>>>>> + sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
>>>>>> + return -EINVAL;
>>>>>> + }
>>>>>> +
>>>>>> FDEV(devi).blkz_seq = f2fs_kvzalloc(sbi,
>>>>>> BITS_TO_LONGS(FDEV(devi).nr_blkz)
>>>>>> * sizeof(unsigned long),
>>>>>> --
>>>>>> 2.43.0
>>>
>>>
>>
>>
>>
>
_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel