On 11/14/2025 3:37 PM, Yongpeng Yang wrote:
On 11/14/25 08:51, Chao Yu via Linux-f2fs-devel wrote:
On 11/13/2025 5:42 AM, Jaegeuk Kim wrote:
This breaks the device reporting 0 open zones, which was working before.
Hence, I dropped the change.

On 11/10, Yongpeng Yang wrote:
From: Yongpeng Yang <[email protected]>

When emulating a ZNS SSD on qemu with zoned.max_open set to 0, F2FS
can still be mounted successfully. The sysfs entry shows
sbi->max_open_zones as UINT_MAX.

root@fedora-vm:~# cat /sys/block/nvme0n1/queue/zoned
host-managed
root@fedora-vm:~# cat /sys/block/nvme0n1/queue/max_open_zones
0
root@fedora-vm:~# mkfs.f2fs -m -c /dev/nvme0n1 /dev/vda
root@fedora-vm:~# mount /dev/vda /mnt/f2fs/
root@fedora-vm:~# cat /sys/fs/f2fs/vda/max_open_zones
4294967295

The root cause is that sbi->max_open_zones is initialized to UINT_MAX
and only updated when the device's max_open_zones is greater than 0.
However, both the scsi driver (sd_zbc_read_zones() may assign 0 to the
device's max_open_zones) and the nvme driver (nvme_query_zone_info()
doesn't check max_open_zones) allow max_open_zones to be 0.
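
To make this concrete, the pre-patch flow in init_blkz_info() is roughly
the following (a simplified sketch of the hunk removed below):

    if (bdev_is_zoned(FDEV(devi).bdev)) {
        max_open_zones = bdev_max_open_zones(bdev);
        /* never taken when the device reports 0 */
        if (max_open_zones && (max_open_zones < sbi->max_open_zones))
            sbi->max_open_zones = max_open_zones;
        /* UINT_MAX can never be smaller than active_logs */
        if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs)
            return -EINVAL;
    }

So a device reporting max_open_zones == 0 keeps the UINT_MAX default and
mounts without any open-zone sanity check.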

This patch fixes the issue by preventing mounting on zoned SSDs when
max_open_zones is 0, while still allowing SMR HDDs to be mounted.
init_blkz_info() is only called by f2fs_scan_devices(), and the
blkzoned feature has already been checked there. So, this patch also
removes the redundant zoned device checks.

Signed-off-by: Yongpeng Yang <[email protected]>
---
   fs/f2fs/super.c | 36 +++++++++++++++++++++---------------
   1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index db7afb806411..6dc8945e24af 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -4353,21 +4353,6 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
       unsigned int max_open_zones;
       int ret;
-    if (!f2fs_sb_has_blkzoned(sbi))
-        return 0;
-
-    if (bdev_is_zoned(FDEV(devi).bdev)) {
-        max_open_zones = bdev_max_open_zones(bdev);
-        if (max_open_zones && (max_open_zones < sbi->max_open_zones))
-            sbi->max_open_zones = max_open_zones;
-        if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
-            f2fs_err(sbi,
-                "zoned: max open zones %u is too small, need at least %u open zones",
-                sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
-            return -EINVAL;
-        }
-    }
-
       zone_sectors = bdev_zone_sectors(bdev);
       if (sbi->blocks_per_blkz && sbi->blocks_per_blkz !=
                   SECTOR_TO_BLOCK(zone_sectors))
@@ -4378,6 +4363,27 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
       if (nr_sectors & (zone_sectors - 1))
           FDEV(devi).nr_blkz++;
+    max_open_zones = bdev_max_open_zones(bdev);
+    if (!max_open_zones) {
+    /*
+     * SSDs require max_open_zones > 0 to be mountable.
+     * For HDDs, if max_open_zones is reported as 0, it doesn't matter;
+     * set it to FDEV(devi).nr_blkz.
+     */
+        if (bdev_nonrot(bdev)) {
+            f2fs_err(sbi, "zoned: SSD device %s without open zones", FDEV(devi).path);
+            return -EINVAL;

Oh, so, for conventional UFS, it will go into this path as an SSD w/
zero open zones?

Any way to distinguish that?

Thanks,
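
If the question is how to tell a zoned device that only has conventional
zones apart from one that merely reports max_open_zones == 0, one way
would presumably be to walk the zone report and count the zone types.
A hypothetical helper, only for illustration (not part of the posted
patch); blkdev_report_zones() and BLK_ZONE_TYPE_CONVENTIONAL are the
existing block-layer API:

    /* Count conventional zones on a zoned block device. */
    static int f2fs_count_conv_zones_cb(struct blk_zone *zone,
                                        unsigned int idx, void *data)
    {
        unsigned int *nr_conv = data;

        if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
            (*nr_conv)++;
        return 0;
    }

    static int f2fs_count_conv_zones(struct block_device *bdev,
                                     unsigned int *nr_conv)
    {
        int ret;

        *nr_conv = 0;
        ret = blkdev_report_zones(bdev, 0, BLK_ALL_ZONES,
                                  f2fs_count_conv_zones_cb, nr_conv);
        return ret < 0 ? ret : 0;
    }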


sbi->max_open_zones can be classified into 4 cases (a rough code sketch
of this selection logic follows the list):

1. For non-rotational devices that have both conventional zones and
sequential zones, we should still ensure that max_open_zones > 0. If the
# of sequential zones exceeds max_open_zones, we still need to guarantee
that max_open_zones >= F2FS_OPTION(sbi).active_logs.

I tested this with null_blk by emulating a device that has 10
conventional zones and 4 sequential zones, and the filesystem can be
formatted successfully. In this case, the filesystem should also be
mountable, and sbi->max_open_zones should be 14. However, if
zone_max_open is set to 3, the filesystem cannot be mounted.

#modprobe null_blk nr_devices=1 zoned=1 zone_nr_conv=10 zone_size=1024 gb=14 bs=4096 rotational=0 zone_max_open=4
#mkfs.f2fs -m -c /dev/nullb0 /dev/vda -f
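
The failing case would presumably be the same setup with zone_max_open=3,
where the reported open-zone limit (3) is below the default 6 active logs:

#modprobe null_blk nr_devices=1 zoned=1 zone_nr_conv=10 zone_size=1024 gb=14 bs=4096 rotational=0 zone_max_open=3
#mkfs.f2fs -m -c /dev/nullb0 /dev/vda -f
#mount /dev/vda /mnt/f2fs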

So, sbi->max_open_zones might be either the device's max_open_zones or
'# of sequential zones' + '# of conventional zones'.

2. For non-rotational devices which only have conventional zones, I'm
not sure whether there are zoned flash devices that provide only
conventional zones. If such devices do exist, then returning -EINVAL is
indeed not appropriate. sbi->max_open_zones should be the # of
conventional zones.

I guess this is a similar case; we should not let mount() fail for such
a case, right?

- modprobe null_blk nr_devices=1 zoned=1 zone_nr_conv=512 zone_size=2 \
gb=1 bs=4096 rotational=0 zone_max_open=6
- mkfs.f2fs -m /dev/nullb0
- mount /dev/nullb0 /mnt/f2fs

Thanks,

3. For non-rotational devices which only have sequential zones,
sbi->max_open_zones should be the device's max_open_zones.

4. For rotational devices, sbi->max_open_zones should be the # of zones
or the device's max_open_zones.
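
Putting the four cases together, a rough sketch of the selection logic
(an illustration of the idea above, not the posted patch; nr_seq/nr_conv
stand for the sequential/conventional zone counts, which would have to
come from something like the hypothetical zone-report walk sketched
earlier):

    static int f2fs_pick_max_open_zones(struct f2fs_sb_info *sbi, int devi,
                                        unsigned int nr_seq,
                                        unsigned int nr_conv)
    {
        struct block_device *bdev = FDEV(devi).bdev;
        unsigned int max_open_zones = bdev_max_open_zones(bdev);

        if (bdev_nonrot(bdev)) {
            if (!nr_seq)                        /* case 2: conventional only */
                max_open_zones = nr_conv;
            else if (nr_seq <= max_open_zones)  /* case 1: all seq zones can be open */
                max_open_zones = nr_seq + nr_conv;
            /* else case 3: keep the reported max_open_zones */

            if (!max_open_zones)                /* SSDs still need open zones */
                return -EINVAL;
        } else if (!max_open_zones) {           /* case 4: HDD reporting 0 */
            max_open_zones = FDEV(devi).nr_blkz;
        }

        sbi->max_open_zones = min_t(unsigned int, sbi->max_open_zones,
                                    max_open_zones);
        if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs)
            return -EINVAL;
        return 0;
    }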

Am I missing any other cases?

Yongpeng,

+        }
+        max_open_zones = FDEV(devi).nr_blkz;
+    }
+    sbi->max_open_zones = min_t(unsigned int, max_open_zones, sbi->max_open_zones);
+    if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
+        f2fs_err(sbi,
+            "zoned: max open zones %u is too small, need at least %u open zones",
+            sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
+        return -EINVAL;
+    }
+
       FDEV(devi).blkz_seq = f2fs_kvzalloc(sbi,
                       BITS_TO_LONGS(FDEV(devi).nr_blkz)
                       * sizeof(unsigned long),
--
2.43.0





_______________________________________________
Linux-f2fs-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
