On 11/13/2025 5:42 AM, Jaegeuk Kim wrote:
This breaks devices that report 0 open zones, which were working before. Hence,
I dropped the change.
On 11/10, Yongpeng Yang wrote:
From: Yongpeng Yang <[email protected]>
When emulating a ZNS SSD in QEMU with zoned.max_open set to 0, F2FS can
still be mounted successfully. The sysfs entry shows sbi->max_open_zones
as UINT_MAX.
root@fedora-vm:~# cat /sys/block/nvme0n1/queue/zoned
host-managed
root@fedora-vm:~# cat /sys/block/nvme0n1/queue/max_open_zones
0
root@fedora-vm:~# mkfs.f2fs -m -c /dev/nvme0n1 /dev/vda
root@fedora-vm:~# mount /dev/vda /mnt/f2fs/
root@fedora-vm:~# cat /sys/fs/f2fs/vda/max_open_zones
4294967295
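For reference, the reproducer can be set up with QEMU's NVMe ZNS emulation
roughly as follows (the image path, IDs, and serial below are placeholders;
zoned.max_open=0 is the knob that matters):

    qemu-system-x86_64 ... \
        -drive file=zns.img,id=nvme0n1,format=raw,if=none \
        -device nvme,id=nvme0,serial=deadbeef \
        -device nvme-ns,drive=nvme0n1,zoned=true,zoned.max_open=0,zoned.max_active=0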
The root cause is that sbi->max_open_zones is initialized to UINT_MAX
and only updated when the device's max_open_zones is greater than 0.
However, both the SCSI driver (sd_zbc_read_zones may assign 0 to the
device's max_open_zones) and the NVMe driver (nvme_query_zone_info
doesn't check max_open_zones) allow max_open_zones to be 0.
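A minimal sketch of the pre-patch update, reconstructed from the hunk removed
below, shows how a reported 0 slips through:

	unsigned int max_open_zones = bdev_max_open_zones(bdev);

	/*
	 * A device-reported 0 fails the first test, so the clamp is
	 * skipped and sbi->max_open_zones keeps its UINT_MAX default.
	 */
	if (max_open_zones && (max_open_zones < sbi->max_open_zones))
		sbi->max_open_zones = max_open_zones;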
This patch fixes the issue by refusing to mount zoned SSDs when
max_open_zones is 0, while still allowing SMR HDDs to be mounted.

init_blkz_info() is only called by f2fs_scan_devices(), and the
blkzoned feature has already been checked there, so this patch also
removes the now-redundant zoned device checks.
Signed-off-by: Yongpeng Yang <[email protected]>
---
fs/f2fs/super.c | 36 +++++++++++++++++++++---------------
1 file changed, 21 insertions(+), 15 deletions(-)
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index db7afb806411..6dc8945e24af 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -4353,21 +4353,6 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
 	unsigned int max_open_zones;
 	int ret;
 
-	if (!f2fs_sb_has_blkzoned(sbi))
-		return 0;
-
-	if (bdev_is_zoned(FDEV(devi).bdev)) {
-		max_open_zones = bdev_max_open_zones(bdev);
-		if (max_open_zones && (max_open_zones < sbi->max_open_zones))
-			sbi->max_open_zones = max_open_zones;
-		if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
-			f2fs_err(sbi,
-				"zoned: max open zones %u is too small, need at least %u open zones",
-				sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
-			return -EINVAL;
-		}
-	}
-
 	zone_sectors = bdev_zone_sectors(bdev);
 	if (sbi->blocks_per_blkz && sbi->blocks_per_blkz !=
 				SECTOR_TO_BLOCK(zone_sectors))
@@ -4378,6 +4363,27 @@ static int init_blkz_info(struct f2fs_sb_info *sbi, int devi)
 	if (nr_sectors & (zone_sectors - 1))
 		FDEV(devi).nr_blkz++;
 
+	max_open_zones = bdev_max_open_zones(bdev);
+	if (!max_open_zones) {
+		/*
+		 * SSDs require max_open_zones > 0 to be mountable.
+		 * For HDDs, if max_open_zones is reported as 0, it doesn't
+		 * matter, set it to FDEV(devi).nr_blkz.
+		 */
+		if (bdev_nonrot(bdev)) {
+			f2fs_err(sbi, "zoned: SSD device %s without open zones",
+				 FDEV(devi).path);
+			return -EINVAL;
Oh, so, for conventional UFS, it will go into this path as an SSD w/ zero open
zones?

Any way to distinguish that?

Thanks,
+		}
+		max_open_zones = FDEV(devi).nr_blkz;
+	}
+	sbi->max_open_zones = min_t(unsigned int, max_open_zones,
+					sbi->max_open_zones);
+	if (sbi->max_open_zones < F2FS_OPTION(sbi).active_logs) {
+		f2fs_err(sbi,
+			"zoned: max open zones %u is too small, need at least %u open zones",
+			sbi->max_open_zones, F2FS_OPTION(sbi).active_logs);
+		return -EINVAL;
+	}
+
 	FDEV(devi).blkz_seq = f2fs_kvzalloc(sbi,
 					BITS_TO_LONGS(FDEV(devi).nr_blkz)
 					* sizeof(unsigned long),
--
2.43.0