Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-07 Thread Dmitry Fomichev
On Fri, 2021-02-05 at 11:39 +0100, Klaus Jensen wrote:
> On Feb  5 11:10, i...@dantalion.nl wrote:
> > Hello,
> > 
> > Thanks for this; I got everything working, including the new device
> > types (nvme-ns, nvme-subsys). I think I have found a small bug and do
> > not know where to report it.
> > 
> 
> This is a good way to report it ;)
> 
> > The value of the nvme device property zoned.append_size_limit is not
> > sanity-checked; you can set it to invalid values such as 128.
> > 
> > This will later result in errors when trying to initialize the device:
> > Device not ready; aborting initialisation, CSTS=0x2
> > Removing after probe failure status: -19
> > 
> 
> Yeah. We can at least check that append_size_limit is at least 4k. That
> might still be too small if we run on configurations with larger page
> sizes, and then we can't figure that out until the device is enabled by
> the host anyway. But we can make it a bit more user-friendly in the
> common case.

The current code from nvme-next does validate the ZASL value. I tried to set
it to 128 and this results in an error, and the namespace doesn't appear in
the guest. The hard minimum is currently the page size.
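
For anyone who wants to double-check what the controller actually reports,
the effective limit can be read from the guest with nvme-cli (a sketch,
assuming a build recent enough to have the zns plugin and that the
controller is nvme0):

# ZASL is reported as a power of two in units of the minimum memory
# page size (CAP.MPSMIN); 0 means the MDTS limit applies instead.
nvme zns id-ctrl /dev/nvme0 | grep -i zasl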

> 
> > Additionally, `cat /sys/block/nvmeXnX/queue/nr_zones` reports 0 while
> > `blkzone report /dev/nvmeXnX` clearly shows > 0 zones. Not sure if this
> > is user error or a bug; it could also be a kernel bug rather than a QEMU one.
> > 
> 
> I can't reproduce that. Can you share your QEMU configuration and kernel
> version?
> 
> > Let me know if sharing this information is helpful or rather just
> > annoying, don't want to bother anyone.
> > 
> 
> It is super helpful and super appreciated! Thanks!



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Keith Busch
On Sat, Feb 06, 2021 at 01:48:29AM +0900, Minwoo Im wrote:
> Not sure if it's okay to just give ctrl->tagset to the head
> request_queue, but this patch works fine so far.

Huh, that's probably not supposed to work: bio-based drivers should
never use tagsets.

Since this is getting a little more complicated, let's take it to the
kernel mailing lists. Meanwhile, I'll work on a proposal there.
 
> ---
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 282b7a4ea9a9..22febc7baa36 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -375,7 +375,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
> if (!(ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) || !multipath)
> return 0;
>  
> -   q = blk_alloc_queue(ctrl->numa_node);
> +   q = blk_mq_init_queue(ctrl->tagset);
> if (!q)
> goto out;
> blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> @@ -677,6 +677,8 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
> if (blk_queue_stable_writes(ns->queue) && ns->head->disk)
> blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
>ns->head->disk->queue);
> +   if (blk_queue_is_zoned(ns->queue))
> +   blk_revalidate_disk_zones(ns->head->disk, NULL);
>  }
>  
>  void nvme_mpath_remove_disk(struct nvme_ns_head *head)



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Minwoo Im
On 21-02-06 01:43:18, Minwoo Im wrote:
> On 21-02-05 08:22:52, Keith Busch wrote:
> > On Sat, Feb 06, 2021 at 01:07:57AM +0900, Minwoo Im wrote:
> > > If multipath is enabled, the namespace head and hidden namespace will be
> > > created.  In this case, /sys/block/nvme0n1/queue/nr_zones is not
> > > returning the proper value for the namespace itself.  By the way, the
> > > hidden namespace /sys/block/nvme0c0n1/queue/nr_zones is returning it
> > > properly.
> > > 
> > > Is it okay for sysfs of the head namespace node (nvme0n1) not to manage
> > > the request queue attributes like nr_zones?
> > 
> > This should fix it. Untested, as my dev machine is in need of repair,
> > but if someone can confirm this is successful, I can send it to the
> > kernel list.
> > 
> > ---
> > diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> > index 65bd6efa5e1c..eb18949bb999 100644
> > --- a/drivers/nvme/host/multipath.c
> > +++ b/drivers/nvme/host/multipath.c
> > @@ -677,6 +677,8 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
> > if (blk_queue_stable_writes(ns->queue) && ns->head->disk)
> > blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
> >ns->head->disk->queue);
> > +   if (blk_queue_is_zoned(ns->queue))
> > +   blk_revalidate_disk_zones(ns->head->disk, NULL);
> >  }
> >  
> >  void nvme_mpath_remove_disk(struct nvme_ns_head *head)
> > --
> 
> Thanks Keith,
> 
> Just quickly sharing a test result based on this kernel:
> 
> In blk_revalidate_disk_zones(), 
> 
>   488 int blk_revalidate_disk_zones(struct gendisk *disk,
>   489   void (*update_driver_data)(struct gendisk *disk))
>   490 {
>   491 struct request_queue *q = disk->queue;
>   492 struct blk_revalidate_zone_args args = {
>   493 .disk   = disk,
>   494 };
>   495 unsigned int noio_flag;
>   496 int ret;
>   497
>   498 if (WARN_ON_ONCE(!blk_queue_is_zoned(q)))
>   499 return -EIO;
>   500 if (WARN_ON_ONCE(!queue_is_mq(q)))
>   501 return -EIO;
>    
> 
> (q->mq_ops == NULL) in this case, so q->nr_zones is not getting set.

Not sure if it's okay to just give ctrl->tagset to the head
request_queue, but this patch works fine so far.

---
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 282b7a4ea9a9..22febc7baa36 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -375,7 +375,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
if (!(ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) || !multipath)
return 0;
 
-   q = blk_alloc_queue(ctrl->numa_node);
+   q = blk_mq_init_queue(ctrl->tagset);
if (!q)
goto out;
blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
@@ -677,6 +677,8 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
if (blk_queue_stable_writes(ns->queue) && ns->head->disk)
blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
   ns->head->disk->queue);
+   if (blk_queue_is_zoned(ns->queue))
+   blk_revalidate_disk_zones(ns->head->disk, NULL);
 }
 
 void nvme_mpath_remove_disk(struct nvme_ns_head *head)



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Minwoo Im
On 21-02-05 08:22:52, Keith Busch wrote:
> On Sat, Feb 06, 2021 at 01:07:57AM +0900, Minwoo Im wrote:
> > If multipath is enabled, the namespace head and hidden namespace will be
> > created.  In this case, /sys/block/nvme0n1/queue/nr_zones is not
> > returning the proper value for the namespace itself.  By the way, the
> > hidden namespace /sys/block/nvme0c0n1/queue/nr_zones is returning it
> > properly.
> > 
> > Is it okay for sysfs of the head namespace node (nvme0n1) not to manage
> > the request queue attributes like nr_zones?
> 
> This should fix it. Untested, as my dev machine is in need of repair,
> but if someone can confirm this is successful, I can send it to the
> kernel list.
> 
> ---
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 65bd6efa5e1c..eb18949bb999 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -677,6 +677,8 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
>   if (blk_queue_stable_writes(ns->queue) && ns->head->disk)
>   blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
>  ns->head->disk->queue);
> + if (blk_queue_is_zoned(ns->queue))
> + blk_revalidate_disk_zones(ns->head->disk, NULL);
>  }
>  
>  void nvme_mpath_remove_disk(struct nvme_ns_head *head)
> --

Thanks Keith,

Just quickly sharing a test result based on this kernel:

In blk_revalidate_disk_zones(), 

488 int blk_revalidate_disk_zones(struct gendisk *disk,
489   void (*update_driver_data)(struct gendisk *disk))
490 {
491 struct request_queue *q = disk->queue;
492 struct blk_revalidate_zone_args args = {
493 .disk   = disk,
494 };
495 unsigned int noio_flag;
496 int ret;
497
498 if (WARN_ON_ONCE(!blk_queue_is_zoned(q)))
499 return -EIO;
500 if (WARN_ON_ONCE(!queue_is_mq(q)))
501 return -EIO;
 

(q->mq_ops == NULL) in this case, so q->nr_zones is not getting set.



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Keith Busch
On Sat, Feb 06, 2021 at 01:07:57AM +0900, Minwoo Im wrote:
> If multipath is enabled, the namespace head and hidden namespace will be
> created.  In this case, /sys/block/nvme0n1/queue/nr_zones is not
> returning the proper value for the namespace itself.  By the way, the
> hidden namespace /sys/block/nvme0c0n1/queue/nr_zones is returning it
> properly.
> 
> Is it okay for sysfs of the head namespace node (nvme0n1) not to manage
> the request queue attributes like nr_zones?

This should fix it. Untested, as my dev machine is in need of repair,
but if someone can confirm this is successful, I can send it to the
kernel list.

---
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 65bd6efa5e1c..eb18949bb999 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -677,6 +677,8 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
if (blk_queue_stable_writes(ns->queue) && ns->head->disk)
blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
   ns->head->disk->queue);
+   if (blk_queue_is_zoned(ns->queue))
+   blk_revalidate_disk_zones(ns->head->disk, NULL);
 }
 
 void nvme_mpath_remove_disk(struct nvme_ns_head *head)
--



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Keith Busch
On Sat, Feb 06, 2021 at 01:07:57AM +0900, Minwoo Im wrote:
> On 21-02-05 08:02:10, Keith Busch wrote:
> > On Fri, Feb 05, 2021 at 09:33:54PM +0900, Minwoo Im wrote:
> > > On 21-02-05 12:42:30, Klaus Jensen wrote:
> > > > On Feb  5 12:25, i...@dantalion.nl wrote:
> > > > > On 05-02-2021 11:39, Klaus Jensen wrote:
> > > > > > This is a good way to report it ;)
> > > > > > It is super helpful and super appreciated! Thanks!
> > > > > 
> > > > > Good to know :)
> > > > > 
> > > > > > I can't reproduce that. Can you share your QEMU configuration and kernel
> > > > > > version?
> > > > > 
> > > > > I create the image and launch QEMU with:
> > > > > qemu-img create -f raw znsssd.img 16777216
> > > > > 
> > > > > qemu-system-x86_64 -name qemuzns -m 4G -cpu Haswell -smp 2 -hda \
> > > > > ./arch-qemu.qcow2 -net user,hostfwd=tcp::-:22,\
> > > > > hostfwd=tcp::-:2000 -net nic \
> > > > > -drive file=./znsssd.img,id=mynvme,format=raw,if=none \
> > > > > -device nvme-subsys,id=subsys0 \
> > > > > -device nvme,serial=baz,id=nvme2,zoned.append_size_limit=131072,\
> > > > > subsys=subsys0 \
> > > > > -device nvme-ns,id=ns2,drive=mynvme,nsid=2,logical_block_size=4096,\
> > > > > physical_block_size=4096,zoned=true,zoned.zone_size=131072,\
> > > > > zoned.zone_capacity=131072,zoned.max_open=0,zoned.max_active=0,bus=nvme2
> > > > > 
> > > > > This should create 128 zones as 16777216 / 131072 = 128. My qemu
> > > > > version is on d79d797b0dd02c33dc9428123c18ae97127e967b of nvme-next.
> > > > > 
> > > > > I don't actually think the subsys is needed when you use bus=, that is
> > > > > just something left over from trying to identify why the nvme device
> > > > > was not initializing.
> > > > > 
> > > > > I use an Arch qcow image with kernel version 5.10.12
> > > > 
> > > > Thanks - I can reproduce it now.
> > > > 
> > > > Happens only when the subsystem is involved. Looks like a kernel issue
> > > > to me since the zones are definitely there when using nvme-cli.
> > > 
> > > Yes, it looks like it happens when CONFIG_NVME_MULTIPATH=y and subsys is
> > > given for namespace sharing.  In that case, the actual hidden namespace
> > > for nvme0n1 might be nvme0c0n1.
> > > 
> > > lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0c0n1 -> 
> > > ../devices/pci:00/:00:06.0/nvme/nvme0/nvme0c0n1/
> > > lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0n1 -> 
> > > ../devices/virtual/nvme-subsystem/nvme-subsys0/nvme0n1/   
> > > 
> > > cat /sys/block/nvme0c0n1/queue/nr_zones returns proper value.
> > > 
> > > > 
> > > > Stuff also seems to be initialized in the kernel since blkzone report
> > > > works.
> > > > 
> > > > Keith, this might be some fun for you :) ?
> > > 
> > > I also really want to ask about the head namespace policy in the
> > > kernel. :)
> > 
> > What's the question? It looks like I'm missing some part of the context.
> 
> If multipath is enabled, the namespace head and hidden namespace will be
> created.  In this case, /sys/block/nvme0n1/queue/nr_zones is not
> returning the proper value for the namespace itself.  By the way, the
> hidden namespace /sys/block/nvme0c0n1/queue/nr_zones is returning it
> properly.
> 
> Is it okay for sysfs of the head namespace node (nvme0n1) not to manage
> the request queue attributes like nr_zones?

Gotcha.

The q->nr_zones is not a stacking limit, so the virtual device that's
made visible does not inherit it from the path device that contains this
setting. I'll see about getting a kernel fix proposed.
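
Once a fix is in place, a quick way to verify it (a sketch, assuming the
multipath head shows up as nvme0n1 and the hidden path device as
nvme0c0n1):

# both nodes should report the same non-zero zone count
cat /sys/block/nvme0c0n1/queue/nr_zones
cat /sys/block/nvme0n1/queue/nr_zones
# blkzone prints one line per zone, so this should match as well
blkzone report /dev/nvme0n1 | wc -l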



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Minwoo Im
On 21-02-05 08:02:10, Keith Busch wrote:
> On Fri, Feb 05, 2021 at 09:33:54PM +0900, Minwoo Im wrote:
> > On 21-02-05 12:42:30, Klaus Jensen wrote:
> > > On Feb  5 12:25, i...@dantalion.nl wrote:
> > > > On 05-02-2021 11:39, Klaus Jensen wrote:
> > > > > This is a good way to report it ;)
> > > > > It is super helpful and super appreciated! Thanks!
> > > > 
> > > > Good to know :)
> > > > 
> > > > > I can't reproduce that. Can you share your QEMU configuration and kernel
> > > > > version?
> > > > 
> > > > I create the image and launch QEMU with:
> > > > qemu-img create -f raw znsssd.img 16777216
> > > > 
> > > > qemu-system-x86_64 -name qemuzns -m 4G -cpu Haswell -smp 2 -hda \
> > > > ./arch-qemu.qcow2 -net user,hostfwd=tcp::-:22,\
> > > > hostfwd=tcp::-:2000 -net nic \
> > > > -drive file=./znsssd.img,id=mynvme,format=raw,if=none \
> > > > -device nvme-subsys,id=subsys0 \
> > > > -device nvme,serial=baz,id=nvme2,zoned.append_size_limit=131072,\
> > > > subsys=subsys0 \
> > > > -device nvme-ns,id=ns2,drive=mynvme,nsid=2,logical_block_size=4096,\
> > > > physical_block_size=4096,zoned=true,zoned.zone_size=131072,\
> > > > zoned.zone_capacity=131072,zoned.max_open=0,zoned.max_active=0,bus=nvme2
> > > > 
> > > > This should create 128 zones as 16777216 / 131072 = 128. My qemu version
> > > > is on d79d797b0dd02c33dc9428123c18ae97127e967b of nvme-next.
> > > > 
> > > > I don't actually think the subsys is needed when you use bus=, that is
> > > > just something left over from trying to identify why the nvme device was
> > > > not initializing.
> > > > 
> > > > I use an Arch qcow image with kernel version 5.10.12
> > > 
> > > Thanks - I can reproduce it now.
> > > 
> > > Happens only when the subsystem is involved. Looks like a kernel issue
> > > to me since the zones are definitely there when using nvme-cli.
> > 
> > Yes, it looks like it happens when CONFIG_NVME_MULTIPATH=y and subsys is
> > given for namespace sharing.  In that case, the actual hidden namespace
> > for nvme0n1 might be nvme0c0n1.
> > 
> > lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0c0n1 -> 
> > ../devices/pci:00/:00:06.0/nvme/nvme0/nvme0c0n1/
> > lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0n1 -> 
> > ../devices/virtual/nvme-subsystem/nvme-subsys0/nvme0n1/   
> > 
> > cat /sys/block/nvme0c0n1/queue/nr_zones returns proper value.
> > 
> > > 
> > > Stuff also seems to be initialized in the kernel since blkzone report
> > > works.
> > > 
> > > Keith, this might be some fun for you :) ?
> > 
> > I also really want to ask about the head namespace policy in the
> > kernel. :)
> 
> What's the question? It looks like I'm missing some part of the context.

If multipath is enabled, the namespace head and hidden namespace will be
created.  In this case, /sys/block/nvme0n1/queue/nr_zones is not
returning the proper value for the namespace itself.  By the way, the
hidden namespace /sys/block/nvme0c0n1/queue/nr_zones is returning it
properly.

Is it okay for sysfs of the head namespace node (nvme0n1) not to manage
the request queue attributes like nr_zones?



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Keith Busch
On Fri, Feb 05, 2021 at 09:33:54PM +0900, Minwoo Im wrote:
> On 21-02-05 12:42:30, Klaus Jensen wrote:
> > On Feb  5 12:25, i...@dantalion.nl wrote:
> > > On 05-02-2021 11:39, Klaus Jensen wrote:
> > > > This is a good way to report it ;)
> > > > It is super helpful and super appreciated! Thanks!
> > > 
> > > Good to know :)
> > > 
> > > > I can't reproduce that. Can you share your QEMU configuration and kernel
> > > > version?
> > > 
> > > I create the image and launch QEMU with:
> > > qemu-img create -f raw znsssd.img 16777216
> > > 
> > > qemu-system-x86_64 -name qemuzns -m 4G -cpu Haswell -smp 2 -hda \
> > > ./arch-qemu.qcow2 -net user,hostfwd=tcp::-:22,\
> > > hostfwd=tcp::-:2000 -net nic \
> > > -drive file=./znsssd.img,id=mynvme,format=raw,if=none \
> > > -device nvme-subsys,id=subsys0 \
> > > -device nvme,serial=baz,id=nvme2,zoned.append_size_limit=131072,\
> > > subsys=subsys0 \
> > > -device nvme-ns,id=ns2,drive=mynvme,nsid=2,logical_block_size=4096,\
> > > physical_block_size=4096,zoned=true,zoned.zone_size=131072,\
> > > zoned.zone_capacity=131072,zoned.max_open=0,zoned.max_active=0,bus=nvme2
> > > 
> > > This should create 128 zones as 16777216 / 131072 = 128. My qemu version
> > > is on d79d797b0dd02c33dc9428123c18ae97127e967b of nvme-next.
> > > 
> > > I don't actually think the subsys is needed when you use bus=, that is
> > > just something left over from trying to identify why the nvme device was
> > > not initializing.
> > > 
> > > I use an Arch qcow image with kernel version 5.10.12
> > 
> > Thanks - I can reproduce it now.
> > 
> > Happens only when the subsystem is involved. Looks like a kernel issue
> > to me since the zones are definitely there when using nvme-cli.
> 
> Yes, it looks like it happens when CONFIG_NVME_MULTIPATH=y and subsys is
> given for namespace sharing.  In that case, the actual hidden namespace
> for nvme0n1 might be nvme0c0n1.
> 
> lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0c0n1 -> 
> ../devices/pci:00/:00:06.0/nvme/nvme0/nvme0c0n1/
> lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0n1 -> 
> ../devices/virtual/nvme-subsystem/nvme-subsys0/nvme0n1/   
> 
> cat /sys/block/nvme0c0n1/queue/nr_zones returns proper value.
> 
> > 
> > Stuff also seems to be initialized in the kernel since blkzone report
> > works.
> > 
> > Keith, this might be some fun for you :) ?
> 
> I also really want to ask about the head namespace policy in the
> kernel. :)

What's the question? It looks like I'm missing some part of the context.



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Minwoo Im
On 21-02-05 12:42:30, Klaus Jensen wrote:
> On Feb  5 12:25, i...@dantalion.nl wrote:
> > On 05-02-2021 11:39, Klaus Jensen wrote:
> > > This is a good way to report it ;)
> > > It is super helpful and super appreciated! Thanks!
> > 
> > Good to know :)
> > 
> > > I can't reproduce that. Can you share your QEMU configuration and kernel
> > > version?
> > 
> > I create the image and launch QEMU with:
> > qemu-img create -f raw znsssd.img 16777216
> > 
> > qemu-system-x86_64 -name qemuzns -m 4G -cpu Haswell -smp 2 -hda \
> > ./arch-qemu.qcow2 -net user,hostfwd=tcp::-:22,\
> > hostfwd=tcp::-:2000 -net nic \
> > -drive file=./znsssd.img,id=mynvme,format=raw,if=none \
> > -device nvme-subsys,id=subsys0 \
> > -device nvme,serial=baz,id=nvme2,zoned.append_size_limit=131072,\
> > subsys=subsys0 \
> > -device nvme-ns,id=ns2,drive=mynvme,nsid=2,logical_block_size=4096,\
> > physical_block_size=4096,zoned=true,zoned.zone_size=131072,\
> > zoned.zone_capacity=131072,zoned.max_open=0,zoned.max_active=0,bus=nvme2
> > 
> > This should create 128 zones as 16777216 / 131072 = 128. My qemu version
> > is on d79d797b0dd02c33dc9428123c18ae97127e967b of nvme-next.
> > 
> > I don't actually think the subsys is needed when you use bus=, that is
> > just something left over from trying to identify why the nvme device was
> > not initializing.
> > 
> > I use an Arch qcow image with kernel version 5.10.12
> 
> Thanks - I can reproduce it now.
> 
> Happens only when the subsystem is involved. Looks like a kernel issue
> to me since the zones are definitely there when using nvme-cli.

Yes, it looks like it happens when CONFIG_NVME_MULTIPATH=y and subsys is
given for namespace sharing.  In that case, the actual hidden namespace
for nvme0n1 might be nvme0c0n1.

lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0c0n1 -> 
../devices/pci:00/:00:06.0/nvme/nvme0/nvme0c0n1/
lrwxrwxrwx 1 root root 0 Feb  5 12:30 /sys/block/nvme0n1 -> 
../devices/virtual/nvme-subsystem/nvme-subsys0/nvme0n1/   

cat /sys/block/nvme0c0n1/queue/nr_zones returns proper value.

> 
> Stuff also seems to be initialized in the kernel since blkzone report
> works.
> 
> Keith, this might be some fun for you :) ?

I also really want to ask about the head namespace policy in the
kernel. :)



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Klaus Jensen
On Feb  5 12:25, i...@dantalion.nl wrote:
> On 05-02-2021 11:39, Klaus Jensen wrote:
> > This is a good way to report it ;)
> > It is super helpful and super appreciated! Thanks!
> 
> Good to know :)
> 
> > I can't reproduce that. Can you share your QEMU configuration and kernel
> > version?
> 
> I create the image and launch QEMU with:
> qemu-img create -f raw znsssd.img 16777216
> 
> qemu-system-x86_64 -name qemuzns -m 4G -cpu Haswell -smp 2 -hda \
> ./arch-qemu.qcow2 -net user,hostfwd=tcp::-:22,\
> hostfwd=tcp::-:2000 -net nic \
> -drive file=./znsssd.img,id=mynvme,format=raw,if=none \
> -device nvme-subsys,id=subsys0 \
> -device nvme,serial=baz,id=nvme2,zoned.append_size_limit=131072,\
> subsys=subsys0 \
> -device nvme-ns,id=ns2,drive=mynvme,nsid=2,logical_block_size=4096,\
> physical_block_size=4096,zoned=true,zoned.zone_size=131072,\
> zoned.zone_capacity=131072,zoned.max_open=0,zoned.max_active=0,bus=nvme2
> 
> This should create 128 zones as 16777216 / 131072 = 128. My qemu version
> is on d79d797b0dd02c33dc9428123c18ae97127e967b of nvme-next.
> 
> I don't actually think the subsys is needed when you use bus=, that is
> just something left over from trying to identify why the nvme device was
> not initializing.
> 
> I use an Arch qcow image with kernel version 5.10.12

Thanks - I can reproduce it now.

Happens only when the subsystem is involved. Looks like a kernel issue
to me since the zones are definitely there when using nvme-cli.

Stuff also seems to be initialized in the kernel since blkzone report
works.

Keith, this might be some fun for you :) ?




Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread i...@dantalion.nl
On 05-02-2021 11:39, Klaus Jensen wrote:
> This is a good way to report it ;)
> It is super helpful and super appreciated! Thanks!

Good to know :)

> I can't reproduce that. Can you share your QEMU configuration and kernel
> version?

I create the image and launch QEMU with:
qemu-img create -f raw znsssd.img 16777216

qemu-system-x86_64 -name qemuzns -m 4G -cpu Haswell -smp 2 -hda \
./arch-qemu.qcow2 -net user,hostfwd=tcp::-:22,\
hostfwd=tcp::-:2000 -net nic \
-drive file=./znsssd.img,id=mynvme,format=raw,if=none \
-device nvme-subsys,id=subsys0 \
-device nvme,serial=baz,id=nvme2,zoned.append_size_limit=131072,\
subsys=subsys0 \
-device nvme-ns,id=ns2,drive=mynvme,nsid=2,logical_block_size=4096,\
physical_block_size=4096,zoned=true,zoned.zone_size=131072,\
zoned.zone_capacity=131072,zoned.max_open=0,zoned.max_active=0,bus=nvme2

This should create 128 zones as 16777216 / 131072 = 128. My qemu version
is on d79d797b0dd02c33dc9428123c18ae97127e967b of nvme-next.
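
That arithmetic is easy to sanity-check from inside the guest (a sketch,
assuming the zoned namespace shows up as /dev/nvme0n2 -- the node name
depends on enumeration):

# 16777216 / 131072 = 128; blkzone prints one line per zone
blkzone report /dev/nvme0n2 | wc -l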

I don't actually think the subsys is needed when you use bus=, that is
just something left over from trying to identify why the nvme device was
not initializing.

I use an Arch qcow image with kernel version 5.10.12



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread Klaus Jensen
On Feb  5 11:10, i...@dantalion.nl wrote:
> Hello,
> 
> Thanks for this; I got everything working, including the new device
> types (nvme-ns, nvme-subsys). I think I have found a small bug and do
> not know where to report it.
> 

This is a good way to report it ;)

> The value of the nvme device property zoned.append_size_limit is not
> sanity-checked; you can set it to invalid values such as 128.
> 
> This will later result in errors when trying to initialize the device:
> Device not ready; aborting initialisation, CSTS=0x2
> Removing after probe failure status: -19
> 

Yeah. We can at least check that append_size_limit is at least 4k. That
might still be too small if we run on configurations with larger page
sizes, and then we can't figure that out until the device is enabled by
the host anyway. But we can make it a bit more user-friendly in the
common case.

> Additionally, `cat /sys/block/nvmeXnX/queue/nr_zones` reports 0 while
> `blkzone report /dev/nvmeXnX` clearly shows > 0 zones. Not sure if this
> is user error or a bug; it could also be a kernel bug rather than a QEMU one.
> 

I can't reproduce that. Can you share your QEMU configuration and kernel
version?

> Let me know if sharing this information is helpful or rather just
> annoying, don't want to bother anyone.
> 

It is super helpful and super appreciated! Thanks!




Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-05 Thread i...@dantalion.nl
Hello,

Thanks for this; I got everything working, including the new device
types (nvme-ns, nvme-subsys). I think I have found a small bug and do
not know where to report it.

The value of the nvme device property zoned.append_size_limit is not
sanity-checked; you can set it to invalid values such as 128.

This will later result in errors when trying to initialize the device:
Device not ready; aborting initialisation, CSTS=0x2
Removing after probe failure status: -19

Additionally, `cat /sys/block/nvmeXnX/queue/nr_zones` reports 0 while
`blkzone report /dev/nvmeXnX` clearly shows > 0 zones. Not sure if this
is user error or a bug; it could also be a kernel bug rather than a QEMU one.
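
Concretely, the mismatch can be demonstrated like this (a sketch; the
namespace node name and the expected zone count depend on the setup):

cat /sys/block/nvme0n1/queue/nr_zones   # reports 0 on the affected setup
blkzone report /dev/nvme0n1 | wc -l     # reports the actual zone count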

Let me know if sharing this information is helpful or rather just
annoying, don't want to bother anyone.

Kind regards,
Corne

On 04-02-2021 19:30, Klaus Jensen wrote:
> On Feb  4 14:32, i...@dantalion.nl wrote:
>> Hello Dmitry,
>>
>> I tried to apply your patches to nvme-next with
>> <20201104102248.32168-1-...@irrelevant.dk> as base but get quite a few
>> 'does not apply errors'.
>>
>> Can you confirm that nvme-next is:
>> git://git.infradead.org/qemu-nvme.git
>>
>> And the base <20201104102248.32168-1-...@irrelevant.dk> is at:
>> 73ad0ff216d2e1cf08909a0597e7b072babfe9c4
>>
>> Otherwise if I have made any mistake could you please indicate where, I
>> am sure I am doing something wrong here. Sorry for being a nuisance.
>>
> 
> Hi,
> 
> That series is already merged in nvme-next. Use the 'nvme-next' branch
> from git://git.infradead.org/qemu-nvme.git.
> 
> 
> Cheers,
> Klaus
> 



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-04 Thread Klaus Jensen
On Feb  4 14:32, i...@dantalion.nl wrote:
> Hello Dmitry,
> 
> I tried to apply your patches to nvme-next with
> <20201104102248.32168-1-...@irrelevant.dk> as base but get quite a few
> 'does not apply errors'.
> 
> Can you confirm that nvme-next is:
> git://git.infradead.org/qemu-nvme.git
> 
> And the base <20201104102248.32168-1-...@irrelevant.dk> is at:
> 73ad0ff216d2e1cf08909a0597e7b072babfe9c4
> 
> Otherwise if I have made any mistake could you please indicate where, I
> am sure I am doing something wrong here. Sorry for being a nuisance.
> 

Hi,

That series is already merged in nvme-next. Use the 'nvme-next' branch
from git://git.infradead.org/qemu-nvme.git.


Cheers,
Klaus




[PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-02-04 Thread i...@dantalion.nl
Hello Dmitry,

I tried to apply your patches to nvme-next with
<20201104102248.32168-1-...@irrelevant.dk> as base but get quite a few
'does not apply errors'.

Can you confirm that nvme-next is:
git://git.infradead.org/qemu-nvme.git

And the base <20201104102248.32168-1-...@irrelevant.dk> is at:
73ad0ff216d2e1cf08909a0597e7b072babfe9c4

Otherwise if I have made any mistake could you please indicate where, I
am sure I am doing something wrong here. Sorry for being a nuisance.

Kind regards,
Corne



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2021-01-07 Thread Klaus Jensen
On Dec  9 10:57, Klaus Jensen wrote:
> Hi Dmitry,
> 
> By and large, this looks OK to me. There are still some issues here and
> there, and some comments of mine that you did not address, but I will
> follow up with patches to fix that. Let's get this merged.
> 
> It looks like the nvme-next you rebased on is slightly old and missing
> two commits:
> 
>   "hw/block/nvme: remove superfluous NvmeCtrl parameter" and
>   "hw/block/nvme: pull aio error handling"
> 
> It caused a couple of conflicts, but nothing that I couldn't fix up.
> 
> Since I didn't manage to convince anyone about the zsze and zcap
> parameters being in terms of LBAs, I'll revert that to be
> 'zoned.zone_size' and 'zoned.zone_capacity'.
> 
> Finally, would you accept that we skip "hw/block/nvme: Add injection of
> Offline/Read-Only zones" for now? I'd like to discuss it a bit since I
> think the random injects feel a bit ad-hoc. Back when I did OCSSD
> emulation with Hans, we did something like this for setting up state
> through a descriptor text file - I think we should explore something
> like that before we lock down the two parameters. I'll amend the final
> documentation commit to not include those parameters.
> 
> Sounds good?
> 
> Otherwise, I think this is mergeable to nvme-next. So, for the series
> (excluding "hw/block/nvme: Add injection of Offline/Read-Only zones"):
> 
> Reviewed-by: Klaus Jensen 
> 

I've applied this series to my local nvme-next. Our repo host is
unavailable this morning (infradead.org), but I will push as soon as
possible.


Thanks!
Klaus




Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2020-12-15 Thread Keith Busch
Hi Dmitry,

Looks good to me, thanks for sticking with it.

Reviewed-by: Keith Busch 



Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2020-12-10 Thread Klaus Jensen
On Dec 10 19:25, Dmitry Fomichev wrote:
> > -----Original Message-----
> > From: Klaus Jensen 
> > Sent: Wednesday, December 9, 2020 4:58 AM
> > To: Dmitry Fomichev 
> > Cc: Keith Busch ; Klaus Jensen
> > ; Kevin Wolf ; Philippe
> > Mathieu-Daudé ; Max Reitz ;
> > Maxim Levitsky ; Fam Zheng ;
> > Niklas Cassel ; Damien Le Moal
> > ; qemu-block@nongnu.org; qemu-
> > de...@nongnu.org; Alistair Francis ; Matias
> > Bjorling 
> > Subject: Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types
> > and Zoned Namespace Command Set
> > 
> > Hi Dmitry,
> > 
> > By and large, this looks OK to me. There are still some issues here and
> > there, and some comments of mine that you did not address, but I will
> > follow up with patches to fix that. Let's get this merged.
> > 
> > It looks like the nvme-next you rebased on is slightly old and missing
> > two commits:
> > 
> >   "hw/block/nvme: remove superfluous NvmeCtrl parameter" and
> >   "hw/block/nvme: pull aio error handling"
> > 
> > It caused a couple of conflicts, but nothing that I couldn't fix up.
> > 
> > Since I didn't manage to convince anyone about the zsze and zcap
> > parameters being in terms of LBAs, I'll revert that to be
> > 'zoned.zone_size' and 'zoned.zone_capacity'.
> > 
> > Finally, would you accept that we skip "hw/block/nvme: Add injection of
> > Offline/Read-Only zones" for now? I'd like to discuss it a bit since I
> > think the random injects feel a bit ad-hoc. Back when I did OCSSD
> > emulation with Hans, we did something like this for setting up state
> > through a descriptor text file - I think we should explore something
> > like that before we lock down the two parameters. I'll amend the final
> > documentation commit to not include those parameters.
> > 
> > Sounds good?
> 
> Klaus,
> 
> Sounds great! Sure, we can leave out the injection patch. It was made
> to increase our internal test coverage, but it is not ideal. Since the zones
> are injected randomly, there is no consistency between test runs and
> it is impossible to reliably create many specific test cases (e.g. the first
> or the last zone is offline).

Yes, exactly.

> The descriptor input file seems like a much more
> flexible and capable approach. If you have something in works, I'll be
> happy to discuss or review.
> 

Sure, I'll rip some stuff from OCSSD and cook up a patch.

> Thank you for your very thorough reviews!
> 

Thanks for contributing this.

Keith, you wanna take a look and give this an Ack or so?




RE: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2020-12-10 Thread Dmitry Fomichev
> -----Original Message-----
> From: Klaus Jensen 
> Sent: Wednesday, December 9, 2020 4:58 AM
> To: Dmitry Fomichev 
> Cc: Keith Busch ; Klaus Jensen
> ; Kevin Wolf ; Philippe
> Mathieu-Daudé ; Max Reitz ;
> Maxim Levitsky ; Fam Zheng ;
> Niklas Cassel ; Damien Le Moal
> ; qemu-block@nongnu.org; qemu-
> de...@nongnu.org; Alistair Francis ; Matias
> Bjorling 
> Subject: Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types
> and Zoned Namespace Command Set
> 
> Hi Dmitry,
> 
> By and large, this looks OK to me. There are still some issues here and
> there, and some comments of mine that you did not address, but I will
> follow up with patches to fix that. Let's get this merged.
> 
> It looks like the nvme-next you rebased on is slightly old and missing
> two commits:
> 
>   "hw/block/nvme: remove superfluous NvmeCtrl parameter" and
>   "hw/block/nvme: pull aio error handling"
> 
> It caused a couple of conflicts, but nothing that I couldn't fix up.
> 
> Since I didn't manage to convince anyone about the zsze and zcap
> parameters being in terms of LBAs, I'll revert that to be
> 'zoned.zone_size' and 'zoned.zone_capacity'.
> 
> Finally, would you accept that we skip "hw/block/nvme: Add injection of
> Offline/Read-Only zones" for now? I'd like to discuss it a bit since I
> think the random injects feel a bit ad-hoc. Back when I did OCSSD
> emulation with Hans, we did something like this for setting up state
> through a descriptor text file - I think we should explore something
> like that before we lock down the two parameters. I'll amend the final
> documentation commit to not include those parameters.
> 
> Sounds good?

Klaus,

Sounds great! Sure, we can leave out the injection patch. It was made
to increase our internal test coverage, but it is not ideal. Since the zones
are injected randomly, there is no consistency between test runs and
it is impossible to reliably create many specific test cases (e.g. the first or
the last zone is offline). The descriptor input file seems like a much more
flexible and capable approach. If you have something in works, I'll be
happy to discuss or review.

Thank you for your very thorough reviews!

Cheers,
Dmitry

> 
> Otherwise, I think this is mergeable to nvme-next. So, for the series
> (excluding "hw/block/nvme: Add injection of Offline/Read-Only zones"):
> 
> Reviewed-by: Klaus Jensen 
> 
> On Dec  9 05:03, Dmitry Fomichev wrote:
> > v10 -> v11:
> >
> >  - Address review comments by Klaus.
> >
> >  - Add a patch to separate the handling of controller reset
> >and subsystem shutdown. Place the patch at the beginning
> >of the series so it can be picked up separately.
> >
> >  - Rebase on the current nvme-next branch.
> >
> > v9 -> v10:
> >
> >  - Correctly check for MDTS in Zone Management Receive handler.
> >
> >  - Change Klaus' "Reviewed-by" email in UUID patch.
> >
> > v8 -> v9:
> >
> >  - Move the modifications to "include/block/nvme.h" made to
> >introduce ZNS-related definitions to a separate patch.
> >
> >  - Add a new struct, NvmeZonedResult, along the same lines as the
> >existing NvmeAerResult, to carry Zone Append LBA returned to
> >the host. Now, there is no need to modify NvmeCqe struct except
> >renaming DW1 field from "rsvd" to "dw1".
> >
> >  - Add check for MDTS in Zone Management Receive handler.
> >
> >  - Remove checks for ns->attached since the value of this flag
> >is always true for now.
> >
> >  - Rebase to the current qemu-nvme/nvme-next branch.
> >
> > v7 -> v8:
> >
> >  - Move refactoring commits to the front of the series.
> >
> >  - Remove "attached" and "fill_pattern" device properties.
> >
> >  - Only close open zones upon subsystem shutdown, not when CC.EN flag
> >is set to 0. Avoid looping through all zones by iterating through
> >lists of open and closed zones.
> >
> >  - Improve bulk processing of zones aka zoned operations with "all"
> >flag set. Avoid looping through the entire zone array for all zone
> >operations except Offline Zone.
> >
> >  - Prefix ZNS-related property names with "zoned.". The "zoned" Boolean
> >property is retained to turn on zoned command set as it is much more
> >intuitive and user-friendly compared to setting a magic number value
> >to csi property.
> >
> >  - Address review comments.
> >
> >  - Remove unused trace events.

Re: [PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2020-12-09 Thread Klaus Jensen
Hi Dmitry,

By and large, this looks OK to me. There are still some issues here and
there, and some comments of mine that you did not address, but I will
follow up with patches to fix that. Let's get this merged.

It looks like the nvme-next you rebased on is slightly old and missing
two commits:

  "hw/block/nvme: remove superfluous NvmeCtrl parameter" and
  "hw/block/nvme: pull aio error handling"

It caused a couple of conflicts, but nothing that I couldn't fix up.

Since I didn't manage to convince anyone about the zsze and zcap
parameters being in terms of LBAs, I'll revert that to be
'zoned.zone_size' and 'zoned.zone_capacity'.

Finally, would you accept that we skip "hw/block/nvme: Add injection of
Offline/Read-Only zones" for now? I'd like to discuss it a bit since I
think the random injects feel a bit ad-hoc. Back when I did OCSSD
emulation with Hans, we did something like this for setting up state
through a descriptor text file - I think we should explore something
like that before we lock down the two parameters. I'll amend the final
documentation commit to not include those parameters.
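
Nothing concrete exists yet, but to illustrate the descriptor-file idea,
something along these lines could work (a purely hypothetical sketch --
the file name, keywords, and syntax are all made up, not an implemented
interface):

# hypothetical zone state descriptor, consumed at device setup
cat > zone-state.txt <<'EOF'
zone 0   offline
zone 127 read-only
EOF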

Sounds good?

Otherwise, I think this is mergeable to nvme-next. So, for the series
(excluding "hw/block/nvme: Add injection of Offline/Read-Only zones"):

Reviewed-by: Klaus Jensen 

On Dec  9 05:03, Dmitry Fomichev wrote:
> v10 -> v11:
> 
>  - Address review comments by Klaus.
> 
>  - Add a patch to separate the handling of controller reset
>and subsystem shutdown. Place the patch at the beginning
>of the series so it can be picked up separately.
> 
>  - Rebase on the current nvme-next branch.
> 
> v9 -> v10:
> 
>  - Correctly check for MDTS in Zone Management Receive handler.
> 
>  - Change Klaus' "Reviewed-by" email in UUID patch.
> 
> v8 -> v9:
> 
>  - Move the modifications to "include/block/nvme.h" made to
>introduce ZNS-related definitions to a separate patch.
> 
>  - Add a new struct, NvmeZonedResult, along the same lines as the
>existing NvmeAerResult, to carry Zone Append LBA returned to
>the host. Now, there is no need to modify NvmeCqe struct except
>renaming DW1 field from "rsvd" to "dw1".
> 
>  - Add check for MDTS in Zone Management Receive handler.
> 
>  - Remove checks for ns->attached since the value of this flag
>is always true for now.
> 
>  - Rebase to the current qemu-nvme/nvme-next branch.
> 
> v7 -> v8:
> 
>  - Move refactoring commits to the front of the series.
> 
>  - Remove "attached" and "fill_pattern" device properties.
> 
>  - Only close open zones upon subsystem shutdown, not when CC.EN flag
>is set to 0. Avoid looping through all zones by iterating through
>lists of open and closed zones.
> 
>  - Improve bulk processing of zones aka zoned operations with "all"
>flag set. Avoid looping through the entire zone array for all zone
>operations except Offline Zone.
> 
>  - Prefix ZNS-related property names with "zoned.". The "zoned" Boolean
>property is retained to turn on zoned command set as it is much more
>intuitive and user-friendly compared to setting a magic number value
>to csi property.
> 
>  - Address review comments.
> 
>  - Remove unused trace events.
> 
> v6 -> v7:
> 
>  - Introduce ns->iocs initialization function earlier in the series,
>in CSE Log patch.
> 
>  - Set NVM iocs for zoned namespaces when CC.CSS is set to
>NVME_CC_CSS_NVM.
> 
>  - Clean up code in CSE log handler.
>  
> v5 -> v6:
> 
>  - Remove zoned state persistence code. Replace position-independent
>zone lists with QTAILQs.
> 
>  - Close all open zones upon clearing of the controller. This is
>a similar procedure to the one previously performed upon powering
>up with zone persistence. 
> 
>  - Squash NS Types and ZNS triplets of commits to keep definitions
>and trace event definitions together with the implementation code.
> 
>  - Move namespace UUID generation to a separate patch. Add the new
>"uuid" property as suggested by Klaus.
> 
>  - Rework Commands and Effects patch to make sure that the log is
>always in sync with the actual set of commands supported.
> 
>  - Add two refactoring commits at the end of the series to
>optimize read and write i/o path.
> 
> - Incorporate feedback from Keith, Klaus and Niklas:
> 
>   * fix rebase errors in nvme_identify_ns_descr_list()
>   * remove unnecessary code from nvme_write_bar()
>   * move csi to NvmeNamespace and use it from the beginning in NSTypes
> patch
>   * change zone read processing to cover all corner cases with RAZB=1
>   * sync w_ptr and d.wp in case of an i/o error at the preceding zone
>   * reword the commit message in active/inactive patch with the new
> text from Niklas
>   * correct dlfeat reporting depending on the fill pattern set
>   * add more checks for "attached" n/s parameter to prevent i/o and
> get/set features on inactive namespaces
>   * Use DEFINE_PROP_SIZE and DEFINE_PROP_SIZE32 for zone size/capacity
> and ZASL respectively

[PATCH v11 00/13] hw/block/nvme: Support Namespace Types and Zoned Namespace Command Set

2020-12-08 Thread Dmitry Fomichev
v10 -> v11:

 - Address review comments by Klaus.

 - Add a patch to separate the handling of controller reset
   and subsystem shutdown. Place the patch at the beginning
   of the series so it can be picked up separately.

 - Rebase on the current nvme-next branch.

v9 -> v10:

 - Correctly check for MDTS in Zone Management Receive handler.

 - Change Klaus' "Reviewed-by" email in UUID patch.

v8 -> v9:

 - Move the modifications to "include/block/nvme.h" made to
   introduce ZNS-related definitions to a separate patch.

 - Add a new struct, NvmeZonedResult, along the same lines as the
   existing NvmeAerResult, to carry Zone Append LBA returned to
   the host. Now, there is no need to modify NvmeCqe struct except
   renaming DW1 field from "rsvd" to "dw1".

 - Add check for MDTS in Zone Management Receive handler.

 - Remove checks for ns->attached since the value of this flag
   is always true for now.

 - Rebase to the current qemu-nvme/nvme-next branch.

v7 -> v8:

 - Move refactoring commits to the front of the series.

 - Remove "attached" and "fill_pattern" device properties.

 - Only close open zones upon subsystem shutdown, not when CC.EN flag
   is set to 0. Avoid looping through all zones by iterating through
   lists of open and closed zones.

 - Improve bulk processing of zones aka zoned operations with "all"
   flag set. Avoid looping through the entire zone array for all zone
   operations except Offline Zone.

 - Prefix ZNS-related property names with "zoned.". The "zoned" Boolean
   property is retained to turn on zoned command set as it is much more
   intuitive and user-friendly compared to setting a magic number value
   to csi property.

 - Address review comments.

 - Remove unused trace events.

v6 -> v7:

 - Introduce ns->iocs initialization function earlier in the series,
   in CSE Log patch.

 - Set NVM iocs for zoned namespaces when CC.CSS is set to
   NVME_CC_CSS_NVM.

 - Clean up code in CSE log handler.
 
v5 -> v6:

 - Remove zoned state persistence code. Replace position-independent
   zone lists with QTAILQs.

 - Close all open zones upon clearing of the controller. This is
   a similar procedure to the one previously performed upon powering
   up with zone persistence. 

 - Squash NS Types and ZNS triplets of commits to keep definitions
   and trace event definitions together with the implementation code.

 - Move namespace UUID generation to a separate patch. Add the new
   "uuid" property as suggested by Klaus.

 - Rework Commands and Effects patch to make sure that the log is
   always in sync with the actual set of commands supported.

 - Add two refactoring commits at the end of the series to
   optimize read and write i/o path.

- Incorporate feedback from Keith, Klaus and Niklas:

  * fix rebase errors in nvme_identify_ns_descr_list()
  * remove unnecessary code from nvme_write_bar()
  * move csi to NvmeNamespace and use it from the beginning in NSTypes
patch
  * change zone read processing to cover all corner cases with RAZB=1
  * sync w_ptr and d.wp in case of an i/o error at the preceding zone
  * reword the commit message in active/inactive patch with the new
text from Niklas
  * correct dlfeat reporting depending on the fill pattern set
  * add more checks for "attached" n/s parameter to prevent i/o and
get/set features on inactive namespaces
  * Use DEFINE_PROP_SIZE and DEFINE_PROP_SIZE32 for zone size/capacity
and ZASL respectively
  * Improve zone size and capacity validation
  * Correctly report NSZE

v4 -> v5:

 - Rebase to the current qemu-nvme.

 - Use HostMemoryBackendFile as the backing storage for persistent
   zone metadata.

 - Fix the issue with filling the valid data in the next zone if RAZB
   is enabled.

v3 -> v4:

 - Fix bugs introduced in v2/v3 for QD > 1 operation. Now, all writes
   to a zone happen at the new write pointer variable, zone->w_ptr,
   that is advanced right after submitting the backend i/o. The existing
   zone->d.wp variable is updated upon the successful write completion
   and it is used for zone reporting. Some code has been split from
   nvme_finalize_zoned_write() function to a new function,
   nvme_advance_zone_wp().

 - Make the code compile under mingw. Switch to using QEMU API for
   mmap/msync, i.e. memory_region...(). Since mmap is not available in
   mingw (even though there is mman-win32 library available on Github),
   conditional compilation is added around these calls to avoid
   undefined symbols under mingw. A better fix would be to add stub
   functions to softmmu/memory.c for the case when CONFIG_POSIX is not
   defined, but such change is beyond the scope of this patchset and it
   can be made in a separate patch.

 - Correct permission mask used to open zone metadata file.

 - Fold "Define 64 bit cqe.result" patch into ZNS commit.

 - Use clz64/clz32 instead of defining nvme_ilog2() function.

 - Simplify rpt_empty_id_struct() code, move nvme_fill_data() back
   to ZNS patch.

 - Fix a