On 14/04/2026 12:43, Nilay Shroff wrote:

About the queue-depth iopolicy: why is the depth tracked per controller and not per NS (path)? The following patch does not mention it:

https://lore.kernel.org/linux-nvme/20240625122605.857462-3-[email protected]/

Is the idea that a controller may have other namespaces attached with traffic of their own, and we need to account for that as well?

Yes, the idea is that congestion should be evaluated at the controller level rather than per namespace. In NVMe, multiple namespaces can be attached to the same controller, and all of them share the same transport path and I/O queue resources (submission and completion queues). As a result, any contention or congestion is fundamentally observed at the controller, not at an individual namespace.

If we were to track queue depth per namespace, it would give a misleading view of the actual load on the underlying path, since multiple namespaces may be contributing to the same set of queues. In contrast, tracking queue depth per controller provides a more accurate representation of the total outstanding I/O, and hence of the level of congestion, on that path.

In a multipath configuration, this allows us to compare controllers directly. For example, if one controller has a lower queue depth than another, it is likely experiencing less contention and may offer lower latency,
making it a better candidate for forwarding I/O.

ok, thanks for the info. So on this basis I would think that the SCSI host would be the place to track requests for scsi-multipath. I need to consider it more... Hannes, thoughts?

John
