Hi Kevin,

On 11/25/24 18:17, Kevin Traynor wrote:
Hi Maxime,

On 07/11/2024 17:51, Maxime Coquelin wrote:
This patch uses the new rte_vhost_driver_set_max_queue_num
API to set the maximum number of queue pairs supported by
the Vhost-user port.

This is required for VDUSE, which needs to specify the
maximum number of queue pairs at creation time. Without it,
metadata for 128 queue pairs would be allocated.


This reuses options:n_rxq, which is used to set the number of rxqs on
phy devices.

For phy devices, n_rxq is the actual number of queues that are
configured (or at least attempted); they are then used insofar as they
are created and polled by the pmd cores. So there is some semantic
difference in the usage for VDUSE.

Yes, I agree.
In the case of VDUSE, it is the maximum number of queue pairs the
device will support, but the driver side may use fewer.
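To illustrate the intended usage (a sketch only, not the actual OVS
code; the VDUSE path and the queue pair count are placeholders):

    #include <rte_vhost.h>

    /* Sketch: cap the number of queue pairs for a VDUSE device.  The
     * cap has to be set before rte_vhost_driver_start(), which is when
     * the device (and its queue pair metadata) gets created. */
    static int
    vduse_setup_example(void)
    {
        const char *path = "/dev/vduse/vduse-0";    /* placeholder */
        int err;

        err = rte_vhost_driver_register(path, 0);
        if (err) {
            return err;
        }

        err = rte_vhost_driver_set_max_queue_num(path, 4);
        if (err) {
            rte_vhost_driver_unregister(path);
            return err;
        }

        return rte_vhost_driver_start(path);
    }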

I tested a bit with vhost-user (even though it's a noop there), just to
see the effect of changing n_rxq; see comments below.

Indeed, this is a noop in the case of Vhost-user.


Signed-off-by: Maxime Coquelin <[email protected]>
---
Changes in v2:
==============
- Address checkpatch warnings.

---
  lib/netdev-dpdk.c | 12 ++++++++++++
  1 file changed, 12 insertions(+)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index e454a4a5d..966063ea7 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -2506,6 +2506,9 @@ netdev_dpdk_vhost_client_set_config(struct netdev *netdev,
          VLOG_INFO("Max Tx retries for vhost device '%s' set to %d",
                    netdev_get_name(netdev), max_tx_retries);
      }
+
+    dpdk_set_rxq_config(dev, args);
+
      ovs_mutex_unlock(&dev->mutex);

      return 0;
@@ -6298,6 +6301,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
          uint64_t virtio_unsup_features = 0;
          uint64_t vhost_flags = 0;
          bool enable_tso;
+        int nr_qp;

          enable_tso = userspace_tso_enabled()
                       && dev->virtio_features_state & OVS_VIRTIO_F_CLEAN;
@@ -6371,6 +6375,14 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)
              goto unlock;
          }
+        nr_qp = MAX(dev->requested_n_rxq, dev->requested_n_txq);
+        err = rte_vhost_driver_set_max_queue_num(dev->vhost_id, nr_qp);
+        if (err) {
+            VLOG_ERR("rte_vhost_driver_set_max_queue_num failed for "
+                    "vhost-user client port: %s\n", dev->up.name);
+            goto unlock;
+        }
+

requested_n_rxq defaults to NR_QUEUE (1) and this code won't be reached
again after the device is registered (minus one exception), so it seems
not to pick up the n_rxq value.

Also, n_rxq can be changed on the fly at any time, which I suppose will
not be possible here.

Right, the implementation is bogus in this version, and I agree reusing
n_rxq might not be the best way. A dedicated option might be more
appropriate.

I'm sending a new version introducing a new option.
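Roughly along the lines of the sketch below (the option name
"vhost-max-queue-pairs" and the requested_max_qp field are
hypothetical, just to show the shape; it assumes the usual
netdev-dpdk.c helpers, and is not necessarily what the new version
will do):

    /* Hypothetical dedicated option, parsed from the Interface's
     * options column in *_set_config().  The option name and the
     * requested_max_qp field are illustrative only. */
    static void
    dpdk_vhost_set_max_qp_config(struct netdev_dpdk *dev,
                                 const struct smap *args)
        OVS_REQUIRES(dev->mutex)
    {
        int max_qp = smap_get_int(args, "vhost-max-queue-pairs", NR_QUEUE);

        if (max_qp != dev->requested_max_qp) {
            dev->requested_max_qp = max_qp;
            netdev_request_reconfigure(&dev->up);
        }
    }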

Thanks,
Maxime


For vhost-user this code has no effect, but we are still calling it and
logging an error on failure. I didn't look closely at VDUSE, but if we
can't differentiate at this point, we can at least add some comments to
explain.
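E.g. something at the call site along these lines (wording is only a
suggestion):

    /* Setting the max queue pair count is only meaningful for VDUSE,
     * where the queue pair metadata is allocated at device creation
     * time; for vhost-user sockets the call is a no-op, so a failure
     * here is unexpected. */
    err = rte_vhost_driver_set_max_queue_num(dev->vhost_id, nr_qp);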

thanks,
Kevin.


          err = rte_vhost_driver_start(dev->vhost_id);
          if (err) {
              VLOG_ERR("rte_vhost_driver_start failed for vhost user "

