Currently, there are a few inconsistencies between the dpif-netdev
and netdev layers:
* dpif-netdev can't know the exact number of tx queues
  allocated inside a netdev.
  This leads to constant remapping of queue-ids to 'real' ones
  (see the sketch after this list).
* dpif-netdev is able to change the number of tx queues while
  it knows nothing about the real hardware or the number of queues
  allocated in the VM.
  This complicates reconfiguration of vhost-user ports, because
  setting 'n_txq' from different sources (dpif-netdev and the
  'new_device()' call) requires additional synchronization
  between these two layers.
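For illustration, here is a minimal sketch of that constant remapping
on the send path, taken from the netdev_dpdk_send__() hunk below (the
patch replaces 'real_n_txq' with the netdev's own 'up.n_txq'):

    if (OVS_UNLIKELY(dev->txq_needs_locking)) {
        /* Remap the caller's queue id onto the range of tx queues
         * that the device really has. */
        qid = qid % dev->real_n_txq;
        rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
    }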
Also, we are able to configure 'n_rxq' for vhost-user devices, but
there is only one sane number of rx queues, and it must be
configured manually: the number of queues allocated in QEMU.
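For reference, that number is fixed by QEMU's multiqueue setup ('$q'
queues, '$v = $q x 2 + 2' vectors); a typical invocation looks like
the following (the chardev name here is illustrative):

    -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce,queues=$q
    -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=$v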
This patch moves all queue configuration to the netdev layer and
disables configuration of 'n_rxq' for vhost devices.
The rx and tx queue configuration is now automatically applied from
the connected virtio device; the standard reconfiguration mechanism
is used to apply these changes.
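A minimal sketch of the new flow, mirroring the new_device() hunk
below: the queue pair count of the connected virtio device simply
becomes the requested configuration, and a reconfiguration is
requested:

    uint32_t qp_num = virtio_dev->virt_qp_nb;

    dev->requested_n_rxq = qp_num;
    dev->requested_n_txq = qp_num;
    netdev_request_reconfigure(&dev->up);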
The number of tx queues for physical ports now defaults to
'n_cores + 1', and the old 'needs_locking' logic is preserved.
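Sketch of the preserved locking decision (see the dpdk_init__() and
netdev_dpdk_reconfigure() hunks below): 'max_tx_queue_id' is taken
once from ovs_numa_get_n_cores(), and a device needs tx locking
whenever it has no more tx queues than there may be senders:

    max_tx_queue_id = ovs_numa_get_n_cores();

    dev->txq_needs_locking = netdev->n_txq <= max_tx_queue_id;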
For dummy-pmd ports, a new undocumented option 'n_txq' is introduced
to configure the number of tx queues.
Ex.:
ovs-vsctl set interface dummy-pmd0 options:n_txq=32
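Internally (see the netdev-dummy.c hunk below) the option is parsed
in the dummy set_config callback and clamped to at least one queue:

    new_n_txq = MAX(smap_get_int(args, "n_txq", netdev->requested_n_txq), 1);
    if (new_n_txq != netdev->requested_n_txq) {
        netdev->requested_n_txq = new_n_txq;
        netdev_request_reconfigure(netdev_);
    }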
Signed-off-by: Ilya Maximets <[email protected]>
---
INSTALL.DPDK-ADVANCED.md | 26 +++-----
NEWS | 2 +
lib/dpif-netdev.c | 31 ++-------
lib/netdev-bsd.c | 1 -
lib/netdev-dpdk.c | 162 +++++++++++++++++++----------------------------
lib/netdev-dummy.c | 31 ++-------
lib/netdev-linux.c | 1 -
lib/netdev-provider.h | 16 -----
lib/netdev-vport.c | 1 -
lib/netdev.c | 30 ---------
lib/netdev.h | 1 -
vswitchd/vswitch.xml | 3 +-
12 files changed, 90 insertions(+), 215 deletions(-)
diff --git a/INSTALL.DPDK-ADVANCED.md b/INSTALL.DPDK-ADVANCED.md
index ec47e26..9ae536d 100644
--- a/INSTALL.DPDK-ADVANCED.md
+++ b/INSTALL.DPDK-ADVANCED.md
@@ -246,16 +246,13 @@ needs to be affinitized accordingly.
NIC port0 <-> OVS <-> VM <-> OVS <-> NIC port 1
-### 4.3 DPDK port Rx Queues
+### 4.3 DPDK physical port Rx Queues
`ovs-vsctl set Interface <DPDK interface> options:n_rxq=<integer>`
- The command above sets the number of rx queues for DPDK interface.
+ The command above sets the number of rx queues for DPDK physical interface.
The rx queues are assigned to pmd threads on the same NUMA node in a
- round-robin fashion. For more information, please refer to the
- Open_vSwitch TABLE section in
-
- `man ovs-vswitchd.conf.db`
+ round-robin fashion.
### 4.4 Exact Match Cache
@@ -454,16 +451,8 @@ DPDK 16.04 supports two types of vhost:
3. Enable multiqueue support(OPTIONAL)
- The vhost-user interface must be configured in Open vSwitch with the
- desired amount of queues with:
-
- ```
- ovs-vsctl set Interface vhost-user-2 options:n_rxq=<requested queues>
- ```
-
- QEMU needs to be configured as well.
- The $q below should match the queues requested in OVS (if $q is more,
- packets will not be received).
+ QEMU needs to be configured to use multiqueue.
+ The $q below is the number of queues.
The $v is the number of vectors, which is '$q x 2 + 2'.
```
@@ -472,6 +461,11 @@ DPDK 16.04 supports two types of vhost:
-device
virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mq=on,vectors=$v
```
+ The vhost-user interface will be automatically reconfigured with the
+ required number of rx and tx queues after connection of the virtio device.
+ Manual configuration of `n_rxq` is not supported because OVS will work
+ properly only if `n_rxq` matches the number of queues configured in QEMU.
+
A least 2 PMDs should be configured for the vswitch when using
multiqueue.
Using a single PMD will cause traffic to be enqueued to the same vhost
queue rather than being distributed among different vhost queues for a
diff --git a/NEWS b/NEWS
index f7b202b..a6d4035 100644
--- a/NEWS
+++ b/NEWS
@@ -37,6 +37,8 @@ Post-v2.5.0
- DPDK:
* New option "n_rxq" for PMD interfaces.
Old 'other_config:n-dpdk-rxqs' is no longer supported.
+ Not supported by vHost interfaces. For them, the number of rx and tx
+ queues is applied from the connected virtio device.
* New appctl command 'dpif-netdev/pmd-rxq-show' to check the port/rxq
assignment.
* Type of log messages from PMD threads changed from INFO to DBG.
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 37c2631..1c5d6a1 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -442,8 +442,10 @@ struct dp_netdev_pmd_thread {
pthread_t thread;
unsigned core_id; /* CPU core id of this pmd thread. */
int numa_id; /* numa node id of this pmd thread. */
- atomic_int tx_qid; /* Queue id used by this pmd thread to
- * send packets on all netdevs */
+
+ /* Queue id used by this pmd thread to send packets on all netdevs.
+ * All tx_qid's are unique and less than 'ovs_numa_get_n_cores() + 1'. */
+ atomic_int tx_qid;
struct ovs_mutex port_mutex; /* Mutex for 'poll_list' and 'tx_ports'. */
/* List of rx queues to poll. */
@@ -1153,31 +1155,6 @@ port_create(const char *devname, const char *open_type, const char *type,
goto out;
}
- if (netdev_is_pmd(netdev)) {
- int n_cores = ovs_numa_get_n_cores();
-
- if (n_cores == OVS_CORE_UNSPEC) {
- VLOG_ERR("%s, cannot get cpu core info", devname);
- error = ENOENT;
- goto out;
- }
- /* There can only be ovs_numa_get_n_cores() pmd threads,
- * so creates a txq for each, and one extra for the non
- * pmd threads. */
- error = netdev_set_tx_multiq(netdev, n_cores + 1);
- if (error && (error != EOPNOTSUPP)) {
- VLOG_ERR("%s, cannot set multiq", devname);
- goto out;
- }
- }
-
- if (netdev_is_reconf_required(netdev)) {
- error = netdev_reconfigure(netdev);
- if (error) {
- goto out;
- }
- }
-
port = xzalloc(sizeof *port);
port->port_no = port_no;
port->netdev = netdev;
diff --git a/lib/netdev-bsd.c b/lib/netdev-bsd.c
index 2e92d97..becff43 100644
--- a/lib/netdev-bsd.c
+++ b/lib/netdev-bsd.c
@@ -1500,7 +1500,6 @@ netdev_bsd_update_flags(struct netdev *netdev_, enum netdev_flags off,
NULL, /* push header */ \
NULL, /* pop header */ \
NULL, /* get_numa_id */ \
- NULL, /* set_tx_multiq */ \
\
netdev_bsd_send, \
netdev_bsd_send_wait, \
diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 8bb33d6..6687960 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -136,7 +136,8 @@ BUILD_ASSERT_DECL((MAX_NB_MBUF / ROUND_DOWN_POW2(MAX_NB_MBUF/MIN_NB_MBUF))
#define OVS_VHOST_QUEUE_MAP_UNKNOWN (-1) /* Mapping not initialized. */
#define OVS_VHOST_QUEUE_DISABLED (-2) /* Queue was disabled by guest and not
* yet mapped to another queue. */
-
+static int max_tx_queue_id; /* Maximum index of a TX queue that may be
+ * used by a caller of netdev_send(). */
#ifdef VHOST_CUSE
static char *cuse_dev_name = NULL; /* Character device cuse_dev_name. */
#endif
@@ -349,12 +350,10 @@ struct netdev_dpdk {
struct rte_eth_link link;
int link_reset_cnt;
- /* The user might request more txqs than the NIC has. We remap those
- * ('up.n_txq') on these ('real_n_txq').
- * If the numbers match, 'txq_needs_locking' is false, otherwise it is
- * true and we will take a spinlock on transmission */
- int real_n_txq;
- int real_n_rxq;
+ /* Caller of netdev_send() might want to use more txqs than the NIC has.
+ * If 'max_tx_queue_id' is less than 'up.n_txq', 'txq_needs_locking'
+ * is false, otherwise it is true and we will take a spinlock on
+ * transmission. */
bool txq_needs_locking;
/* virtio-net structure for vhost device */
@@ -627,7 +626,7 @@ dpdk_eth_dev_queue_setup(struct netdev_dpdk *dev, int n_rxq, int n_txq)
}
dev->up.n_rxq = n_rxq;
- dev->real_n_txq = n_txq;
+ dev->up.n_txq = n_txq;
return 0;
}
@@ -767,20 +766,21 @@ netdev_dpdk_init(struct netdev *netdev, unsigned int port_no,
dev->policer_rate = 0;
dev->policer_burst = 0;
- netdev->n_txq = NR_QUEUE;
netdev->n_rxq = NR_QUEUE;
- dev->requested_n_rxq = NR_QUEUE;
- dev->requested_n_txq = NR_QUEUE;
- dev->real_n_txq = NR_QUEUE;
+ netdev->n_txq = (type == DPDK_DEV_ETH) ? (max_tx_queue_id + 1) : NR_QUEUE;
+ dev->requested_n_rxq = netdev->n_rxq;
+ dev->requested_n_txq = netdev->n_txq;
if (type == DPDK_DEV_ETH) {
- netdev_dpdk_alloc_txq(dev, NR_QUEUE);
err = dpdk_eth_dev_init(dev);
if (err) {
goto unlock;
}
+ netdev_dpdk_alloc_txq(dev, netdev->n_txq);
+ dev->txq_needs_locking = netdev->n_txq <= max_tx_queue_id;
} else {
netdev_dpdk_alloc_txq(dev, OVS_VHOST_MAX_QUEUE_NUM);
+ dev->txq_needs_locking = true;
/* Enable DPDK_DEV_VHOST device and set promiscuous mode flag. */
dev->flags = NETDEV_UP | NETDEV_PROMISC;
}
@@ -788,9 +788,6 @@ netdev_dpdk_init(struct netdev *netdev, unsigned int port_no,
ovs_list_push_back(&dpdk_list, &dev->list_node);
unlock:
- if (err) {
- rte_free(dev->tx_q);
- }
ovs_mutex_unlock(&dev->mutex);
return err;
}
@@ -975,8 +972,8 @@ netdev_dpdk_get_config(const struct netdev *netdev, struct smap *args)
smap_add_format(args, "requested_rx_queues", "%d", dev->requested_n_rxq);
smap_add_format(args, "configured_rx_queues", "%d", netdev->n_rxq);
- smap_add_format(args, "requested_tx_queues", "%d", netdev->n_txq);
- smap_add_format(args, "configured_tx_queues", "%d", dev->real_n_txq);
+ smap_add_format(args, "requested_tx_queues", "%d", dev->requested_n_txq);
+ smap_add_format(args, "configured_tx_queues", "%d", netdev->n_txq);
ovs_mutex_unlock(&dev->mutex);
return 0;
@@ -1007,26 +1004,6 @@ netdev_dpdk_get_numa_id(const struct netdev *netdev)
return dev->socket_id;
}
-/* Sets the number of tx queues for the dpdk interface. */
-static int
-netdev_dpdk_set_tx_multiq(struct netdev *netdev, unsigned int n_txq)
-{
- struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
-
- ovs_mutex_lock(&dev->mutex);
-
- if (dev->requested_n_txq == n_txq) {
- goto out;
- }
-
- dev->requested_n_txq = n_txq;
- netdev_request_reconfigure(netdev);
-
-out:
- ovs_mutex_unlock(&dev->mutex);
- return 0;
-}
-
static struct netdev_rxq *
netdev_dpdk_rxq_alloc(void)
{
@@ -1232,10 +1209,6 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,
return EAGAIN;
}
- if (rxq->queue_id >= dev->real_n_rxq) {
- return EOPNOTSUPP;
- }
-
nb_rx = rte_vhost_dequeue_burst(virtio_dev, qid * VIRTIO_QNUM + VIRTIO_TXQ,
dev->dpdk_mp->mp,
(struct rte_mbuf **)packets,
@@ -1339,7 +1312,7 @@ __netdev_dpdk_vhost_send(struct netdev *netdev, int qid,
unsigned int qos_pkts = cnt;
int retries = 0;
- qid = dev->tx_q[qid % dev->real_n_txq].map;
+ qid = dev->tx_q[qid % netdev->n_txq].map;
if (OVS_UNLIKELY(!is_vhost_running(virtio_dev) || qid < 0
|| !(dev->flags & NETDEV_UP))) {
@@ -1502,7 +1475,7 @@ netdev_dpdk_send__(struct netdev_dpdk *dev, int qid,
int i;
if (OVS_UNLIKELY(dev->txq_needs_locking)) {
- qid = qid % dev->real_n_txq;
+ qid = qid % dev->up.n_txq;
rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
}
@@ -2197,14 +2170,14 @@ set_irq_status(struct virtio_net *virtio_dev)
/*
* Fixes mapping for vhost-user tx queues. Must be called after each
- * enabling/disabling of queues and real_n_txq modifications.
+ * enabling/disabling of queues and n_txq modifications.
*/
static void
netdev_dpdk_remap_txqs(struct netdev_dpdk *dev)
OVS_REQUIRES(dev->mutex)
{
int *enabled_queues, n_enabled = 0;
- int i, k, total_txqs = dev->real_n_txq;
+ int i, k, total_txqs = dev->up.n_txq;
enabled_queues = dpdk_rte_mzalloc(total_txqs * sizeof *enabled_queues);
@@ -2236,33 +2209,6 @@ netdev_dpdk_remap_txqs(struct netdev_dpdk *dev)
rte_free(enabled_queues);
}
-static int
-netdev_dpdk_vhost_set_queues(struct netdev_dpdk *dev, struct virtio_net *virtio_dev)
- OVS_REQUIRES(dev->mutex)
-{
- uint32_t qp_num;
-
- qp_num = virtio_dev->virt_qp_nb;
- if (qp_num > dev->up.n_rxq) {
- VLOG_ERR("vHost Device '%s' %"PRIu64" can't be added - "
- "too many queues %d > %d", virtio_dev->ifname,
virtio_dev->device_fh,
- qp_num, dev->up.n_rxq);
- return -1;
- }
-
- dev->real_n_rxq = qp_num;
- dev->real_n_txq = qp_num;
- dev->txq_needs_locking = true;
- /* Enable TX queue 0 by default if it wasn't disabled. */
- if (dev->tx_q[0].map == OVS_VHOST_QUEUE_MAP_UNKNOWN) {
- dev->tx_q[0].map = 0;
- }
-
- netdev_dpdk_remap_txqs(dev);
-
- return 0;
-}
-
/*
* A new virtio-net device is added to a vhost port.
*/
@@ -2278,15 +2224,9 @@ new_device(struct virtio_net *virtio_dev)
/* Add device to the vhost port with the same name as that passed down. */
LIST_FOR_EACH(dev, list_node, &dpdk_list) {
if (strncmp(virtio_dev->ifname, dev->vhost_id, IF_NAME_SZ) == 0) {
- ovs_mutex_lock(&dev->mutex);
- if (netdev_dpdk_vhost_set_queues(dev, virtio_dev)) {
- ovs_mutex_unlock(&dev->mutex);
- ovs_mutex_unlock(&dpdk_mutex);
- return -1;
- }
- ovsrcu_set(&dev->virtio_dev, virtio_dev);
- exists = true;
+ uint32_t qp_num = virtio_dev->virt_qp_nb;
+ ovs_mutex_lock(&dev->mutex);
/* Get NUMA information */
err = get_mempolicy(&newnode, NULL, 0, virtio_dev,
MPOL_F_NODE | MPOL_F_ADDR);
@@ -2294,12 +2234,16 @@ new_device(struct virtio_net *virtio_dev)
VLOG_INFO("Error getting NUMA info for vHost Device '%s'",
virtio_dev->ifname);
newnode = dev->socket_id;
- } else if (newnode != dev->socket_id) {
- dev->requested_socket_id = newnode;
- netdev_request_reconfigure(&dev->up);
}
- virtio_dev->flags |= VIRTIO_DEV_RUNNING;
+ dev->requested_socket_id = newnode;
+ dev->requested_n_rxq = qp_num;
+ dev->requested_n_txq = qp_num;
+ netdev_request_reconfigure(&dev->up);
+
+ ovsrcu_set(&dev->virtio_dev, virtio_dev);
+ exists = true;
+
/* Disable notifications. */
set_irq_status(virtio_dev);
netdev_change_seq_changed(&dev->up);
@@ -2328,7 +2272,7 @@ netdev_dpdk_txq_map_clear(struct netdev_dpdk *dev)
{
int i;
- for (i = 0; i < dev->real_n_txq; i++) {
+ for (i = 0; i < dev->up.n_txq; i++) {
dev->tx_q[i].map = OVS_VHOST_QUEUE_MAP_UNKNOWN;
}
}
@@ -2352,10 +2296,15 @@ destroy_device(volatile struct virtio_net *virtio_dev)
ovs_mutex_lock(&dev->mutex);
virtio_dev->flags &= ~VIRTIO_DEV_RUNNING;
ovsrcu_set(&dev->virtio_dev, NULL);
+ /* Clear tx/rx queue settings. */
netdev_dpdk_txq_map_clear(dev);
- exists = true;
+ dev->requested_n_rxq = NR_QUEUE;
+ dev->requested_n_txq = NR_QUEUE;
+ netdev_request_reconfigure(&dev->up);
+
netdev_change_seq_changed(&dev->up);
ovs_mutex_unlock(&dev->mutex);
+ exists = true;
break;
}
}
@@ -2863,9 +2812,9 @@ netdev_dpdk_reconfigure(struct netdev *netdev)
rte_free(dev->tx_q);
err = dpdk_eth_dev_init(dev);
- netdev_dpdk_alloc_txq(dev, dev->real_n_txq);
+ netdev_dpdk_alloc_txq(dev, netdev->n_txq);
- dev->txq_needs_locking = dev->real_n_txq != netdev->n_txq;
+ dev->txq_needs_locking = netdev->n_txq <= max_tx_queue_id;
out:
@@ -2879,6 +2828,7 @@ static int
netdev_dpdk_vhost_user_reconfigure(struct netdev *netdev)
{
struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
+ struct virtio_net *virtio_dev = netdev_dpdk_get_virtio(dev);
int err = 0;
ovs_mutex_lock(&dpdk_mutex);
@@ -2887,6 +2837,13 @@ netdev_dpdk_vhost_user_reconfigure(struct netdev *netdev)
netdev->n_txq = dev->requested_n_txq;
netdev->n_rxq = dev->requested_n_rxq;
+ /* Enable TX queue 0 by default if it wasn't disabled. */
+ if (dev->tx_q[0].map == OVS_VHOST_QUEUE_MAP_UNKNOWN) {
+ dev->tx_q[0].map = 0;
+ }
+
+ netdev_dpdk_remap_txqs(dev);
+
if (dev->requested_socket_id != dev->socket_id) {
dev->socket_id = dev->requested_socket_id;
/* Change mempool to new NUMA Node */
@@ -2897,6 +2854,10 @@ netdev_dpdk_vhost_user_reconfigure(struct netdev *netdev)
}
}
+ if (virtio_dev) {
+ virtio_dev->flags |= VIRTIO_DEV_RUNNING;
+ }
+
ovs_mutex_unlock(&dev->mutex);
ovs_mutex_unlock(&dpdk_mutex);
@@ -2912,9 +2873,7 @@ netdev_dpdk_vhost_cuse_reconfigure(struct netdev *netdev)
ovs_mutex_lock(&dev->mutex);
netdev->n_txq = dev->requested_n_txq;
- dev->real_n_txq = 1;
netdev->n_rxq = 1;
- dev->txq_needs_locking = dev->real_n_txq != netdev->n_txq;
ovs_mutex_unlock(&dev->mutex);
ovs_mutex_unlock(&dpdk_mutex);
@@ -2922,9 +2881,10 @@ netdev_dpdk_vhost_cuse_reconfigure(struct netdev *netdev)
return 0;
}
-#define NETDEV_DPDK_CLASS(NAME, INIT, CONSTRUCT, DESTRUCT, SEND, \
- GET_CARRIER, GET_STATS, GET_FEATURES, \
- GET_STATUS, RECONFIGURE, RXQ_RECV) \
+#define NETDEV_DPDK_CLASS(NAME, INIT, CONSTRUCT, DESTRUCT, \
+ SET_CONFIG, SEND, GET_CARRIER, \
+ GET_STATS, GET_FEATURES, GET_STATUS,\
+ RECONFIGURE, RXQ_RECV) \
{ \
NAME, \
true, /* is_pmd */ \
@@ -2937,13 +2897,12 @@ netdev_dpdk_vhost_cuse_reconfigure(struct netdev *netdev)
DESTRUCT, \
netdev_dpdk_dealloc, \
netdev_dpdk_get_config, \
- netdev_dpdk_set_config, \
+ SET_CONFIG, \
NULL, /* get_tunnel_config */ \
NULL, /* build header */ \
NULL, /* push header */ \
NULL, /* pop header */ \
netdev_dpdk_get_numa_id, /* get_numa_id */ \
- netdev_dpdk_set_tx_multiq, \
\
SEND, /* send */ \
NULL, /* send_wait */ \
@@ -3239,6 +3198,13 @@ dpdk_init__(const struct smap *ovs_other_config)
VLOG_INFO("DPDK Enabled, initializing");
+ max_tx_queue_id = ovs_numa_get_n_cores();
+
+ if (max_tx_queue_id == OVS_CORE_UNSPEC) {
+ VLOG_ERR("Cannot get cpu core info.");
+ return;
+ }
+
#ifdef VHOST_CUSE
if (process_vhost_flags("cuse-dev-name", xstrdup("vhost-net"),
PATH_MAX, ovs_other_config, &cuse_dev_name)) {
@@ -3392,6 +3358,7 @@ static const struct netdev_class dpdk_class =
NULL,
netdev_dpdk_construct,
netdev_dpdk_destruct,
+ netdev_dpdk_set_config,
netdev_dpdk_eth_send,
netdev_dpdk_get_carrier,
netdev_dpdk_get_stats,
@@ -3406,6 +3373,7 @@ static const struct netdev_class dpdk_ring_class =
NULL,
netdev_dpdk_ring_construct,
netdev_dpdk_destruct,
+ netdev_dpdk_set_config,
netdev_dpdk_ring_send,
netdev_dpdk_get_carrier,
netdev_dpdk_get_stats,
@@ -3420,6 +3388,7 @@ static const struct netdev_class OVS_UNUSED dpdk_vhost_cuse_class =
dpdk_vhost_cuse_class_init,
netdev_dpdk_vhost_cuse_construct,
netdev_dpdk_vhost_destruct,
+ NULL,
netdev_dpdk_vhost_send,
netdev_dpdk_vhost_get_carrier,
netdev_dpdk_vhost_get_stats,
@@ -3434,6 +3403,7 @@ static const struct netdev_class OVS_UNUSED dpdk_vhost_user_class =
dpdk_vhost_user_class_init,
netdev_dpdk_vhost_user_construct,
netdev_dpdk_vhost_destruct,
+ NULL,
netdev_dpdk_vhost_send,
netdev_dpdk_vhost_get_carrier,
netdev_dpdk_vhost_get_stats,
diff --git a/lib/netdev-dummy.c b/lib/netdev-dummy.c
index 24c107e..c92f0e3 100644
--- a/lib/netdev-dummy.c
+++ b/lib/netdev-dummy.c
@@ -821,7 +821,7 @@ netdev_dummy_set_config(struct netdev *netdev_, const struct smap *args)
{
struct netdev_dummy *netdev = netdev_dummy_cast(netdev_);
const char *pcap;
- int new_n_rxq, new_numa_id;
+ int new_n_rxq, new_n_txq, new_numa_id;
ovs_mutex_lock(&netdev->mutex);
netdev->ifindex = smap_get_int(args, "ifindex", -EOPNOTSUPP);
@@ -858,10 +858,13 @@ netdev_dummy_set_config(struct netdev *netdev_, const struct smap *args)
}
new_n_rxq = MAX(smap_get_int(args, "n_rxq", netdev->requested_n_rxq), 1);
+ new_n_txq = MAX(smap_get_int(args, "n_txq", netdev->requested_n_txq), 1);
new_numa_id = smap_get_int(args, "numa_id", 0);
if (new_n_rxq != netdev->requested_n_rxq
+ || new_n_txq != netdev->requested_n_txq
|| new_numa_id != netdev->requested_numa_id) {
netdev->requested_n_rxq = new_n_rxq;
+ netdev->requested_n_txq = new_n_txq;
netdev->requested_numa_id = new_numa_id;
netdev_request_reconfigure(netdev_);
}
@@ -883,26 +886,6 @@ netdev_dummy_get_numa_id(const struct netdev *netdev_)
return numa_id;
}
-/* Requests the number of tx queues for the dummy PMD interface. */
-static int
-netdev_dummy_set_tx_multiq(struct netdev *netdev_, unsigned int n_txq)
-{
- struct netdev_dummy *netdev = netdev_dummy_cast(netdev_);
-
- ovs_mutex_lock(&netdev->mutex);
-
- if (netdev_->n_txq == n_txq) {
- goto out;
- }
-
- netdev->requested_n_txq = n_txq;
- netdev_request_reconfigure(netdev_);
-
-out:
- ovs_mutex_unlock(&netdev->mutex);
- return 0;
-}
-
/* Sets the number of tx queues and rx queues for the dummy PMD interface. */
static int
netdev_dummy_reconfigure(struct netdev *netdev_)
@@ -1325,7 +1308,7 @@ netdev_dummy_update_flags(struct netdev *netdev_,
/* Helper functions. */
-#define NETDEV_DUMMY_CLASS(NAME, PMD, TX_MULTIQ, RECOFIGURE) \
+#define NETDEV_DUMMY_CLASS(NAME, PMD, RECOFIGURE) \
{ \
NAME, \
PMD, /* is_pmd */ \
@@ -1344,7 +1327,6 @@ netdev_dummy_update_flags(struct netdev *netdev_,
NULL, /* push header */ \
NULL, /* pop header */ \
netdev_dummy_get_numa_id, \
- TX_MULTIQ, \
\
netdev_dummy_send, /* send */ \
NULL, /* send_wait */ \
@@ -1396,11 +1378,10 @@ netdev_dummy_update_flags(struct netdev *netdev_,
}
static const struct netdev_class dummy_class =
- NETDEV_DUMMY_CLASS("dummy", false, NULL, NULL);
+ NETDEV_DUMMY_CLASS("dummy", false, NULL);
static const struct netdev_class dummy_pmd_class =
NETDEV_DUMMY_CLASS("dummy-pmd", true,
- netdev_dummy_set_tx_multiq,
netdev_dummy_reconfigure);
static void
diff --git a/lib/netdev-linux.c b/lib/netdev-linux.c
index 486910a..81edfbf 100644
--- a/lib/netdev-linux.c
+++ b/lib/netdev-linux.c
@@ -2779,7 +2779,6 @@ netdev_linux_update_flags(struct netdev *netdev_, enum netdev_flags off,
NULL, /* push header */ \
NULL, /* pop header */ \
NULL, /* get_numa_id */ \
- NULL, /* set_tx_multiq */ \
\
netdev_linux_send, \
netdev_linux_send_wait, \
diff --git a/lib/netdev-provider.h b/lib/netdev-provider.h
index 5da377f..3b2759f 100644
--- a/lib/netdev-provider.h
+++ b/lib/netdev-provider.h
@@ -299,22 +299,6 @@ struct netdev_class {
* such info, returns NETDEV_NUMA_UNSPEC. */
int (*get_numa_id)(const struct netdev *netdev);
- /* Configures the number of tx queues of 'netdev'. Returns 0 if successful,
- * otherwise a positive errno value.
- *
- * 'n_txq' specifies the exact number of transmission queues to create.
- * The caller will call netdev_send() concurrently from 'n_txq' different
- * threads (with different qid). The netdev provider is responsible for
- * making sure that these concurrent calls do not create a race condition
- * by using multiple hw queues or locking.
- *
- * The caller will call netdev_reconfigure() (if necessary) before using
- * netdev_send() on any of the newly configured queues, giving the provider
- * a chance to adjust its settings.
- *
- * On error, the tx queue configuration is unchanged. */
- int (*set_tx_multiq)(struct netdev *netdev, unsigned int n_txq);
-
/* Sends buffers on 'netdev'.
* Returns 0 if successful (for every buffer), otherwise a positive errno
* value. Returns EAGAIN without blocking if one or more packets cannot be
diff --git a/lib/netdev-vport.c b/lib/netdev-vport.c
index 83a795c..22e161b 100644
--- a/lib/netdev-vport.c
+++ b/lib/netdev-vport.c
@@ -833,7 +833,6 @@ get_stats(const struct netdev *netdev, struct netdev_stats *stats)
PUSH_HEADER, \
POP_HEADER, \
NULL, /* get_numa_id */ \
- NULL, /* set_tx_multiq */ \
\
NULL, /* send */ \
NULL, /* send_wait */ \
diff --git a/lib/netdev.c b/lib/netdev.c
index 6651173..f15e13a 100644
--- a/lib/netdev.c
+++ b/lib/netdev.c
@@ -651,36 +651,6 @@ netdev_rxq_drain(struct netdev_rxq *rx)
: 0);
}
-/* Configures the number of tx queues of 'netdev'. Returns 0 if successful,
- * otherwise a positive errno value.
- *
- * 'n_txq' specifies the exact number of transmission queues to create.
- * If this function returns successfully, the caller can make 'n_txq'
- * concurrent calls to netdev_send() (each one with a different 'qid' in the
- * range [0..'n_txq'-1]).
- *
- * The change might not effective immediately. The caller must check if a
- * reconfiguration is required with netdev_is_reconf_required() and eventually
- * call netdev_reconfigure() before using the new queues.
- *
- * On error, the tx queue configuration is unchanged */
-int
-netdev_set_tx_multiq(struct netdev *netdev, unsigned int n_txq)
-{
- int error;
-
- error = (netdev->netdev_class->set_tx_multiq
- ? netdev->netdev_class->set_tx_multiq(netdev, MAX(n_txq, 1))
- : EOPNOTSUPP);
-
- if (error && error != EOPNOTSUPP) {
- VLOG_DBG_RL(&rl, "failed to set tx queue for network device %s:"
- "%s", netdev_get_name(netdev), ovs_strerror(error));
- }
-
- return error;
-}
-
/* Sends 'batch' on 'netdev'. Returns 0 if successful (for every packet),
* otherwise a positive errno value. Returns EAGAIN without blocking if
* at least one the packets cannot be queued immediately. Returns EMSGSIZE
diff --git a/lib/netdev.h b/lib/netdev.h
index 591d861..43c497c 100644
--- a/lib/netdev.h
+++ b/lib/netdev.h
@@ -134,7 +134,6 @@ const char *netdev_get_type_from_name(const char *);
int netdev_get_mtu(const struct netdev *, int *mtup);
int netdev_set_mtu(const struct netdev *, int mtu);
int netdev_get_ifindex(const struct netdev *);
-int netdev_set_tx_multiq(struct netdev *, unsigned int n_txq);
/* Packet reception. */
int netdev_rxq_open(struct netdev *, struct netdev_rxq **, int id);
diff --git a/vswitchd/vswitch.xml b/vswitchd/vswitch.xml
index 072fef4..a32f4ef 100644
--- a/vswitchd/vswitch.xml
+++ b/vswitchd/vswitch.xml
@@ -2346,12 +2346,13 @@
Only PMD netdevs support these options.
</p>
- <column name="options" key="n_rxqs"
+ <column name="options" key="n_rxq"
type='{"type": "integer", "minInteger": 1}'>
<p>
Specifies the maximum number of rx queues to be created for PMD
netdev. If not specified or specified to 0, one rx queue will
be created by default.
+ Not supported by vHost interfaces.
</p>
</column>
</group>
--
2.7.4