On 27.06.2019 17:17, Ian Stokes wrote:
> On 6/27/2019 10:43 AM, David Marchand wrote:
>> Rather than poll all disabled queues and waste some memory for vms that
>> have been shutdown, we can reconfigure when receiving a destroy
>> connection notification from the vhost library.
>>
>> $ while true; do
>>    ovs-appctl dpif-netdev/pmd-rxq-show |awk '
>>    /port: / {
>>      tot++;
>>      if ($5 == "(enabled)") {
>>        en++;
>>      }
>>    }
>>    END {
>>      print "total: " tot ", enabled: " en
>>    }'
>>    sleep 1
>> done
>>
>> total: 66, enabled: 66
>> total: 6, enabled: 2
>>
>> This change requires a fix in the DPDK vhost library, so bump the minimal
>> required version to 18.11.2.
> 
> So in testing this I was seeing an error with the following
> 
> 2019-06-27T10:00:08Z|00057|dpdk|INFO|VHOST_CONFIG: vhost peer closed
> 2019-06-27T10:00:08Z|00058|dpdk|ERR|VHOST_CONFIG: (0) device not found.
> 2019-06-27T10:00:08Z|00059|netdev_dpdk|INFO|vHost Device '' not found
> 
> It wasn't until I realized I was testing with DPDK 18.11.2.rc1 that it dawned 
> on me the required fix wasn't there...doh!
> 
> So in short I can confirm the requirement for 18.11.2 :).
> 
> Is there any requirement on the QEMU version side for this patch also? Or has 
> the functionality been in place on that side for quite some time but was just 
> mishandled in DPDK?

'destroy_connection' is just a callback that is invoked when the socket is
closed by DPDK or QEMU, so there is no dependency on the QEMU version.
The reason for the above issue with 18.11.1 is that DPDK destroyed the
device before calling the 'destroy_connection' callback, so OVS wasn't able
to find the corresponding port because 'rte_vhost_get_ifname' always
returned an empty string.
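
For clarity, a minimal sketch of that failing path. This is not the OVS
code from the patch; the function name, 'SKETCH_IF_NAME_SZ' and the
explicit empty-string check are illustrative only:

    #include <rte_vhost.h>

    #define SKETCH_IF_NAME_SZ 128   /* stand-in for OVS's IF_NAME_SZ */

    static void
    destroy_connection_sketch(int vid)
    {
        char ifname[SKETCH_IF_NAME_SZ];

        ifname[0] = '\0';
        rte_vhost_get_ifname(vid, ifname, sizeof ifname);

        if (ifname[0] == '\0') {
            /* On DPDK 18.11.1 the vhost device is already gone when the
             * callback runs, so 'vid' cannot be resolved and the socket
             * path stays empty.  Every comparison against dev->vhost_id
             * then fails, hence "vHost Device '' not found" in the log. */
            return;
        }

        /* On 18.11.2 the path is still valid here, so the port can be
         * looked up and a reconfiguration requested, as in the patch. */
    }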

> 
> One other minor query below but overall looks ok and has validated without 
> issue.
> 
>>
>> Co-authored-by: Ilya Maximets <[email protected]>
>> Signed-off-by: Ilya Maximets <[email protected]>
>> Signed-off-by: David Marchand <[email protected]>
>> ---
>> Changes since v3:
>> - rebased on master following 18.11.2 support
>> - added a note in NEWS about minimal dpdk version
>>
>> Changes since v2:
>> - added authorship tags for Ilya
>> - added logs in destroy_connection
>>
>> ---
>>   NEWS              |  4 +++-
>>   lib/netdev-dpdk.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++-
>>   2 files changed, 49 insertions(+), 2 deletions(-)
>>
>> diff --git a/NEWS b/NEWS
>> index c03e287..2f8171f 100644
>> --- a/NEWS
>> +++ b/NEWS
>> @@ -4,7 +4,9 @@ Post-v2.11.0
>>        * New option 'other_config:dpdk-socket-limit' to limit amount of
>>          hugepage memory that can be used by DPDK.
>>        * Add support for vHost Post-copy Live Migration (experimental).
>> -     * OVS validated with DPDK 18.11.2 which is recommended to be used.
>> +     * OVS validated with DPDK 18.11.2 which is the new minimal supported
>> +       version.
>> +     * DPDK 18.11.1 and lower is no longer supported.
>>      - OpenFlow:
>>        * All features required by OpenFlow 1.5 are now implemented, so
>>          ovs-vswitchd now enables OpenFlow 1.5 by default (in addition to
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index b7f4543..0b9bea4 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>> @@ -185,12 +185,15 @@ static const struct rte_eth_conf port_conf = {
>>   static int new_device(int vid);
>>   static void destroy_device(int vid);
>>   static int vring_state_changed(int vid, uint16_t queue_id, int enable);
>> +static void destroy_connection(int vid);
>>   static const struct vhost_device_ops virtio_net_device_ops =
>>   {
>>       .new_device =  new_device,
>>       .destroy_device = destroy_device,
>>       .vring_state_changed = vring_state_changed,
>> -    .features_changed = NULL
>> +    .features_changed = NULL,
>> +    .new_connection = NULL,
> 
> I take it .new_connection is a requirement? It's just that it's added as 
> NULL, but I assume it goes hand in hand with the introduction of
> .destroy_connection?

'new_connection' was introduced in DPDK along with 'destroy_connection'.
It is set to NULL here just to have the full list of callbacks, since we
already have a NULL 'features_changed'. To be consistent here we either
need to add 'new_connection' or remove 'features_changed'.
Either way, the compiler will zero-initialize all fields that are not
explicitly set.
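
As a small standalone illustration (the struct and function names below
are made up, not the OVS ones): members omitted from a C designated
initializer are zero-initialized, so function pointers become NULL.

    #include <stdio.h>

    struct callbacks {
        void (*new_device)(int);
        void (*destroy_device)(int);
        void (*new_connection)(int);
        void (*destroy_connection)(int);
    };

    static void
    on_destroy_connection(int vid)
    {
        printf("connection for vid %d destroyed\n", vid);
    }

    /* Only one member is listed; the others are implicitly NULL, exactly
     * as if '.new_connection = NULL' had been written out. */
    static const struct callbacks ops = {
        .destroy_connection = on_destroy_connection,
    };

    int
    main(void)
    {
        printf("new_connection is %s\n", ops.new_connection ? "set" : "NULL");
        ops.destroy_connection(42);
        return 0;
    }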

> 
> Ian
>> +    .destroy_connection = destroy_connection,
>>   };
>>     enum { DPDK_RING_SIZE = 256 };
>> @@ -3663,6 +3666,48 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
>>       return 0;
>>   }
>>   +static void
>> +destroy_connection(int vid)
>> +{
>> +    struct netdev_dpdk *dev;
>> +    char ifname[IF_NAME_SZ];
>> +    bool exists = false;
>> +
>> +    rte_vhost_get_ifname(vid, ifname, sizeof ifname);
>> +
>> +    ovs_mutex_lock(&dpdk_mutex);
>> +    LIST_FOR_EACH (dev, list_node, &dpdk_list) {
>> +        ovs_mutex_lock(&dev->mutex);
>> +        if (nullable_string_is_equal(ifname, dev->vhost_id)) {
>> +            uint32_t qp_num = NR_QUEUE;
>> +
>> +            if (netdev_dpdk_get_vid(dev) >= 0) {
>> +                VLOG_ERR("Connection on socket '%s' destroyed while vhost "
>> +                         "device still attached.", dev->vhost_id);
>> +            }
>> +
>> +            /* Restore the number of queue pairs to default. */
>> +            if (dev->requested_n_txq != qp_num
>> +                || dev->requested_n_rxq != qp_num) {
>> +                dev->requested_n_rxq = qp_num;
>> +                dev->requested_n_txq = qp_num;
>> +                netdev_request_reconfigure(&dev->up);
>> +            }
>> +            ovs_mutex_unlock(&dev->mutex);
>> +            exists = true;
>> +            break;
>> +        }
>> +        ovs_mutex_unlock(&dev->mutex);
>> +    }
>> +    ovs_mutex_unlock(&dpdk_mutex);
>> +
>> +    if (exists) {
>> +        VLOG_INFO("vHost Device '%s' connection has been destroyed", ifname);
>> +    } else {
>> +        VLOG_INFO("vHost Device '%s' not found", ifname);
>> +    }
>> +}
>> +
>>   /*
>>    * Retrieve the DPDK virtio device ID (vid) associated with a vhostuser
>>    * or vhostuserclient netdev.
>>
> 
> 
> 