On 6/1/23 21:44, Alex Yeh (ayeh) wrote:
> Hi Ilya,
>       Thanks for the pointer. Just want to close off the thread, as we have 
> found the cause. After upgrading libvirt, the XML tags below needed to be 
> moved to the correct section of the domain XML. Ping now works over the DPDK 
> bridge.
> 
>   <memoryBacking>
>     <hugepages/>
>     <access mode='shared'/>
>   </memoryBacking>
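> 
> For reference, roughly where the block sits in the domain XML now (VM name 
> and memory size are just placeholders; the vhost socket path is the one from 
> our OVS config below, with the interface in server mode to pair with the 
> dpdkvhostuserclient ports):
> 
>   <domain type='kvm'>
>     <name>vm1</name>                      <!-- placeholder -->
>     <memory unit='KiB'>4194304</memory>   <!-- placeholder -->
>     <memoryBacking>
>       <hugepages/>
>       <access mode='shared'/>
>     </memoryBacking>
>     <!-- ... cpu, os, and other elements omitted ... -->
>     <devices>
>       <interface type='vhostuser'>
>         <source type='unix' path='/run/vhostfd/vnic1' mode='server'/>
>         <model type='virtio'/>
>       </interface>
>       <!-- ... other devices omitted ... -->
>     </devices>
>   </domain>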
> 

OK.  So it was indeed a problem with the memory backend not being shared.

Thanks for the follow up!

Best regards, Ilya Maximets.

> Alex
> 
> On 5/26/23, 5:04 AM, "Ilya Maximets" <[email protected]> wrote:
> 
> 
> On 5/26/23 01:17, Alex Yeh (ayeh) wrote:
>> Hi Ilya,
>> Thanks for your reply. We did some further investigation, and the findings 
>> suggest it is related to the QEMU/libvirt version: the ping starts to work on 
>> the DPDK bridge after we roll back QEMU/libvirt. Are you aware of any new 
>> config needed to use the newer QEMU version?
> 
> 
> No, should generally be backward compatible.
> 
> 
>>
>> Alex
>>
>> Non-working: The ping from host to VM caused the ovs_tx_failure_drops 
>> counter to go up.
> 
> 
> This might be a sign of the memory backend not being shared, or of the
> driver inside the VM not dequeuing packets for some other reason.
> 
> 
> I'd suggest comparing the domain XML files between the good and bad setups.
> If they are the same, compare the command lines of the running QEMU
> processes.
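> 
> For example, something along these lines (process and flag names vary
> between setups, so treat it as a sketch):
> 
>     # Pull the guest memory objects out of the running QEMU command line
>     ps -ef | grep qemu | tr ',' '\n' | grep -E 'memory-backend|share'
>     # A working vhost-user setup typically has share=on, e.g.:
>     #   -object memory-backend-file,id=mem0,mem-path=/dev/hugepages,share=on,...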
> 
> 
> And check the QEMU logs. And libvirt logs if there are any.
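> 
> Typical places to look on a libvirt host (paths may differ on your distro;
> <vm-name> is the domain name):
> 
>     grep -iE 'vhost|memory' /var/log/libvirt/qemu/<vm-name>.log
>     journalctl -u libvirtd --since "1 hour ago"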
> 
> 
> Best regards, Ilya Maximets.
> 
> 
>>
>> [root@nfvis ~]# virsh version
>> Compiled against library: libvirt 8.0.0
>> Using library: libvirt 8.0.0
>> Using API: QEMU 8.0.0
>> Running hypervisor: QEMU 6.2.0
>>
>> [root@nfvis ~]# ovs-vswitchd --version
>> ovs-vswitchd (Open vSwitch) 2.17.0
>> DPDK 21.11.0
>> [root@nfvis ~]# ovs-vsctl get Interface vnic1 statistics
>> {ovs_rx_qos_drops=0, ovs_tx_failure_drops=15, ovs_tx_invalid_hwol_drops=0, 
>> ovs_tx_mtu_exceeded_drops=0, ovs_tx_qos_drops=0, ovs_tx_retries=0, 
>> rx_1024_to_1522_packets=0, rx_128_to_255_packets=0, 
>> rx_1523_to_max_packets=0, rx_1_to_64_packets=0, rx_256_to_511_packets=0, 
>> rx_512_to_1023_packets=0, rx_65_to_127_packets=0, rx_bytes=0, rx_dropped=0, 
>> rx_errors=0, rx_packets=0, tx_bytes=0, tx_dropped=15, tx_packets=0}
>> [root@nfvis ~]#
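>> 
>> For reference, the failure counter can also be read on its own while the
>> ping is running, e.g.:
>> 
>>     ovs-vsctl get Interface vnic1 statistics:ovs_tx_failure_drops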
>>
>>
>> Working:
>>
>> [root@nfvis ~]# virsh version
>> Compiled against library: libvirt 6.0.0   <-- rolled back to 6.0.0
>> Using library: libvirt 6.0.0
>> Using API: QEMU 6.0.0
>> Running hypervisor: QEMU 4.2.0
>>
>> [root@nfvis ~]# ovs-vswitchd --version
>> ovs-vswitchd (Open vSwitch) 2.17.0
>> DPDK 21.11.0
>> [root@nfvis ~]# ovs-vsctl get Interface vnic1 statistics
>> {ovs_rx_qos_drops=0, ovs_tx_failure_drops=0, ovs_tx_invalid_hwol_drops=0, 
>> ovs_tx_mtu_exceeded_drops=0, ovs_tx_qos_drops=0, ovs_tx_retries=0, 
>> rx_1024_to_1522_packets=0, rx_128_to_255_packets=0, 
>> rx_1523_to_max_packets=0, rx_1_to_64_packets=1, rx_256_to_511_packets=0, 
>> rx_512_to_1023_packets=0, rx_65_to_127_packets=73, rx_bytes=6318, 
>> rx_dropped=0, rx_errors=0, rx_packets=74, tx_bytes=6318, tx_dropped=0, 
>> tx_packets=74}
>> [root@nfvis ~]#
>>
>>
>> On 5/24/23, 12:37 PM, "Ilya Maximets" <[email protected]> wrote:
>>
>>
>> On 5/24/23 20:04, Alex Yeh (ayeh) via discuss wrote:
>>> Hi All,
>>>
>>> A little more info on the QEMU versions of the working and non-working setups.
>>>
>>> Thanks
>>> Alex
>>>
>>> Working:
>>> [root@nfvis ~]# /usr/libexec/qemu-kvm --version
>>> QEMU emulator version 4.2.0 (qemu-kvm-4.2.0-48.el8)
>>> Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
>>>
>>> [root@nfvis ~]#
>>> Non-working:
>>>
>>> [root@nfvis ~]# /usr/libexec/qemu-kvm --version
>>> QEMU emulator version 6.2.0 (qemu-kvm-6.2.0-11.el8)
>>> Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers
>>>
>>> Thanks
>>> Alex
>>>
>>> *From: *"Alex Yeh (ayeh)" <[email protected]>
>>> *Date: *Tuesday, May 23, 2023 at 2:58 PM
>>> *To: *"[email protected]" <[email protected]>
>>> *Subject: *Ping over dpdk bridge failed after upgrade to OVS 2.17.3
>>>
>>> Hi All,
>>>
>>> We were running OVS 2.13.0/DPDK 19.11.1 and the VMs were able to ping over 
>>> the DPDK bridge. After upgrading to OVS 2.17.3, the VMs can’t ping over the 
>>> DPDK bridge with the same OVS config. Has anyone seen the same issue and 
>>> found a way to fix it?
>>
>>
>> Nothing in particular comes to mind. You need to check the logs for warnings
>> or errors, and check the installed datapath flows to see if there is something
>> abnormal there. Check the QEMU logs for vhost-user related messages.
>> If everything looks correct but there is no traffic, check whether you have a
>> shared memory backend in QEMU (a common issue when everything seems normal).
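>> 
>> A rough checklist of commands for that (interface names from your setup;
>> the log path may differ):
>> 
>>     ovs-appctl dpctl/dump-flows                      # installed datapath flows
>>     ovs-vsctl get Interface vnic1 statistics         # per-port drop counters
>>     grep -iE 'error|warn|vhost' /var/log/openvswitch/ovs-vswitchd.log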
>>
>>
>> Best regards, Ilya Maximets.
>>
>>
>>>
>>> Thanks
>>> Alex
>>>
>>> OVS bridge setup, VMs are connected to vnic1 and vnic3:
>>>
>>> Bridge dpdk-br
>>>     datapath_type: netdev
>>>     Port dpdk-br
>>>         Interface dpdk-br
>>>             type: internal
>>>     Port vnic1
>>>         Interface vnic1
>>>             type: dpdkvhostuserclient
>>>             options: {vhost-server-path="/run/vhostfd/vnic1"}
>>>     Port vnic3
>>>         Interface vnic3
>>>             type: dpdkvhostuserclient
>>>             options: {vhost-server-path="/run/vhostfd/vnic3"}
>>> ovs_version: "2.17.7"
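>>> 
>>> Roughly how the bridge and ports are created (paths as above):
>>> 
>>>     ovs-vsctl add-br dpdk-br -- set bridge dpdk-br datapath_type=netdev
>>>     ovs-vsctl add-port dpdk-br vnic1 -- set Interface vnic1 \
>>>         type=dpdkvhostuserclient options:vhost-server-path=/run/vhostfd/vnic1
>>>     ovs-vsctl add-port dpdk-br vnic3 -- set Interface vnic3 \
>>>         type=dpdkvhostuserclient options:vhost-server-path=/run/vhostfd/vnic3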
>>>
>>> Working:
>>> [root@nfvis-csp-45 ~]# ovs-vswitchd --version
>>> ovs-vswitchd (Open vSwitch) 2.13.0
>>> DPDK 19.11.1
>>>
>>> Not working:
>>> [root@nfvis nfvos-confd]# ovs-vswitchd --version
>>> ovs-vswitchd (Open vSwitch) 2.17.3
>>> DPDK 21.11.0
>>
