Hello,

On Wed, Mar 2, 2022 at 6:45 PM Harold Huang <baymaxhu...@gmail.com> wrote:

> On Thu, Mar 3, 2022 at 3:33 AM Amir Alimohammadifar
> <amir.af...@gmail.com> wrote:
> >
> > Hello,
> >
> > I am having trouble with hardware VxLAN offloading when
> tx-checksumming is turned on.
> > Here is my environment:
> >
> > GuestVM1 ---> VM1 (running in ESXi 6.7 Hypervisor1) <--VxLAN tunnel-->
> VM2 (running in ESXi 6.7 Hypervisor2) ---> GuestVM2
>
> Do you use OVS kernel datapath to create VxLAN tunnels and forward traffic?
>
I cannot run configure with the "--with-linux=/lib/modules/$(uname
-r)/build" option for kernels above 5.8, and I think VxLAN offload is
not supported in kernels below 5.8. For example, on Ubuntu Focal with
kernel 5.4, the VxLAN offload options are fixed off:

amir@focalvm2:~$ uname -r
5.4.0-100-generic

amir@focalvm2:~$ ethtool -k ens224 | grep tx-udp
tx-udp_tnl-segmentation: off [fixed]
tx-udp_tnl-csum-segmentation: off [fixed]
tx-udp-segmentation: off [fixed]
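For completeness, this is roughly how I flip the feature between the two
test runs shown below (ens224 is the interface on my VMs; the commands
need root):

```shell
# ens224 is the uplink interface on my VMs; adjust for your system.
IFACE=ens224

# Show the checksum- and VxLAN-offload-related feature state.
ethtool -k "$IFACE" | grep -E 'tx-checksumming|tx-udp'

# Toggle TX checksum offload (the feature whose state changes the
# throughput in the iperf3 results below); requires root.
sudo ethtool -K "$IFACE" tx off    # turn off
sudo ethtool -K "$IFACE" tx on     # turn back on
```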

>
>
>
> > The VM1 and VM2 are responsible for creating openflow flows using VxLAN
> tunnels to route the traffic between the GuestVMs.
> >
> > I have created all the required configurations to route the traffic and
> everything works fine. However, when I try to enable VxLAN offloading,
> using the parameters below:
> >
> > VM1 (transmitter):
> > ...
> >
> > tx-checksumming: on
> >
> >         tx-checksum-ipv4: off [fixed]
> >
> >         tx-checksum-ip-generic: on
> >
> >         tx-checksum-ipv6: off [fixed]
> >
> >         tx-checksum-fcoe-crc: off [fixed]
> >
> >         tx-checksum-sctp: off [fixed]
> >
> > ...
> >
> > tx-udp_tnl-segmentation: on
> >
> > tx-udp_tnl-csum-segmentation: on
> > ...
> >
> > The throughput from GuestVM1 ---> GuestVM2 is less than 5 Mbps.
> > After googling around, I turned off the tx-checksumming on the
> transmitter side and everything works well. (I can see 10 Gbps traffic going
> through but the CPU usage is terrible)
>
> Could you show the CPU stats in your host and perf results?



With tx-checksumming ON:

VM2 Settings:

amir@jammyvm2:~$ uname -r
5.15.0-18-generic

amir@jammyvm2:~$ ethtool -k ens224 | grep "tx-udp\|tx-check"
tx-checksumming: on
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: on
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-udp-segmentation: off [fixed]

amir@jammyvm2:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu Jammy Jellyfish (development branch)"


CPU in the VM1:

Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    0.03    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.97
Average:       0    0.05    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.95
Average:       1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

Average:    NODE    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    0.03    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.97

Iperf in the GuestVM2:

amir@guestvm2:~$ iperf3 -c 10.0.0.13 --bind 10.0.0.12 -V -t 20 -i 5
iperf 3.7
Linux guestvm2 5.4.0-100-generic #113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022 x86_64
Control connection MSS 1398
Time: Thu, 03 Mar 2022 18:57:24 GMT
Connecting to host 10.0.0.13, port 5201
      Cookie: 2dmnehmxcmeodeel5tdv3ojvvjeguokvzj22
      TCP MSS: 1398 (default)
[  5] local 10.0.0.12 port 45497 connected to 10.0.0.13 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 20 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-5.00   sec  1.45 MBytes  2.43 Mbits/sec  438   2.73 KBytes
[  5]   5.00-10.00  sec  2.07 MBytes  3.47 Mbits/sec  622   2.73 KBytes
[  5]  10.00-15.00  sec  2.15 MBytes  3.60 Mbits/sec  624   2.73 KBytes
[  5]  15.00-20.00  sec  2.07 MBytes  3.47 Mbits/sec  624   2.73 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec  7.73 MBytes  3.24 Mbits/sec  2308            sender
[  5]   0.00-20.00  sec  7.62 MBytes  3.20 Mbits/sec                  receiver
CPU Utilization: local/sender 0.0% (0.0%u/0.0%s), remote/receiver 0.2% (0.0%u/0.1%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic

iperf Done.

Now with tx-checksumming OFF:

amir@jammyvm2:~$ ethtool -k ens224 | grep "tx-udp\|tx-check"
tx-checksumming: off
        tx-checksum-ipv4: off [fixed]
        tx-checksum-ip-generic: off
        tx-checksum-ipv6: off [fixed]
        tx-checksum-fcoe-crc: off [fixed]
        tx-checksum-sctp: off [fixed]
tx-udp_tnl-segmentation: on
tx-udp_tnl-csum-segmentation: on
tx-udp-segmentation: off [fixed]


CPU in the VM1:

Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    0.07    0.00    0.07    0.00    0.00   46.76    0.00    0.00    0.00   53.09
Average:       0    0.00    0.00    0.05    0.00    0.00   93.27    0.00    0.00    0.00    6.68
Average:       1    0.15    0.00    0.10    0.00    0.00    0.25    0.00    0.00    0.00   99.50

Average:    NODE    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all    0.07    0.00    0.07    0.00    0.00   46.76    0.00    0.00    0.00   53.09

Iperf in the GuestVM2:

amir@guestvm2:~$ iperf3 -c 10.0.0.13 --bind 10.0.0.12 -V -t 20 -i 5
iperf 3.7
Linux guestvm2 5.4.0-100-generic #113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022 x86_64
Control connection MSS 1398
Time: Thu, 03 Mar 2022 19:03:10 GMT
Connecting to host 10.0.0.13, port 5201
      Cookie: pvlmogudjq4vcxq5xvwxwmfuaq3upvf5xtba
      TCP MSS: 1398 (default)
[  5] local 10.0.0.12 port 59369 connected to 10.0.0.13 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 20 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-5.00   sec  4.33 GBytes  7.44 Gbits/sec   89   2.12 MBytes
[  5]   5.00-10.00  sec  5.35 GBytes  9.19 Gbits/sec  152   1.67 MBytes
[  5]  10.00-15.00  sec  5.37 GBytes  9.23 Gbits/sec  228   2.00 MBytes
[  5]  15.00-20.00  sec  5.34 GBytes  9.18 Gbits/sec  227   1.55 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec  20.4 GBytes  8.76 Gbits/sec  696             sender
[  5]   0.00-20.00  sec  20.4 GBytes  8.76 Gbits/sec                  receiver
CPU Utilization: local/sender 15.3% (0.2%u/15.1%s), remote/receiver 2.3% (0.1%u/2.2%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic

iperf Done.
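Regarding the perf results: I can collect them during the next run.
A sketch of the commands I intend to use (standard perf usage; CPU 0 is
the softirq-heavy core in the mpstat output above, and the host needs
the linux-tools package and root):

```shell
# Record call graphs on CPU 0 for 10 seconds while iperf3 is running.
sudo perf record -C 0 -g -- sleep 10

# Summarize the hottest kernel code paths from the recording.
sudo perf report --stdio | head -40
```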


>
>
> > VM1 (transmitter):
> >
> > ...
> >
> > tx-checksumming: off
> >
> >         tx-checksum-ipv4: off [fixed]
> >
> >         tx-checksum-ip-generic: off
> >
> >         tx-checksum-ipv6: off [fixed]
> >
> >         tx-checksum-fcoe-crc: off [fixed]
> >
> >         tx-checksum-sctp: off [fixed]
> >
> > ...
> >
> > tx-udp_tnl-segmentation: on
> >
> > tx-udp_tnl-csum-segmentation: on
> >
> > ...
> >
> >
> > This issue doesn't exist when I use the Linux-bridge for connecting the
> VM1 and VM2.
> >
> > I was wondering if there is an issue with Open vSwitch when I keep
> tx-checksumming on?
> >
> > And I am using the following NICs which are capable of VxLAN offloading:
> > Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
> >
> >    Driver Info:
> >          Bus Info: 0000:3b:00:0
> >          Driver: nmlx5_core
> >          Firmware Version: 14.23.1020
> >          Version: 4.15.10.3
> >
> >
> > Thank you,
> > Amir
> >
> > _______________________________________________
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>
