Sorry! This time wrapped :)

Hello all,

we are experimenting with an OpenStack-based OVS setup and are looking into
its performance. Specifically, we are running Linux kernel 4.4.21 with
OVS 2.5.1.

We observe the following behavior: when GRO is enabled in the receiving NIC
driver (tested with multiple HW & drivers), we see large TCP segments being
passed up from the NIC (observed via tcpdump). However, the OVS module seems to
refragment those segments back into MTU-sized packets, i.e. tcpdump on the
qvo/tap/etc. interfaces shows small segments again. This causes considerable
overhead because of the subsequent per-packet costs.

At various places on the web it is suggested to switch off the NIC driver's
GRO; when we do so, OVS kicks in its own GRO, which performs better than
low-level GRO followed by refragmentation.
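For reference, this is roughly how we toggle GRO and check the result; a
sketch only, and the interface names (eth0, the qvoXXXX placeholder) are
assumptions that need to be adapted to the actual setup:

```shell
# Show the current GRO state of the physical NIC (eth0 is an assumption):
ethtool -k eth0 | grep generic-receive-offload

# Disable driver-level GRO:
sudo ethtool -K eth0 gro off

# Watch segment sizes on the OVS side (replace qvoXXXX with the real port):
sudo tcpdump -i qvoXXXX -nn -v tcp
```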

However, the ideal case would be NIC-driver GRO with no refragmentation in the
network stack. This is the behavior we saw with OVS 2.1.2.

Does anybody have a clue what causes this refragmentation and how to disable it?

Thanks,
Marc

SAP SE
_______________________________________________
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
