I found a similar issue.
We found that the CPU usage is being consumed by a function in the OVS code.
The function name is ovs_dp_notify_wq.

This function keeps checking the network status again and again. Does anyone know why?
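
For context, ovs_dp_notify_wq (net/openvswitch/dp_notify.c in the kernel datapath) is a workqueue callback that rescans every vport of every datapath whenever a netdev notifier event fires, detaching ports whose underlying device is gone. A burst of notifier events (link flaps, bonding failover churn) re-runs that full scan each time, which is one plausible explanation for the CPU usage. Below is a minimal standalone sketch of that rescan pattern; the struct layouts, field names, and helper here are illustrative stubs, not the kernel's actual definitions:

/* Sketch of the rescan pattern used by ovs_dp_notify_wq.
 * All types and names below are stubs for illustration only;
 * the real function walks kernel hash lists under ovs_lock().
 */
#include <stdbool.h>
#include <stdio.h>

struct vport {
    const char *name;
    bool dev_still_attached;   /* the real code checks the netdev's state */
};

struct datapath {
    struct vport *ports;
    int n_ports;
};

/* Invoked from a workqueue on every netdev notifier event.
 * Note the full O(datapaths x vports) rescan per event: a flood of
 * notifications re-runs this loop each time, burning CPU.
 */
static void dp_notify_rescan(struct datapath *dps, int n_dps)
{
    for (int d = 0; d < n_dps; d++) {
        for (int p = 0; p < dps[d].n_ports; p++) {
            struct vport *vport = &dps[d].ports[p];
            if (!vport->dev_still_attached)
                printf("detaching stale port %s\n", vport->name);
        }
    }
}

int main(void)
{
    struct vport ports[] = {
        { "eth0", true },
        { "vxlan_sys_4789", false },  /* pretend its netdev went away */
    };
    struct datapath dp = { ports, 2 };

    dp_notify_rescan(&dp, 1);  /* in the kernel this runs once per event */
    return 0;
}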

On 2014-10-27 at 12:25 PM, "Xu (Simon) Chen" <[email protected]> wrote:

Hey folks,

I've been trying to leverage vxlan hardware offload (checksum) to improve tunnel performance.

If I run vxlan tunnels over a single 10Gbps interface, I can achieve roughly 9Gbps throughput between VMs with MTU-1500 vNICs. Without hardware offload, the performance is much worse.

With bonding (2x10G), however, the performance doesn't go above 8Gbps. I've tried bonding via ifenslave (6.5Gbps) as well as OVS bonding (8Gbps).

Any ideas why bonding seems to negate the hardware offload capability? Any recommendations on configuration to fully leverage such hardware?

Thanks.
-Simon