Hi,
I think this is a FAQ, but I seem to have missed any clue about this
issue.
I have recently (test-)migrated our KVM-based virtualisation
infrastructure from one-standard-bridge-per-VLAN to a fully VLAN-aware
openvswitch (Debian testing, which means kernel 3.2.32 with
openvswitch-datapath-dkms 1.4.2) because it feels much more natural to
use.
I don't think the actual configuration matters a lot, but here it is:
- two GigE ports in active-standby bond0 to upstream switches
- two GigE ports in LACP bond1 to partner
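For reference, the bond setup corresponds roughly to the following
(interface names eth0..eth3 are placeholders, not my actual NIC names):

```shell
# Hypothetical reconstruction of the bonding described above.
ovs-vsctl add-bond br0 bond0 eth0 eth1              # to upstream switches
ovs-vsctl set port bond0 bond_mode=active-backup    # active-standby
ovs-vsctl add-bond br0 bond1 eth2 eth3 lacp=active  # LACP to partner
```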
I have ruled out any CPU load issues mentioned in the FAQ like loops and
broadcast storms.
My problem is that when I move some high-traffic VMs to this host, the
CPU usage of ovs-vswitchd increases massively. I'm reasonably sure this
is because a flow is set up for each L4 connection through the vswitch,
which in this case is highly unfortunate (a firewall VM with a lot of
concurrent sessions).
root@virt1:~# ovs-dpctl show
system@br0:
        lookups: hit:5200011861 missed:829071115 lost:279074
        flows: 34392
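To put those counters in perspective, here is a quick sketch of the
arithmetic (numbers taken from the output above):

```python
# Datapath flow-cache statistics from the ovs-dpctl output above.
hit = 5200011861     # packets matched by an installed kernel flow
missed = 829071115   # packets sent to userspace for flow setup
lost = 279074        # packets dropped before userspace could handle them

total = hit + missed
print(f"hit ratio: {hit / total:.1%}, missed: {missed / total:.1%}, lost: {lost}")
# → hit ratio: 86.2%, missed: 13.8%, lost: 279074
```

So more than one packet in eight takes the slow path through
ovs-vswitchd, which would explain the CPU load.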
I have already set other_config:flow-eviction-threshold=100000,
otherwise it would be a lot worse. But I still have a CPU load that is
several times higher than it was before with the standard bridge, and I
still see occasional loss during high traffic periods.
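For the archives, that setting was applied roughly like this (if I read
the documentation for this release correctly, the option lives in the
Bridge table's other_config column):

```shell
# Raise the kernel flow-table eviction threshold for bridge br0;
# the default is considerably lower than 100000.
ovs-vsctl set bridge br0 other_config:flow-eviction-threshold=100000
```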
I don't actually need all that fancy OpenFlow stuff; in my case,
traditional switching based on MAC learning and destination-MAC lookup
would suffice. I cannot find any information on whether that is
possible.
Thanks,
Bernhard
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss