Are eth3 and eth4 on the same network segment?  If so, I'd guess you've 
introduced a loop.
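
One quick way to check (just a sketch, assuming your bridge really is br0) is 
to watch the MAC learning table and see whether addresses keep flapping 
between ports 1 and 2:

$ ovs-appctl fdb/show br0

If the same MAC keeps moving back and forth between port 1 and port 2 each 
time you run that, you almost certainly have a loop.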

I wouldn't recommend setting your eviction threshold that high, since OVS has 
to do a lot of work to maintain so many kernel flows.  I wouldn't go above 
tens of thousands of flows.  What do your kernel flows look like?  There are 
too many to post them all here, but maybe you can provide a sample of a couple 
hundred.  Do you see any patterns?
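
To grab that sample, something like this should work:

$ ovs-dpctl dump-flows br0 | head -n 200

And to bring the eviction threshold back down to something saner (I'm going 
from memory on the exact knob, but in 1.4 I believe it lives in the bridge's 
other-config, so double-check the key name):

$ ovs-vsctl set bridge br0 other-config:flow-eviction-threshold=10000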

--Justin


On Jun 4, 2012, at 10:40 PM, Kaushal Shubhank wrote:

> Hello,
> 
> We have a simple setup in which a server running a transparent proxy needs 
> to intercept HTTP traffic on port 80. We have installed Open vSwitch (1.4.1) 
> on the same server (running Ubuntu natty, 2.6.38-12-server, 64-bit) to feed 
> the port-80 packets to the proxy while bridging all other traffic. This 
> works correctly, but CPU usage is quite high (~30% for 20 Mbps of traffic). 
> The total load we need to handle is around 350 Mbps, and as soon as we plug 
> that in, CPU usage shoots up to 100% (on a quad-core Intel(R) Xeon(R) E5420 
> @ 2.50GHz), even when we simply let all packets flow through br0. Packet 
> loss also starts to occur.
> 
> After reading similar discussions in previous threads, I enabled STP on the 
> bridge and increased the flow-eviction-threshold to "1000000". The CPU load 
> is still high due to misses in the kernel flow table. I have defined only 
> the following flows:
> 
> $ ovs-ofctl dump-flows br0
> 
> NXST_FLOW reply (xid=0x4):
>  cookie=0x0, duration=80105.621s, table=0, n_packets=61978784, 
> n_bytes=7438892513, priority=100,tcp,in_port=1,tp_dst=80 
> actions=mod_dl_dst:00:e0:ed:15:24:4a,LOCAL
>  cookie=0x0, duration=80105.501s, table=0, n_packets=49343241, 
> n_bytes=113922939324, priority=100,tcp,dl_src=00:e0:ed:15:24:4a,tp_src=80 
> actions=output:1
>  cookie=0x0, duration=518332.577s, table=0, n_packets=3052099665, 
> n_bytes=2041603012562, priority=0 actions=NORMAL
>  cookie=0x0, duration=80105.586s, table=0, n_packets=46209782, 
> n_bytes=109671221356, priority=100,tcp,in_port=2,tp_src=80 
> actions=mod_dl_dst:00:e0:ed:15:24:4a,LOCAL
>  cookie=0x0, duration=80105.601s, table=0, n_packets=40389137, 
> n_bytes=5660094662, priority=100,tcp,dl_src=00:e0:ed:15:24:4a,tp_dst=80 
> actions=output:2
> 
> where 00:e0:ed:15:24:4a is br0's MAC address
> 
> $ ovs-dpctl show
> 
> system@br0:
>       lookups: hit:3105457869 missed:792488043 lost:903955 (the lost 
> packets appeared under the 350 Mbps load and do not change at 20 Mbps)
>       flows: 12251
>       port 0: br0 (internal)
>       port 1: eth3
>       port 2: eth4
> 
> As far as we understand, each of these missed packets triggers an upcall to 
> the userspace daemon, which is what drives CPU usage up. Let me know if any 
> other details about the setup are required.
> 
> Is there anything else we can do to reduce CPU usage?
> Can the flows above be improved in some way?
> Is there any other configuration for a production deployment that we missed?
> 
> Regards,
> Kaushal

_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
