On Mon, May 19, 2014 at 3:33 PM, Andrey Korolyov <[email protected]> wrote:
> On Sat, May 17, 2014 at 12:02 AM, Andrey Korolyov <[email protected]> wrote:
>> On Fri, May 16, 2014 at 10:16 PM, Ben Pfaff <[email protected]> wrote:
>>> On Fri, May 16, 2014 at 09:38:27PM +0400, Andrey Korolyov wrote:
>>>> Can anyone please explain the following performance impact:
>>>>
>>>> Given the following snippet of the forwarding table (a one-way traffic
>>>> cleaner), I am observing a very large CPU impact on ovs-vswitchd in a
>>>> specific test for IP address violation, but it seemingly comes from
>>>> nowhere - packets should be silently dropped by the kernel module,
>>>> since there is definitely a 'drop' action for a unicast IP flood from
>>>> the following:
>>>>
>>>> hping3 -1 --flood --rand-source 10.0.0.51 <-- active neighbour
>>>>
>>>> As one can see from the counters, packets are being dropped, but CPU
>>>> consumption of ovs-vswitchd rises to four cores on an E5 CPU instead
>>>> of the packets being silently blackholed by the kernel module. When no
>>>> path to a neighbor with that address exists (for example, if one shuts
>>>> down the interface holding 10.0.0.51), the overhead disappears. Does
>>>> anyone have a pointer on what could be tuned so that the vswitchd
>>>> process does not heat up so much? I am using the 2.1 series userspace
>>>> tools.
>>>
>>> I think you need to turn on prefix tracking to get the best
>>> performance out of this flow table. Try this:
>>>
>>> ovs-vsctl \
>>> -- set Bridge br0 flow_tables:2=@N2 \
>>> -- --id=@N2 create Flow_Table name=table2 prefixes=ip_dst,ip_src
>>>
>>> You'll need to replace br0 by the name of your bridge.
>>>
>>> We've had discussion of enabling prefix tracking by default. I
>>> thought that we had concluded that it was a good idea, but it doesn't
>>> seem to be on master yet. I'll follow up on that.
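Whether the setting from the command above actually took effect can be checked by reading the record back from the database; a minimal sketch (the `--columns` option is standard ovs-vsctl syntax, and `table2` is the name used in the command above):

```shell
# Read back the Flow_Table records and confirm that the prefixes
# column lists ip_dst and ip_src for table2.
ovs-vsctl --columns=name,prefixes list Flow_Table
```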
>>
>> Thanks, but setting this made absolutely no difference in vswitchd
>> consumption. I believe there is a very simple knob that I am unaware
>> of, given the huge difference in flood overhead between a reachable
>> and an unreachable (in the same L2 segment) target.
>
> A little follow-up: on a passthrough (e.g. legitimate traffic from a
> lot of /32s), prefix tracking cuts CPU consumption roughly threefold.
> The flood case from above is still unresolved for me, though, and I'd
> be grateful for any hints on what to check.
Just replying to myself again :)
I may have to redefine the question - is there a way, currently
available or planned, to deterministically manage wildcard flows in the
datapath? Although the hit/miss ratio looks pretty good to me, I think
it would be easy to eliminate the tail that is currently hitting me by
creating the wildcard flow programmatically. Here is a dump from a
production system which serves a couple of VMs with rules like the one
I mentioned in the first message.
lookups: hit:133423937 missed:17311742 lost:0
flows: 1726
masks: hit:317565778 total:3 hit/pkt:2.11
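For reference, the hit/pkt figure in that dump is just the masks-hit counter divided by total packets (hits plus misses); a small sketch recomputing the ratios, with the numbers copied from the dump above:

```shell
# Counters copied from the datapath dump above.
hit=133423937
missed=17311742
mask_hits=317565778

total=$((hit + missed))
# About 88.5% of packets matched an existing datapath flow; each packet
# probed the 3 masks roughly 2.11 times on average.
awk -v h="$hit" -v t="$total" -v m="$mask_hits" \
    'BEGIN { printf "hit ratio: %.1f%%  masks hit/pkt: %.2f\n", 100*h/t, m/t }'
```

A lower masks hit/pkt means fewer megaflow masks are tried per packet, which is exactly what prefix tracking is meant to improve.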
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss