On 29/03/2016 06:08, "Flavio Leitner" <f...@redhat.com> wrote:

>On Tue, Mar 29, 2016 at 02:13:18AM +0000, Daniele Di Proietto wrote:
>> Hi Flavio and Karl,
>> 
>> thanks for the patch! I have a couple of comments:
>> 
>> Can you point out a configuration where this is the bottleneck?
>> I'm interested in reproducing this.
>
>Karl, since you did the tests, could you please provide more details?
>
>
>> I think the implementation would look simpler if we could
>> avoid explicitly taking the mutex in dpif-netdev and instead
>> have an ovsrcu_try_quiesce(). What do you think?
>
>My concern is that the sweep frees one entry from the EMC each
>round and must quiesce to allow the postponed RCU callbacks to
>run.  If, for some reason, it fails to quiesce for a long
>period, then it might backlog a significant number of entries.
>
>
>> I think we can avoid the recursive mutex as well if we introduce
>> some explicit APIs in seq (seq_try_lock, seq_change_protected and
>> seq_unlock), but I'd like to understand the performance
>> implications of this commit first.
>
>The issue is the latency spike when the PMD thread blocks on the
>busy mutex.
>
>The goal with recursive locking is to make sure we can sweep
>the EMC cache and quiesce without blocking.  Fixing the seq API
>would help us avoid blocking, but then we have no control over
>whether both tasks happen in the same round.
>
>fbl

If I understand your concerns correctly, I think we can have something
like:

if (ovsrcu_try_quiesce()) {
    ...
    emc_cache_slow_sweep();
    ...
}
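
To be explicit about where this would live: assuming
ovsrcu_try_quiesce() follows the errno-style convention used elsewhere
in OVS (0 on success, EBUSY if the seq mutex is busy), the end of each
pmd_thread_main() round in lib/dpif-netdev.c would look something like
this sketch:

/* End of one polling round in pmd_thread_main().  A sketch only:
 * ovsrcu_try_quiesce() does not exist yet, and I am assuming it
 * returns 0 on success and EBUSY otherwise. */
if (!ovsrcu_try_quiesce()) {
    /* We quiesced without blocking, so RCU callbacks postponed in
     * earlier rounds can run; it is now safe to postpone freeing
     * one more EMC entry. */
    emc_cache_slow_sweep(&pmd->flow_cache);
}

If the mutex is busy we skip the sweep as well, so the cache can only
accumulate unswept entries while quiescing keeps failing, never
swept-but-unfreed ones.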

Sure, the swept flows will need to wait another round to actually get
freed, but I think this is OK.
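
For the seq part, the new APIs could be roughly as below (a sketch
against lib/seq.c: seq_mutex, seq_next and seq_wake_waiters() are the
existing internals there, but treat the exact bodies as my
assumption):

/* lib/seq.c -- sketch of the proposed additions. */

/* Tries to take the global seq mutex without blocking.  Returns 0 on
 * success, EBUSY if another thread holds it. */
int
seq_try_lock(void)
{
    return ovs_mutex_trylock(&seq_mutex);
}

/* Like seq_change(), but the caller must already hold seq_mutex,
 * e.g. via a successful seq_try_lock(). */
void
seq_change_protected(struct seq *seq)
    OVS_REQUIRES(seq_mutex)
{
    seq->value = seq_next++;
    seq_wake_waiters(seq);
}

void
seq_unlock(void)
    OVS_RELEASES(seq_mutex)
{
    ovs_mutex_unlock(&seq_mutex);
}

ovsrcu_try_quiesce() would then start with seq_try_lock(), return
EBUSY if the mutex is busy, and otherwise flush its callback set, bump
the global seqno with seq_change_protected(), and seq_unlock().  No
blocking anywhere on the PMD path, and no recursive mutex needed.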
