Hello Eric,

Coming back to this...

On Jun 16, 2014, at 12:01 PM, Rafael Tinoco <rafael.tin...@canonical.com> wrote:

> ...
> 
> On Fri, Jun 13, 2014 at 9:02 PM, Eric W. Biederman
> <ebied...@xmission.com> wrote:
>> Rafael Tinoco <rafael.tin...@canonical.com> writes:
>> 
>>> Okay,
>>> 
>>> Tests with the same script were done. I'm comparing the following versions:
>>> 
>>> 1) master + suggested patch
>>> 2) 3.15.0-rc5 (last rcu commit in my clone)
>>> 3) 3.9-rc2 (last bisect good)
>> 
>> I am having a hard time making sense of your numbers.
>> 
>> If I have read your email correctly my suggested patch caused:
>> "ip netns add" numbers to improve
>> 1x "ip netns exec" to improve some
>> 2x "ip netns exec" to show no improvement
>> "ip link add" to show no effect (after the 2x ip netns exec)
> 
> - "netns add" are as good as they were before this regression.
> - "netns exec" are improved but still 50% of the last good bisect commit.
> - "link add" didn't show difference.
> 
>> This is interesting in a lot of ways.
>> - This seems to confirm that the only rcu usage in "ip netns add"
>>  was switch_task_namespaces, which is convenient as that rules
>>  out most of the network stack when looking for performance oddities.
>> 
>> - "ip netns exec" had an expected performance improvement
>> - "ip netns exec" is still slow (so something odd is still going on)
>> - "ip link add" appears immaterial to the performance problem.
>> 
>> It would be interesting to switch the "ip link add" and "ip netns exec"
>> in your test case to confirm that there is nothing interesting/slow
>> going on in "ip link add"
> 
> - will do that.

"ip link add" seems OK.

> 
>> 
>> Which leaves me with the question of what in "ip netns exec" remains
>> that is using rcu and is slowing all of this down.
> 
> - will check this also.

Based on my tests (and on other users who deployed this patch on a server
farm), it looks like changing rcu_read_lock() to task_lock() did the trick.
We are getting the same (and sometimes much better) results, compared to the
last good bisect commit, for a large number of netns being created
simultaneously.
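
For reference, here is a rough sketch of the kind of change I mean (not the
literal patch; I'm assuming it applies to switch_task_namespaces() in
kernel/nsproxy.c and to readers of task->nsproxy -- the example_get_nsproxy()
helper below is hypothetical, only there to illustrate the reader side):

/*
 * Sketch only: protect task->nsproxy with task_lock() instead of RCU,
 * so dropping the old nsproxy no longer waits for an RCU grace period.
 */
#include <linux/nsproxy.h>
#include <linux/sched.h>

void switch_task_namespaces(struct task_struct *p, struct nsproxy *new)
{
	struct nsproxy *ns;

	might_sleep();

	/* was: ns = p->nsproxy; rcu_assign_pointer(p->nsproxy, new); */
	task_lock(p);
	ns = p->nsproxy;
	p->nsproxy = new;
	task_unlock(p);

	/*
	 * was: synchronize_rcu() before free_nsproxy(ns) -- a per-call
	 * grace-period wait that serializes "ip netns add" callers.
	 */
	if (ns && atomic_dec_and_test(&ns->count))
		free_nsproxy(ns);
}

/*
 * Reader side (hypothetical helper, for illustration only): take
 * task_lock() instead of rcu_read_lock() when sampling another
 * task's nsproxy.
 */
static struct nsproxy *example_get_nsproxy(struct task_struct *tsk)
{
	struct nsproxy *ns;

	task_lock(tsk);		/* was: rcu_read_lock() */
	ns = tsk->nsproxy;	/* was: rcu_dereference(tsk->nsproxy) */
	if (ns)
		atomic_inc(&ns->count);
	task_unlock(tsk);	/* was: rcu_read_unlock() */
	return ns;
}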

Is it possible to make this change permanent in the kernel tree?

I much appreciate your attention, Eric.

Regards

Rafael Tinoco

> 
>> Eric
> 
> Tks
> 
> Rafael
