Rafael David Tinoco rafael.tin...@canonical.com writes:
> Hello Eric,
>
> Coming back to this...
>
> On Jun 16, 2014, at 12:01 PM, Rafael Tinoco rafael.tin...@canonical.com wrote:
>> ...
>>
>> On Fri, Jun 13, 2014 at 9:02 PM, Eric W. Biederman
>> ebied...@xmission.com wrote:
>>> Rafael Tinoco writes:

Rafael Tinoco rafael.tin...@canonical.com writes:
> Okay,
>
> Tests with the same script were done.
> I'm comparing: master + patch vs 3.15.0-rc5 (last sync'ed rcu commit)
> and 3.9 last bisect good.
>
> Same tests were made. I'm comparing the following versions:
>
> 1) master + suggested patch
> 2) 3.15.0-rc5 (last rcu commit in my clone)
> 3) 3.9-rc2 (last
Ok, some misconfiguration here probably, never mind. I'll finish the
tests tomorrow, compare with existent ones and let you know asap. Tks.

On Wed, Jun 11, 2014 at 10:09 PM, Eric W. Biederman
ebied...@xmission.com wrote:
> Rafael Tinoco writes:
>
>> I'm getting a kernel panic with your patch:
>>
>> -- panic
>> -- mount_block_root
>> -- mount_root
>> -- prepare_namespace
>> -- kernel_init_freeable
>>
>> It is giving me an unknown block device for the same config file I
>> used on other builds. Since my test is running on a kvm guest under a
>> ramdisk, I'm still checking
"Paul E. McKenney" paul...@linux.vnet.ibm.com writes:
> On Wed, Jun 11, 2014 at 01:46:08PM -0700, Eric W. Biederman wrote:
>> On the chance it is dropping the old nsproxy which calls synchronize_rcu
>> in switch_task_namespaces that is causing you problems I have attached
>> a patch that changes from rcu_read_lock to
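For readers following along: why this one call matters at scale can be sketched in userspace. If each namespace teardown blocks in synchronize_rcu, destroying N namespaces serializes N grace periods; deferring the free with a callback (the call_rcu pattern) lets a single grace period cover the whole batch. A toy Python simulation of that cost model (the sleep is a stand-in for a grace period; this is not kernel code):

```python
import time

GRACE_PERIOD = 0.01  # stand-in for an RCU grace period, in seconds

def teardown_sync(n):
    """Each teardown blocks for its own grace period (synchronize_rcu style)."""
    start = time.monotonic()
    for _ in range(n):
        time.sleep(GRACE_PERIOD)  # the caller waits once per namespace
    return time.monotonic() - start

def teardown_deferred(n):
    """Teardowns only queue their frees; one grace period covers the
    whole batch (call_rcu style)."""
    start = time.monotonic()
    pending = [object() for _ in range(n)]  # queued callbacks
    time.sleep(GRACE_PERIOD)                # a single grace period for all n
    pending.clear()                         # the deferred frees run
    return time.monotonic() - start

print(f"sync:     {teardown_sync(50):.3f}s")      # roughly 50 grace periods
print(f"deferred: {teardown_deferred(50):.3f}s")  # roughly 1 grace period
```

The gap grows linearly with the number of namespaces, which matches the mass-teardown behavior discussed in this thread.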
On 06/11/2014 03:46 PM, Eric W. Biederman wrote:
> ip netns add also performs a bind mount so we get into all of the vfs
> level locking as well.

It's actually quite a bit worse than that as ip netns exec creates a new
mount namespace as well. That being said, the vfs issues have been
healthily
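As a rough illustration of the point above, the per-namespace work can be broken down as follows (an approximation for illustration, not iproute2's actual code path):

```python
# Approximate, illustrative breakdown (not iproute2 source) of the
# per-namespace work discussed above.
def netns_add_steps(name):
    """Steps `ip netns add <name>` roughly performs."""
    return [
        f"touch /var/run/netns/{name}",           # placeholder file
        "unshare(CLONE_NEWNET)",                  # create the net namespace
        f"mount --bind /proc/self/ns/net /var/run/netns/{name}",  # vfs work
    ]

def netns_exec_steps(name):
    """Steps `ip netns exec <name> cmd` roughly performs."""
    return [
        "unshare(CLONE_NEWNS)",                   # fresh mount namespace per exec
        f"setns /var/run/netns/{name}",           # enter the target netns
    ]

for step in netns_add_steps("gw0") + netns_exec_steps("gw0"):
    print(step)
```

Every add thus touches the vfs (bind mount), and every exec additionally pays for a brand-new mount namespace, which is why both paths show up in the scalability numbers.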
Eric,

I'll test the patch with the same testcase and let you all know.
Really appreciate everybody's efforts.

On Wed, Jun 11, 2014 at 5:55 PM, Eric W. Biederman
ebied...@xmission.com wrote:
> "Paul E. McKenney" writes:
>
>> On Wed, Jun 11, 2014 at 01:27:07PM -0500, Dave Chiluk wrote:
>>> On 06/11/2014 11:18 AM,
On Wed, Jun 11, 2014 at 10:46:00AM -0500, David Chiluk wrote:
> Now think about what happens when a gateway goes down, the namespaces
> need to be migrated, or a new machine needs to be brought up to replace
> it. When we're talking about 3000
On 06/11/2014 10:17 AM, Rafael Tinoco wrote:
> This script simulates a failure on a cloud infrastructure, for example: as
> soon as one virtualization host fails, all its network namespaces have to be
> migrated to another node. Creating thousands of netns in the shortest time
> possible is the objective
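The kind of mass-creation loop being benchmarked can be sketched like this (a minimal sketch under assumed names, not the thread's actual script; the dry run below only prints the commands):

```python
import subprocess

def create_commands(n):
    """The per-namespace commands a mass-creation benchmark runs (assumed
    naming scheme, for illustration)."""
    return [f"ip netns add bench-{i}" for i in range(1, n + 1)]

def run(n):
    """Actually create n namespaces: needs root and iproute2 installed."""
    for cmd in create_commands(n):
        subprocess.run(cmd.split(), check=True)

# Dry run: show the first few of the 3000 commands instead of executing them.
for cmd in create_commands(3000)[:3]:
    print(cmd)
```

Timing `run(3000)` before and after a candidate patch is essentially what the charts in this thread compare.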
I am having a really hard time distinguishing the colors on both charts
(yeah, red-green colorblind, go figure). Any chance of brighter colors,
patterned lines, or (better yet) the data in tabular form (for example,
with the configuration choices as columns and the releases/commits
as rows)?

On Wed, Jun 11, 2014 at 02:52:09AM -0300, Rafael Tinoco wrote:
> Paul E. McKenney, Eric Biederman, David Miller (and/or anyone else
> interested):
>
> It was brought to my attention that netns creation/execution might
> have suffered scalability/performance regression after v3.8.
>
> I would like you, or anyone interested, to review these charts/data
> and check if there