On Thursday 15 November 2007 12:11, Herbert Xu wrote:
> On Wed, Nov 14, 2007 at 05:03:25PM -0800, Christoph Lameter wrote:
> > Well this is likely the result of the SLUB regression. If you allocate an
> > order-1 page then the zone locks need to be taken. SLAB queues a
> > couple of higher-order pages and can so serve a couple of requests without
> > taking them.
Yeah, it appears this is
On Wed, 14 Nov 2007, David Miller wrote:
> > As a result, we may allocate more than a page of data in the
> > non-TSO case when exactly one page is desired.
Well this is likely the result of the SLUB regression. If you allocate an
order-1 page then the zone locks need to be taken. SLAB queues a couple of
higher-order pages and can so serve a couple of requests without taking them.
From: Nick Piggin <[EMAIL PROTECTED]>
Date: Thu, 15 Nov 2007 11:21:36 +1100

On Thursday 15 November 2007 10:46, David Miller wrote:
> From: Herbert Xu <[EMAIL PROTECTED]>
> Date: Wed, 14 Nov 2007 19:48:44 +0800
>
> > Signed-off-by: Herbert Xu <[EMAIL PROTECTED]>
>
> Applied and I'll queue it up for -stable too.

Good result. Thanks, everyone!
From: Herbert Xu <[EMAIL PROTECTED]>
Date: Wed, 14 Nov 2007 19:48:44 +0800

> [TCP]: Fix size calculation in sk_stream_alloc_pskb
>
> We round up the header size in sk_stream_alloc_pskb so that
> TSO packets get zero tail room. Unfortunately this rounding
> up is not coordinated with the select_size() logic in
> net/ipv4/tcp.c. As a result, we may allocate more than a page
> of data in the non-TSO case when exactly one page is desired.
On Wed, 14 Nov 2007, David Miller wrote:
> > Still interested to know why SLAB didn't see the same thing...
>
> Yes, I wonder why too. I bet objects just got packed differently.
The objects are packed tightly in SLUB and SLUB can allocate smaller
objects (minimum is 8, SLAB minimum is 32).
From: Nick Piggin <[EMAIL PROTECTED]>
Date: Wed, 14 Nov 2007 11:02:11 +1100

On Wednesday 14 November 2007 22:48, Herbert Xu wrote:
> On Wed, Nov 14, 2007 at 03:10:22AM -0800, David Miller wrote:
> > So the thing that's being affected here in TCP is
> > net/ipv4/tcp.c:select_size(), specifically the else branch:
>
> Thanks for the pointer. Indeed there is a bug in that area.
On Wed, Nov 14, 2007 at 03:10:22AM -0800, David Miller wrote:
>
> So the thing that's being affected here in TCP is
> net/ipv4/tcp.c:select_size(), specifically the else branch:
Thanks for the pointer. Indeed there is a bug in that area.
I'm not sure whether it's causing the problem at hand but
On Wednesday 14 November 2007 22:10, David Miller wrote:
> From: Nick Piggin <[EMAIL PROTECTED]>
> Date: Wed, 14 Nov 2007 09:27:39 +1100
>
> > OK, in vanilla kernels, the page allocator definitely shows higher
> > in the results (than with Herbert's patch reverted).
> ...
> > I can't see that these numbers show much useful, unfortunately.
>
> Thanks for all of this data
On Wednesday 14 November 2007 09:27, Nick Piggin wrote:
> > 2) Try removing NETIF_F_SG in drivers/net/loopback.c's dev->features
> >setting.
>
> Will try that now.
Doesn't help (with vanilla kernel -- Herbert's patch applied).
data_len histogram drops to 0 and goes to len (I guess that's not
On Wednesday 14 November 2007 17:37, David Miller wrote:
> From: Nick Piggin <[EMAIL PROTECTED]>
> > I'm doing some oprofile runs now to see if I can get any more info.
OK, in vanilla kernels, the page allocator definitely shows higher
in the results (than with Herbert's patch reverted).
27516
From: Nick Piggin <[EMAIL PROTECTED]>
Date: Wed, 14 Nov 2007 05:14:27 +1100
> On Wednesday 14 November 2007 17:12, David Miller wrote:
> > Is your test system using HIGHMEM?
> >
> > That's one thing the page vector in the sk_buff can do a lot,
> > kmaps.
>
> No, it's an x86-64, so no highmem.
Ok.
On Wednesday 14 November 2007 17:12, David Miller wrote:
> From: Nick Piggin <[EMAIL PROTECTED]>
> Date: Wed, 14 Nov 2007 04:36:24 +1100
>
> > On Wednesday 14 November 2007 12:58, David Miller wrote:
> > > I suspect the issue is about having a huge skb->data linear area for
> > > TCP sends over loopback. We're likely getting a much smaller
From: Nick Piggin <[EMAIL PROTECTED]>
Date: Wed, 14 Nov 2007 04:36:24 +1100
> On Wednesday 14 November 2007 12:58, David Miller wrote:
> > I suspect the issue is about having a huge skb->data linear area for
> > TCP sends over loopback. We're likely getting a much smaller
> > skb->data linear data area
On Wednesday 14 November 2007 12:58, David Miller wrote:
> From: Nick Piggin <[EMAIL PROTECTED]>
> Date: Tue, 13 Nov 2007 22:41:58 +1100
>
> > On Tuesday 13 November 2007 06:44, Christoph Lameter wrote:
> > > On Sat, 10 Nov 2007, Nick Piggin wrote:
> > > > BTW. your size-2048 kmalloc cache is
From: Nick Piggin <[EMAIL PROTECTED]>
Date: Tue, 13 Nov 2007 22:41:58 +1100
> On Tuesday 13 November 2007 06:44, Christoph Lameter wrote:
> > On Sat, 10 Nov 2007, Nick Piggin wrote:
> > > BTW. your size-2048 kmalloc cache is order-1 in the default setup,
> > > whereas kmalloc(1024) or
On Tuesday 13 November 2007 06:44, Christoph Lameter wrote:
> On Sat, 10 Nov 2007, Nick Piggin wrote:
> > BTW. your size-2048 kmalloc cache is order-1 in the default setup,
> > whereas kmalloc(1024) or kmalloc(4096) will be order-0 allocations. And
> > SLAB also uses order-0 for size-2048. It would be nice if SLUB did the
> > same...
On Sat, 10 Nov 2007, Nick Piggin wrote:
> BTW. your size-2048 kmalloc cache is order-1 in the default setup,
> whereas kmalloc(1024) or kmalloc(4096) will be order-0 allocations. And
> SLAB also uses order-0 for size-2048. It would be nice if SLUB did the
> same...
You can try to see the effect
On Saturday 10 November 2007 12:29, Nick Piggin wrote:
> cc'ed linux-netdev
Err, make that 'netdev' :P
> On Saturday 10 November 2007 10:46, Christoph Lameter wrote:
> > commit deea84b0ae3d26b41502ae0a39fe7fe134e703d0 seems to cause a drop
> > in SLUB tbench performance:
> >
> > 8p x86_64 system:
cc'ed linux-netdev
On Saturday 10 November 2007 10:46, Christoph Lameter wrote:
> commit deea84b0ae3d26b41502ae0a39fe7fe134e703d0 seems to cause a drop
> in SLUB tbench performance:
>
> 8p x86_64 system:
>
> 2.6.24-rc2:
> 1260.80 MB/sec
>
> After reverting the patch:
> 2350.04 MB/sec
commit deea84b0ae3d26b41502ae0a39fe7fe134e703d0 seems to cause a drop
in SLUB tbench performance:
8p x86_64 system:
2.6.24-rc2:
1260.80 MB/sec
After reverting the patch:
2350.04 MB/sec
SLAB performance (which is at 2435.58 MB/sec, ~3% better than SLUB) is not
affected by the patch.