On Fri, 14 Dec 2007, Ingo Molnar wrote:
> which is of little help if it regresses on other workloads. As we've
> seen it, SLUB can be more than 10 times slower on hackbench. You can
> tune SLUB to use 2MB pages but of course that's not a production level
> system. OTOH, have you tried to tune SLAB
* Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > I think we should make SLAB the default for v2.6.24 ...
>
> If you guarantee that all the regressions of SLAB vs. SLUB are
> addressed then that's fine but AFAICT that is not possible.
huh? You got the ordering wrong ;-) SLUB needs to resolve all
Christoph Lameter wrote:
On Sat, 8 Dec 2007, Ingo Molnar wrote:
Good. Although we should perhaps look at that reported performance
problem with SLUB. It looks like SLUB will do a memclear() for the
area twice (first for the whole page, then for the thing it allocated)
for the slow case.
On Sat, 8 Dec 2007, Ingo Molnar wrote:
>
> > Good. Although we should perhaps look at that reported performance
> > problem with SLUB. It looks like SLUB will do a memclear() for the
> > area twice (first for the whole page, then for the thing it allocated)
> > for the slow case. Maybe that
On Sun, 9 Dec 2007, Ingo Molnar wrote:
> unless i'm missing something obvious (and i easily might), i see SLUB as
> SLAB reimplemented with a different queueing model. Not "without
> queueing".
The "queue" that you are talking about is the freelist of a slab. It exist
for each slab. SLAB uses
On Sat, 8 Dec 2007, Linus Torvalds wrote:
> On that note, shouldn't we also do this for slub.c? Christoph?
SLUB does not pass __GFP_ZERO to the page allocator.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo
On Tue, 2007-12-11 at 09:52 +0100, Ingo Molnar wrote:
> * Dave Jones <[EMAIL PROTECTED]> wrote:
> > Which leaves my only other gripe. It broke slabtop.
>
> that's actually a _bad_ ABI regression. Rafael, could you please add
> this to the regressions list?
>
> > There's an alternative
* Dave Jones <[EMAIL PROTECTED]> wrote:
> On Sat, Dec 08, 2007 at 08:52:11PM +0100, Ingo Molnar wrote:
>
> > so even today's upstream kernel, which has 'ancient' SLUB code, SLAB and
> > SLUB have essentially the same linecount:
> >
> > $ wc -l mm/slab.c mm/slub.c
> > 4478 mm/slab.c
> > 4125
On Sat, Dec 08, 2007 at 08:52:11PM +0100, Ingo Molnar wrote:
> so even today's upstream kernel, which has 'ancient' SLUB code, SLAB and
> SLUB have essentially the same linecount:
>
> $ wc -l mm/slab.c mm/slub.c
> 4478 mm/slab.c
> 4125 mm/slub.c
>
> (and while linecount !=
On Sun, 9 Dec 2007 10:20:19 +0200
"Pekka Enberg" <[EMAIL PROTECTED]> wrote:
> Now, while SLAB code is "pleasant and straightforward code" (thanks,
> btw) for UMA, it's really hairy for NUMA plus the "alien caches" eat
> tons of memory
.. and they make slab slower on numa systems for database workloads
On Sunday, 9 of December 2007, Ingo Molnar wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > It's a kmap_atomic() debugging patch which I wrote ages ago and which
> > Ingo sucked into his tree. I don't _think_ this warning is present in
> > your tree at all.
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > The problem is that for each cache, you have an "per-node alien
> > queues" for each node (see struct kmem_cache nodelists -> struct
> > kmem_list3 alien). Moving slab metadata to struct page solves this
> > but now you can only have one "queue"
* Pekka Enberg <[EMAIL PROTECTED]> wrote:
> I mostly live in the legacy 32-bit UMA/UP land still so I cannot
> verify this myself but the kind folks at SGI claim the following
> (again from the announcement):
>
> "On our systems with 1k nodes / processors we have several gigabytes
> just
Hi Ingo,
On Dec 9, 2007 10:50 AM, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> yes, i understand the initial announcement (and the Kconfig entry still
> says the same), but that is not matched up by the reality i see in the
> actual code - SLUB clearly uses a queue/list of objects (as cited in my
>
* Pekka Enberg <[EMAIL PROTECTED]> wrote:
> Hi Ingo,
>
> On Dec 8, 2007 10:29 PM, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > so it has a "free list", which is clearly per cpu. Hang on! Isnt that
> > actually a per CPU queue? Which SLUB has not, we are told? The "U" in
> > SLUB. How on earth can
Hi Linus,
On Dec 8, 2007 7:54 PM, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> diff --git a/mm/slob.c b/mm/slob.c
> index ee2ef8a..773a7aa 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -330,7 +330,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int
> align, int node)
>
> /* Not
Hi Ingo,
On Dec 8, 2007 10:29 PM, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> so it has a "free list", which is clearly per cpu. Hang on! Isnt that
> actually a per CPU queue? Which SLUB has not, we are told? The "U" in
> SLUB. How on earth can an allocator in 2007 claim to have no queuing
> (which
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> It's a kmap_atomic() debugging patch which I wrote ages ago and which
> Ingo sucked into his tree. I don't _think_ this warning is present in
> your tree at all.
>
> http://lkml.org/lkml/2007/11/29/157 is where it starts.
>
> I had a lengthy
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> the SLUB concept is proudly outlined in init/Kconfig:
>
> config SLUB
> bool "SLUB (Unqueued Allocator)"
> help
> SLUB is a slab allocator that minimizes cache line usage
> instead of managing queues of cached objects (SLAB
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> > But I don't think we need to do anything for 2.6.24..
>
> Good. Although we should perhaps look at that reported performance
> problem with SLUB. It looks like SLUB will do a memclear() for the
> area twice (first for the whole page, then for
On Sat, Dec 08, 2007 at 09:54:06AM -0800, Linus Torvalds wrote:
>
> On Sat, 8 Dec 2007, Linus Torvalds wrote:
> >
> > But I'll apply it anyway, because it looks "obviously correct" from the
> > standpoint that the _other_ slob user already clears the end result
> > explicitly later on, and
On Sat, 8 Dec 2007, Andrew Morton wrote:
>
> > So which warning is it that triggers the bogus error?
>
> It's a kmap_atomic() debugging patch which I wrote ages ago and which Ingo
> sucked into his tree. I don't _think_ this warning is present in your tree
> at all.
Ok, that explains it.
On Sat, 8 Dec 2007, Linus Torvalds wrote:
>
> But I'll apply it anyway, because it looks "obviously correct" from the
> standpoint that the _other_ slob user already clears the end result
> explicitly later on, and we simply should never pass down __GFP_ZERO to
> the actual page allocator.
On Sat, 8 Dec 2007, Matt Mackall wrote:
>
> Avoid calling page allocator with __GFP_ZERO, as we might be in atomic
> context and this will make things unhappy on highmem systems. Instead,
> manually zero allocations from the page allocator.
I think this is fine, but didn't we fix the warning
On Sat, Dec 08, 2007 at 10:30:39AM +0100, Ingo Molnar wrote:
>
> * Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
>
> > Subject : tipc_init(), WARNING: at arch/x86/mm/highmem_32.c:52
> > kmap_atomic_prot()
> > Submitter : Ingo Molnar <[EMAIL PROTECTED]>
> > References :
On Sat, 8 Dec 2007 10:30:39 +0100 Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
>
> > Subject : tipc_init(), WARNING: at arch/x86/mm/highmem_32.c:52
> > kmap_atomic_prot()
> > Submitter : Ingo Molnar <[EMAIL PROTECTED]>
> > References
* Rafael J. Wysocki <[EMAIL PROTECTED]> wrote:
> Subject : tipc_init(), WARNING: at arch/x86/mm/highmem_32.c:52
> kmap_atomic_prot()
> Submitter : Ingo Molnar <[EMAIL PROTECTED]>
> References: http://lkml.org/lkml/2007/11/29/157