On Mon, 2007-10-01 at 14:30 -0700, Andrew Morton wrote:
On Mon, 1 Oct 2007 13:55:29 -0700 (PDT)
Christoph Lameter [EMAIL PROTECTED] wrote:
On Sat, 29 Sep 2007, Andrew Morton wrote:
atomic allocations. And with SLUB using higher order pages, atomic !0
order allocations will be very
On Tuesday 02 October 2007 07:01, Christoph Lameter wrote:
On Sat, 29 Sep 2007, Peter Zijlstra wrote:
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
Really? That means we can no longer even allocate stacks for forking.
I think I'm running with 4k stacks...
4k stacks will
On Fri, 28 Sep 2007, Nick Piggin wrote:
I thought it was slower. Have you fixed the performance regression?
(OK, I read further down that you are still working on it, but it's not
confirmed yet...)
The problem is the weird way Intel handles testing and communication.
Every 3-6 months or so they
On Sat, 29 Sep 2007, Andrew Morton wrote:
atomic allocations. And with SLUB using higher order pages, atomic !0
order allocations will be very very common.
Oh OK.
I thought we'd already fixed slub so that it didn't do that. Maybe that
fix is in -mm but I don't think so.
Trying to
On Fri, 28 Sep 2007, Mel Gorman wrote:
Minimally, SLUB by default should continue to use order-0 pages. Peter has
managed to bust order-1 pages with mem=128MB. Admittedly, it was a really
hostile workload but the point remains. It was artificially worked around
with min_free_kbytes (value set
On Sat, 29 Sep 2007, Peter Zijlstra wrote:
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
Really? That means we can no longer even allocate stacks for forking.
I think I'm running with 4k stacks...
4k stacks will never fly on an SGI x86_64 NUMA configuration given the
On Mon, 1 Oct 2007 13:55:29 -0700 (PDT)
Christoph Lameter [EMAIL PROTECTED] wrote:
On Sat, 29 Sep 2007, Andrew Morton wrote:
atomic allocations. And with SLUB using higher order pages, atomic !0
order allocations will be very very common.
Oh OK.
I thought we'd already fixed
On Mon, 1 Oct 2007, Andrew Morton wrote:
Do slab and slub use the same underlying page size for each slab?
SLAB cannot pack objects as densely as SLUB, and they use different
algorithms to choose the order. Thus the number of objects per slab
may vary between SLAB and SLUB and therefore
On Mon, 1 Oct 2007 14:38:55 -0700 (PDT)
Christoph Lameter [EMAIL PROTECTED] wrote:
On Mon, 1 Oct 2007, Andrew Morton wrote:
Do slab and slub use the same underlying page size for each slab?
SLAB cannot pack objects as densely as SLUB, and they use different
algorithms to choose the
On Mon, 1 Oct 2007, Andrew Morton wrote:
Ah. So the already-dropped
slub-exploit-page-mobility-to-increase-allocation-order.patch was the
culprit?
Yes. Without that patch, SLUB no longer takes special action if antifrag
is around.
On Sunday 30 September 2007 05:20, Andrew Morton wrote:
On Sat, 29 Sep 2007 06:19:33 +1000 Nick Piggin [EMAIL PROTECTED]
wrote:
On Saturday 29 September 2007 19:27, Andrew Morton wrote:
On Sat, 29 Sep 2007 11:14:02 +0200 Peter Zijlstra
[EMAIL PROTECTED]
wrote:
oom-killings, or
On Sun, 30 Sep 2007 05:09:28 +1000 Nick Piggin [EMAIL PROTECTED] wrote:
On Sunday 30 September 2007 05:20, Andrew Morton wrote:
On Sat, 29 Sep 2007 06:19:33 +1000 Nick Piggin [EMAIL PROTECTED]
wrote:
On Saturday 29 September 2007 19:27, Andrew Morton wrote:
On Sat, 29 Sep 2007
On Monday 01 October 2007 06:12, Andrew Morton wrote:
On Sun, 30 Sep 2007 05:09:28 +1000 Nick Piggin [EMAIL PROTECTED]
wrote:
On Sunday 30 September 2007 05:20, Andrew Morton wrote:
We can't run out of unfragmented memory for an order-2 GFP_KERNEL
allocation in this workload. We go and
On Fri, 28 Sep 2007 20:25:50 +0200 Peter Zijlstra [EMAIL PROTECTED] wrote:
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
start 2 processes that each mmap a separate 64M file, and which do
sequential writes on them. Start a 3rd process that does the same with
64M
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
Really? That means we can no longer even allocate stacks for forking.
I think I'm running with 4k stacks...
On Sat, 2007-09-29 at 01:13 -0700, Andrew Morton wrote:
On Fri, 28 Sep 2007 20:25:50 +0200 Peter Zijlstra [EMAIL PROTECTED] wrote:
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
start 2 processes that each mmap a separate 64M file, and which do
sequential writes
On Sat, 2007-09-29 at 10:47 +0200, Peter Zijlstra wrote:
Ah, right, that was the detail... all this lumpy reclaim is useless for
atomic allocations. And with SLUB using higher order pages, atomic !0
order allocations will be very very common.
One I can remember was:
On Sat, 29 Sep 2007 10:47:12 +0200 Peter Zijlstra [EMAIL PROTECTED] wrote:
On Sat, 2007-09-29 at 01:13 -0700, Andrew Morton wrote:
On Fri, 28 Sep 2007 20:25:50 +0200 Peter Zijlstra [EMAIL PROTECTED] wrote:
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
start 2
On Sat, 29 Sep 2007 10:53:41 +0200 Peter Zijlstra [EMAIL PROTECTED] wrote:
On Sat, 2007-09-29 at 10:47 +0200, Peter Zijlstra wrote:
Ah, right, that was the detail... all this lumpy reclaim is useless for
atomic allocations. And with SLUB using higher order pages, atomic !0
order
On Sat, 2007-09-29 at 02:01 -0700, Andrew Morton wrote:
On Sat, 29 Sep 2007 10:53:41 +0200 Peter Zijlstra [EMAIL PROTECTED] wrote:
On Sat, 2007-09-29 at 10:47 +0200, Peter Zijlstra wrote:
Ah, right, that was the detail... all this lumpy reclaim is useless for
atomic allocations.
On Sat, 29 Sep 2007 11:14:02 +0200 Peter Zijlstra [EMAIL PROTECTED] wrote:
oom-killings, or page allocation failures? The latter, one hopes.
Linux version 2.6.23-rc4-mm1-dirty ([EMAIL PROTECTED]) (gcc version 4.1.2
(Ubuntu 4.1.2-0ubuntu4)) #27 Tue Sep 18 15:40:35 CEST 2007
...
On Saturday 29 September 2007 19:27, Andrew Morton wrote:
On Sat, 29 Sep 2007 11:14:02 +0200 Peter Zijlstra [EMAIL PROTECTED]
wrote:
oom-killings, or page allocation failures? The latter, one hopes.
Linux version 2.6.23-rc4-mm1-dirty ([EMAIL PROTECTED]) (gcc version 4.1.2
(Ubuntu
On Saturday 29 September 2007 04:41, Christoph Lameter wrote:
On Fri, 28 Sep 2007, Peter Zijlstra wrote:
memory got massively fragmented, as anti-frag gets easily defeated.
setting min_free_kbytes to 12M does seem to solve it - it forces 2
max-order blocks to stay available, so we don't
On Sat, 29 Sep 2007 06:19:33 +1000 Nick Piggin [EMAIL PROTECTED] wrote:
On Saturday 29 September 2007 19:27, Andrew Morton wrote:
On Sat, 29 Sep 2007 11:14:02 +0200 Peter Zijlstra [EMAIL PROTECTED]
wrote:
oom-killings, or page allocation failures? The latter, one hopes.
Linux
On Wednesday 19 September 2007 13:36, Christoph Lameter wrote:
SLAB_VFALLBACK can be specified for selected slab caches. If fallback is
available then the conservative settings for higher order allocations are
overridden. We then request an order that can accommodate at minimum
100 objects. The
On Fri, 28 Sep 2007, Nick Piggin wrote:
On Wednesday 19 September 2007 13:36, Christoph Lameter wrote:
SLAB_VFALLBACK can be specified for selected slab caches. If fallback is
available then the conservative settings for higher order allocations are
overridden. We then request an order
On Fri, 2007-09-28 at 10:33 -0700, Christoph Lameter wrote:
Again, I have not seen any fallbacks to vmalloc in my testing. What we are
doing here is mainly to address your theoretical cases, which we have so
far never seen to be a problem, and to increase the reliability of
allocations of page
On Fri, 28 Sep 2007, Peter Zijlstra wrote:
On Fri, 2007-09-28 at 10:33 -0700, Christoph Lameter wrote:
Again, I have not seen any fallbacks to vmalloc in my testing. What we are
doing here is mainly to address your theoretical cases, which we have so
far never seen to be a problem, and
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
start 2 processes that each mmap a separate 64M file, and which do
sequential writes on them. Start a 3rd process that does the same with
64M anonymous.
wait for a while, and you'll see order=1 failures.
Really? That
On Fri, 28 Sep 2007, Peter Zijlstra wrote:
memory got massively fragmented, as anti-frag gets easily defeated.
setting min_free_kbytes to 12M does seem to solve it - it forces 2
max-order blocks to stay available, so we don't mix types. However, 12M
on 128M is rather a lot.
Yes, strict
On (28/09/07 20:25), Peter Zijlstra didst pronounce:
On Fri, 2007-09-28 at 11:20 -0700, Christoph Lameter wrote:
start 2 processes that each mmap a separate 64M file, and which do
sequential writes on them. Start a 3rd process that does the same with
64M anonymous.
wait for
On (28/09/07 10:33), Christoph Lameter didst pronounce:
On Fri, 28 Sep 2007, Nick Piggin wrote:
On Wednesday 19 September 2007 13:36, Christoph Lameter wrote:
SLAB_VFALLBACK can be specified for selected slab caches. If fallback is
available then the conservative settings for higher
On (28/09/07 11:41), Christoph Lameter didst pronounce:
On Fri, 28 Sep 2007, Peter Zijlstra wrote:
memory got massively fragmented, as anti-frag gets easily defeated.
setting min_free_kbytes to 12M does seem to solve it - it forces 2
max-order blocks to stay available, so we don't mix
On Saturday 29 September 2007 03:33, Christoph Lameter wrote:
On Fri, 28 Sep 2007, Nick Piggin wrote:
On Wednesday 19 September 2007 13:36, Christoph Lameter wrote:
SLAB_VFALLBACK can be specified for selected slab caches. If fallback
is available then the conservative settings for higher
SLAB_VFALLBACK can be specified for selected slab caches. If fallback is
available then the conservative settings for higher order allocations are
overridden. We then request an order that can accommodate at minimum
100 objects. The size of an individual slab allocation is allowed to reach
up to