On Wednesday 10 October 2007 11:26, Christoph Lameter wrote:
> On Tue, 9 Oct 2007, Nick Piggin wrote:
> > > We already use 32k stacks on IA64. So the memory argument fails there.
> >
> > I'm talking about generic code.
>
> The stack size is set in arch code not in generic code.

Generic code must assume a 4K

On Tue, 9 Oct 2007, Nick Piggin wrote:
> > We already use 32k stacks on IA64. So the memory argument fails there.
>
> I'm talking about generic code.

The stack size is set in arch code not in generic code.

> > > The solution has until now always been to fix the problems so they don't
> > > use so much
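
For reference, "set in arch code" means definitions like the following. This
is a from-memory sketch of the era's x86_64 headers, not verbatim source;
treat file names and exact spellings as approximate:

        /* include/asm-x86_64/thread_info.h-style sketch:
         * an 8k stack means an order-1 page allocation */
        #define THREAD_ORDER 1
        #define THREAD_SIZE  (PAGE_SIZE << THREAD_ORDER)  /* 8k with 4k pages */

        /* generic fork code just allocates whatever order the arch picked */
        #define alloc_thread_info(tsk) \
                ((struct thread_info *)__get_free_pages(GFP_KERNEL, THREAD_ORDER))

IA64 likewise picks its own (32k) stack size in its arch headers, which is why
the argument above keeps splitting along arch-specific vs. generic lines.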

On Wednesday 10 October 2007 04:39, Christoph Lameter wrote:
> On Mon, 8 Oct 2007, Nick Piggin wrote:
> > The tight memory restrictions on stack usage do not come about because
> > of the difficulty in increasing the stack size :) It is because we want
> > to keep stack sizes small!
> >
> > Increasing the stack size by 4K uses another 4MB of memory for every 1000
> > threads
>
> We already use 32k stacks on IA64. So the memory argument fails there.

I'm talking about generic code.

On Mon, 8 Oct 2007, Nick Piggin wrote:
> The tight memory restrictions on stack usage do not come about because
> of the difficulty in increasing the stack size :) It is because we want to
> keep stack sizes small!
>
> Increasing the stack size by 4K uses another 4MB of memory for every 1000
> threads

We already use 32k stacks on IA64. So the memory argument fails there.

On Tuesday 09 October 2007 03:36, Christoph Lameter wrote:
> On Sun, 7 Oct 2007, Nick Piggin wrote:
> > > The problem can become non-rare on special low memory machines doing
> > > wild swapping things though.
> >
> > But only your huge systems will be using huge stacks?
>
> I have no idea who else would be using such a feature. Relaxing the tight
> memory restrictions on

The tight memory restrictions on stack usage do not come about because
of the difficulty in increasing the stack size :) It is because we want to
keep stack sizes small!

Increasing the stack size by 4K uses another 4MB of memory for every 1000
threads
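
The arithmetic behind that figure is just per-thread overhead times thread
count; a trivial userspace check (illustrative only):

        #include <stdio.h>

        int main(void)
        {
                unsigned long threads = 1000;
                unsigned long extra_per_stack = 4096;   /* one more 4K page each */

                /* every thread owns its own kernel stack, so cost is linear */
                printf("%.1f MB extra\n", threads * extra_per_stack / 1e6);
                return 0;       /* prints "4.1 MB extra" */
        }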

On Sun, 7 Oct 2007, Nick Piggin wrote:
> > The problem can become non-rare on special low memory machines doing wild
> > swapping things though.
>
> But only your huge systems will be using huge stacks?

I have no idea who else would be using such a feature. Relaxing the tight
memory restrictions on

On Friday 05 October 2007 07:20, Christoph Lameter wrote:
> On Thu, 4 Oct 2007, Rik van Riel wrote:
> > > Well we can now address the rarity. That is the whole point of the
> > > patchset.
> >
> > Introducing complexity to fight a very rare problem with a good
> > fallback (refusing to fork more tasks, as well as lumpy reclaim)
> > somehow does not seem like a good

Rik van Riel wrote:

On Thu, 4 Oct 2007 12:20:50 -0700 (PDT)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Thu, 4 Oct 2007, Andi Kleen wrote:
> > We've known for ages that it is possible. But it has been always so
> > rare that it was ignored.
>
> Well we can now address the rarity. That is the whole point of the
> patchset.

Introducing complexity to fight a very rare problem with a good
fallback (refusing to fork more tasks, as well as lumpy reclaim)
somehow does not seem like a good

On Thu, 4 Oct 2007, Rik van Riel wrote:
> > Well we can now address the rarity. That is the whole point of the
> > patchset.
>
> Introducing complexity to fight a very rare problem with a good
> fallback (refusing to fork more tasks, as well as lumpy reclaim)
> somehow does not seem like a good

On Thu, 4 Oct 2007, Andi Kleen wrote:
> We've known for ages that it is possible. But it has been always so rare
> that it was ignored.

Well we can now address the rarity. That is the whole point of the
patchset.

> Is there any evidence this is more common now than it used to be?

It will be more

On Thu, 4 Oct 2007, Andi Kleen wrote:
> > The order-1 allocation failures were GFP_ATOMIC, because SLUB uses !0
> > order for everything.
>
> slub is wrong then. Can it be fixed?

SLUB in mm kernels was using higher order allocations for some slabs
for the last 6 months or so. Not true for upstream.

On Thu, 2007-10-04 at 14:25 +0200, Andi Kleen wrote:
> > The order-1 allocation failures were GFP_ATOMIC, because SLUB uses !0
> > order for everything.
>
> slub is wrong then. Can it be fixed?

I think mainline slub doesn't do this, just -mm.
See DEFAULT_MAX_ORDER in mm/slub.c

> > Kernel stack allocation is GFP_KERNEL I presume.
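
For readers without an -mm tree handy, the knob being pointed at looks
roughly like this. A from-memory sketch, not verbatim from any tree, and the
default value is precisely what differed between mainline and -mm:

        /* mm/slub.c (sketch) */
        #define DEFAULT_MAX_ORDER 1     /* -mm experimented with higher orders */

        static int slub_max_order = DEFAULT_MAX_ORDER;

        /* boot-time override, e.g. slub_max_order=0 to force order-0 pages */
        static int __init setup_slub_max_order(char *str)
        {
                get_option(&str, &slub_max_order);
                return 1;
        }
        __setup("slub_max_order=", setup_slub_max_order);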

> The order-1 allocation failures were GFP_ATOMIC, because SLUB uses !0
> order for everything.

slub is wrong then. Can it be fixed?

> Kernel stack allocation is GFP_KERNEL I presume.

Of course.

> Also, I use 4k stacks on all my machines.

You don't have any x86-64 machines?

-Andi
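
The GFP distinction is the crux here: a GFP_ATOMIC caller may not sleep, so
the allocator cannot reclaim or wait on its behalf, while a GFP_KERNEL caller
can ride out memory pressure. A minimal illustrative contrast (hypothetical
call sites, not code from the thread):

        #include <linux/gfp.h>
        #include <linux/slab.h>

        static void *grab_from_irq_context(void)
        {
                /* no sleeping allowed: no reclaim, no retry loops, so an
                 * order-1 (8k) request fails as soon as memory fragments */
                return kmalloc(8192, GFP_ATOMIC);
        }

        static unsigned long grab_from_process_context(void)
        {
                /* may sleep: the allocator can reclaim and retry, so this
                 * fails only in genuinely dire conditions */
                return __get_free_pages(GFP_KERNEL, 1); /* order 1 */
        }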

On Thu, 2007-10-04 at 13:56 +0200, Andi Kleen wrote:
> On Thursday 04 October 2007 05:59:48 Christoph Lameter wrote:
> > Peter Zijlstra has recently demonstrated that we can have order-1 allocation
> > failures under memory pressure with small memory configurations. The
> > x86_64 stack has a size of 8k and thus requires an order-1 allocation.
>
> We've known for ages that it is possible. But it has been always so rare
> that it was ignored.

The order-1 allocation failures were GFP_ATOMIC, because SLUB uses !0
order for everything.

Kernel stack allocation is GFP_KERNEL I presume.

Also, I use 4k stacks on all my machines.

On Thursday 04 October 2007 05:59:48 Christoph Lameter wrote:
> Peter Zijlstra has recently demonstrated that we can have order-1 allocation
> failures under memory pressure with small memory configurations. The
> x86_64 stack has a size of 8k and thus requires an order-1 allocation.

We've known for ages that it is possible. But it has been always so rare
that it was ignored.

Is there any evidence this is more common now than it used to be?

Peter Zijlstra has recently demonstrated that we can have order-1 allocation
failures under memory pressure with small memory configurations. The
x86_64 stack has a size of 8k and thus requires an order-1 allocation.

This patch adds a virtual fallback capability for the stack. The system may
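
In sketch form, the "virtual fallback" idea is: try for physically contiguous
pages first, and only when fragmentation defeats the order-1 request, fall
back to a virtually contiguous (vmalloc) stack. A minimal sketch assuming a
hypothetical helper name; this is not the patchset's actual code:

        #include <linux/gfp.h>
        #include <linux/vmalloc.h>

        static void *alloc_task_stack(void)
        {
                /* fast path: physically contiguous, order-1 (8k) on x86_64 */
                void *stack = (void *)__get_free_pages(GFP_KERNEL, THREAD_ORDER);

                if (stack)
                        return stack;

                /* slow path: virtually contiguous; needs only order-0 pages
                 * plus vmalloc address space, so it survives fragmentation */
                return vmalloc(THREAD_SIZE);
        }

A real implementation must also remember which allocator provided the stack
so the free path matches, and accept the extra TLB pressure of vmalloc
mappings, which is much of what the rest of the thread argues about.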