On Sun, 12 Aug 2007, Daniel Phillips wrote:
> > Because we get to the code of interest when we have no memory on the
> > buddy free lists...
>
> Ah wait, that statement is incorrect and may well be the crux of your
> misunderstanding. Buddy free lists are not exhausted until the entire
> memal
On Friday 10 August 2007 10:46, Christoph Lameter wrote:
> On Fri, 10 Aug 2007, Daniel Phillips wrote:
> > It is quite clear what is in your patch. Instead of just grabbing
> > a page off the buddy free lists in a critical allocation situation
> > you go invoke shrink_caches. Why oh why? All the
On 8/10/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> The idea of adding code to deal with "I have no memory" situations
> in a kernel that is based on having as much memory as possible in use
> at all times is plainly the wrong approach.
No. It is you who have read the patches wrongly, because w
On Fri, 10 Aug 2007, Daniel Phillips wrote:
> It is quite clear what is in your patch. Instead of just grabbing a
> page off the buddy free lists in a critical allocation situation you
> go invoke shrink_caches. Why oh why? All the memory needed to get
Because we get to the code of interest when we have no memory on the
buddy free lists...
On 8/9/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > If you believe that the deadlock problems we address here can be
> > better fixed by making reclaim more intelligent then please post a
> > patch and we will test it. I am highly skeptical, but the proof is in
> > the patch.
>
> Then plea
On Thu, 9 Aug 2007, Daniel Phillips wrote:
> If you believe that the deadlock problems we address here can be
> better fixed by making reclaim more intelligent then please post a
> patch and we will test it. I am highly skeptical, but the proof is in
> the patch.
Then please test the patch that
On 8/9/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> The allocation problems that this patch addresses can be fixed by
> making reclaim more intelligent.
If you believe that the deadlock problems we address here can be
better fixed by making reclaim more intelligent then please post a
patch and we will test it. I am highly skeptical, but the proof is in
the patch.
On Thu, 9 Aug 2007, Daniel Phillips wrote:
> You can fix reclaim as much as you want and the basic deadlock will
> still not go away. When you finally do get to writing something out,
> memory consumers in the writeout path are going to cause problems,
> which this patch set fixes.
We currently
On 8/9/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Thu, 9 Aug 2007, Daniel Phillips wrote:
> > On 8/8/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > > On Wed, 8 Aug 2007, Daniel Phillips wrote:
> > > Maybe we need to kill PF_MEMALLOC
> > Shrink_caches needs to be able to recurse
On Thu, 9 Aug 2007, Daniel Phillips wrote:
> On 8/8/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > On Wed, 8 Aug 2007, Daniel Phillips wrote:
> > Maybe we need to kill PF_MEMALLOC
>
> Shrink_caches needs to be able to recurse into filesystems at least,
> and for the duration of the recu
On 8/8/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Wed, 8 Aug 2007, Daniel Phillips wrote:
> Maybe we need to kill PF_MEMALLOC
Shrink_caches needs to be able to recurse into filesystems at least,
and for the duration of the recursion the filesystem must have
privileged access to rese
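The sentence above is cut off, but the discipline it points at -- mark the task for privileged reserve access for the duration of the recursion, then put the previous state back -- is naturally expressed as a save/restore pair. The following is a user-space sketch with invented names; the task struct and helpers here are illustrative stand-ins, not kernel API:

```c
#include <assert.h>

#define PF_MEMALLOC 0x1		/* stand-in for the task flag */

struct task {
	unsigned int flags;
};

/* Grant reserve access; return the previous flag state so that
 * nested sections compose correctly. */
static unsigned int memalloc_save(struct task *t)
{
	unsigned int old = t->flags & PF_MEMALLOC;

	t->flags |= PF_MEMALLOC;
	return old;
}

/* Restore the saved state; an inner restore must not clear the
 * flag while an outer section is still running. */
static void memalloc_restore(struct task *t, unsigned int old)
{
	t->flags = (t->flags & ~PF_MEMALLOC) | old;
}
```

The point of returning the old state rather than unconditionally clearing the flag is exactly the recursion case discussed above: a nested save/restore pair leaves the outer section's privilege intact.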
On Wed, 8 Aug 2007, Daniel Phillips wrote:
> 1. If the allocation can be satisfied in the usual way, do that.
> 2. Otherwise, if the GFP flags do not include __GFP_MEMALLOC and
> PF_MEMALLOC is not set, fail the allocation
> 3. Otherwise, if the memcache's reserve quota is not reached,
> sat
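The snippet cuts off at step 3, but the decision chain it describes can be modeled as a small user-space simulation. All struct, field, and flag names below are invented stand-ins, not kernel identifiers, and step 3 is completed here on the assumption that it satisfies the allocation from the reserve while the quota lasts:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-in for the gfp mask bit. */
#define GFP_MEMALLOC 0x1

struct memcache {
	int free_pages;     /* pages available the usual way */
	int reserve_quota;  /* pages the emergency reserve may hand out */
	int reserve_used;   /* pages currently taken from the reserve */
};

/* Sketch of the three-step policy; returns true if a page is granted. */
static bool try_alloc(struct memcache *mc, int gfp_flags, bool pf_memalloc)
{
	/* 1. Satisfy the allocation in the usual way if possible. */
	if (mc->free_pages > 0) {
		mc->free_pages--;
		return true;
	}
	/* 2. Without __GFP_MEMALLOC or PF_MEMALLOC, fail outright. */
	if (!(gfp_flags & GFP_MEMALLOC) && !pf_memalloc)
		return false;
	/* 3. Otherwise draw from the reserve while the quota lasts. */
	if (mc->reserve_used < mc->reserve_quota) {
		mc->reserve_used++;
		return true;
	}
	return false;
}
```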
On Wed, 8 Aug 2007, Peter Zijlstra wrote:
> Christoph, does this all explain the situation?
Sort of. I am still very sceptical that this will work reliably. I'd
rather look at alternate solutions like fixing reclaim. Could you have a
look at Andrew's and my comments on the slub patch?
On 8/7/07, Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > > AFAICT: This patchset is not throttling processes but failing
> > > allocations.
> >
> > Failing allocations? Where do you see that? As far as I can see,
> > Peter's patch set allows allocations to fail exactly where the user has
> > a
On Tue, 2007-08-07 at 15:18 -0700, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Daniel Phillips wrote:
>
> > > AFAICT: This patchset is not throttling processes but failing
> > > allocations.
> >
> > Failing allocations? Where do you see that? As far as I can see,
> > Peter's patch set allow
On Mon, 6 Aug 2007, Daniel Phillips wrote:
> > AFAICT: This patchset is not throttling processes but failing
> > allocations.
>
> Failing allocations? Where do you see that? As far as I can see,
> Peter's patch set allows allocations to fail exactly where the user has
> always specified they
On Monday 06 August 2007 16:14, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Daniel Phillips wrote:
> > Correct. That is what the throttling part of these patches is
> > about.
>
> Where are those patches?
Here is one user:
http://zumastor.googlecode.com/svn/trunk/ddsnap/kernel/dm-ddsnap.c
On Monday 06 August 2007 13:27, Andrew Morton wrote:
> On Mon, 6 Aug 2007 13:19:26 -0700 (PDT) Christoph Lameter wrote:
> > The solution may be as simple as configuring the reserves right and
> > avoid the unbounded memory allocations. That is possible if one
> > would make sure that the network layer triggers reclaim once in a
> > while.
On Mon, 6 Aug 2007, Daniel Phillips wrote:
> Correct. That is what the throttling part of these patches is about.
Where are those patches?
> In order to fix the vm writeout deadlock problem properly, two things
> are necessary:
>
> 1) Throttle the vm writeout path to use a bounded amount
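Point 1) asks for throttling the vm writeout path to a bound. One common way to do that is to cap the number of in-flight writeout requests; the sketch below models this with a plain counter, where a real implementation would sleep on a semaphore rather than report failure. All names are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented illustration: bound writeout by capping in-flight requests,
 * which in turn bounds the memory the writeout path can consume. */
struct throttle {
	int in_flight;	/* requests currently submitted */
	int limit;	/* hard bound on concurrent requests */
};

/* Called before submitting I/O; false means the caller must wait. */
static bool throttle_down(struct throttle *t)
{
	if (t->in_flight >= t->limit)
		return false;
	t->in_flight++;
	return true;
}

/* Called from the I/O completion path, freeing up a slot. */
static void throttle_up(struct throttle *t)
{
	t->in_flight--;
}
```

Because each in-flight request uses at most a known amount of memory, the reserve only has to be sized for `limit` concurrent requests.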
On Monday 06 August 2007 14:05, Christoph Lameter wrote:
> > > That is possible if one
> > > would make sure that the network layer triggers reclaim once in a
> > > while.
> >
> > This does not make sense, we cannot reclaim from reclaim.
>
> But we should limit the amounts of allocation we do while
(What Peter already wrote, but in different words)
On Monday 06 August 2007 13:19, Christoph Lameter wrote:
> The solution may be as simple as configuring the reserves right and
> avoid the unbounded memory allocations.
Exactly. That is what this patch set is about. This is the part that
provi
On Mon, 6 Aug 2007, Peter Zijlstra wrote:
> > The solution may be as simple as configuring the reserves right and
> > avoid the unbounded memory allocations.
>
> Which is what the next series of patches will be doing. Please do look
> in detail at these networked swap patches I've been posting
On Mon, 6 Aug 2007 13:19:26 -0700 (PDT) Christoph Lameter <[EMAIL PROTECTED]>
wrote:
> On Mon, 6 Aug 2007, Matt Mackall wrote:
>
> > > > Because a block device may have deadlocked here, leaving the system
> > > > unable to clean dirty memory, or unable to load executables over the
> > > > network for example.
On Mon, 2007-08-06 at 13:19 -0700, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Matt Mackall wrote:
>
> > > > Because a block device may have deadlocked here, leaving the system
> > > > unable to clean dirty memory, or unable to load executables over the
> > > > network for example.
> > >
> >
On Mon, 6 Aug 2007, Matt Mackall wrote:
> > > Because a block device may have deadlocked here, leaving the system
> > > unable to clean dirty memory, or unable to load executables over the
> > > network for example.
> >
> > So this is a locking problem that has not been taken care of?
>
> No.
On Mon, 6 Aug 2007, Peter Zijlstra wrote:
> The functionality this is aimed at is swap over network, and I doubt
> you'll be enabling that on these machines.
So add #ifdefs around it?
On Mon, Aug 06, 2007 at 11:51:45AM -0700, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Daniel Phillips wrote:
>
> > On Monday 06 August 2007 11:42, Christoph Lameter wrote:
> > > On Mon, 6 Aug 2007, Daniel Phillips wrote:
> > > > Currently your system likely would have died here, so ending up
>
On Mon, 2007-08-06 at 12:11 -0700, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Peter Zijlstra wrote:
>
> > > > Shudder. That can just be a disaster for NUMA. Both performance wise
> > > > and logic wise. One cpuset being low on memory should not affect
> > > > applications in other cpusets.
> >
On Monday 06 August 2007 11:51, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Daniel Phillips wrote:
> > On Monday 06 August 2007 11:42, Christoph Lameter wrote:
> > > On Mon, 6 Aug 2007, Daniel Phillips wrote:
> > > > Currently your system likely would have died here, so ending up
> > > > with a reserve page temporarily on the wrong node is already an
> > > > improvement.
On Mon, 6 Aug 2007, Peter Zijlstra wrote:
> > > Shudder. That can just be a disaster for NUMA. Both performance wise
> > > and logic wise. One cpuset being low on memory should not affect
> > > applications in other cpusets.
>
> Do note that these are only PF_MEMALLOC allocations that will break
On Mon, 6 Aug 2007, Daniel Phillips wrote:
> On Monday 06 August 2007 11:42, Christoph Lameter wrote:
> > On Mon, 6 Aug 2007, Daniel Phillips wrote:
> > > Currently your system likely would have died here, so ending up
> > > with a reserve page temporarily on the wrong node is already an
> > > improvement.
On Monday 06 August 2007 11:42, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Daniel Phillips wrote:
> > Currently your system likely would have died here, so ending up
> > with a reserve page temporarily on the wrong node is already an
> > improvement.
>
> The system would have died? Why?
Becaus
On Monday 06 August 2007 11:31, Peter Zijlstra wrote:
> > I agree that the reserve pool should be per-node in the end, but I
> > do not think that serves the interest of simplifying the initial
> > patch set. How about a numa performance patch that adds onto the
> > end of Peter's series?
>
> Trou
On Mon, 6 Aug 2007, Daniel Phillips wrote:
> Currently your system likely would have died here, so ending up with a
> reserve page temporarily on the wrong node is already an improvement.
The system would have died? Why? The application in the cpuset that
ran out of memory should have died not
On Mon, 2007-08-06 at 11:21 -0700, Daniel Phillips wrote:
> On Monday 06 August 2007 11:11, Christoph Lameter wrote:
> > On Mon, 6 Aug 2007, Peter Zijlstra wrote:
> > > Change ALLOC_NO_WATERMARK page allocation such that dipping into
> > > the reserves becomes a system wide event.
> >
> > Shudder.
On Monday 06 August 2007 11:11, Christoph Lameter wrote:
> On Mon, 6 Aug 2007, Peter Zijlstra wrote:
> > Change ALLOC_NO_WATERMARK page allocation such that dipping into
> > the reserves becomes a system wide event.
>
> Shudder. That can just be a disaster for NUMA. Both performance wise
> and logic wise. One cpuset being low on memory should not affect
> applications in other cpusets.
On Mon, 6 Aug 2007, Peter Zijlstra wrote:
> Change ALLOC_NO_WATERMARK page allocation such that dipping into the reserves
> becomes a system wide event.
Shudder. That can just be a disaster for NUMA. Both performance wise and
logic wise. One cpuset being low on memory should not affect applications
in other cpusets.
Change ALLOC_NO_WATERMARK page allocation such that dipping into the reserves
becomes a system wide event.
This has the advantage that logic dealing with reserve pages need not be node
aware (when we're this low on memory speed is usually not an issue).
Signed-off-by: Peter Zijlstra <[EMAIL PROTECTED]>
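As a toy model of the tradeoff under debate here -- per-node watermarks for ordinary allocations versus a system-wide dip into reserves for the emergency path -- consider the sketch below. The arrays and function names are invented for illustration and bear no relation to the kernel's zone structures:

```c
#include <assert.h>

#define NR_NODES 2

/* Invented per-node pools: pages above the watermark vs. emergency pages. */
static int node_free[NR_NODES];
static int node_reserve[NR_NODES];

/* Ordinary path: node-local only; returns the node used or -1. */
static int alloc_local(int node)
{
	if (node_free[node] > 0) {
		node_free[node]--;
		return node;
	}
	return -1;
}

/* Emergency path: ignore placement and scan every node's reserve,
 * which is what makes the dip a system-wide event. */
static int alloc_no_watermark(int preferred)
{
	int node = alloc_local(preferred);
	int n;

	if (node >= 0)
		return node;
	for (n = 0; n < NR_NODES; n++) {
		if (node_reserve[n] > 0) {
			node_reserve[n]--;
			return n;	/* may be the "wrong" node for NUMA */
		}
	}
	return -1;
}
```

This is exactly the simplification the changelog claims and the objection targets: the emergency logic needs no node awareness, at the cost of one starved cpuset being able to drain reserves that other nodes were counting on.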