On Friday 26 October 2007 10:44, Peter wrote:
> > ...the way the watermarks work they will be evenly distributed
> > over the appropriate zones. ..
Hi Peter,
The term is "highwater mark" not "high watermark". A watermark is an
anti-counterfeiting device printed on paper money. "Highwater" is
On Friday 26 October 2007 10:55, Christoph Lameter wrote:
> On Fri, 26 Oct 2007, Pavel Machek wrote:
> > > And, _no_, it does not necessarily mean global serialisation. By
> > > simply saying there must be N pages available I say nothing about
> > > on which node they should be available, and the
On Fri, 26 Oct 2007, Pavel Machek wrote:
> > And, _no_, it does not necessarily mean global serialisation. By simply
> > saying there must be N pages available I say nothing about on which node
> > they should be available, and the way the watermarks work they will be
> > evenly distributed over
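The "watermarks … evenly distributed over the appropriate zones" remark refers to splitting one global figure (min_free_kbytes) into per-zone minimums in proportion to zone size. A toy sketch of that split (zone names and page counts are made-up illustration data, not the kernel's actual code):

```python
# Split one global page reserve across zones proportionally to zone size,
# mimicking how min_free_kbytes becomes per-zone minimum watermarks.
# Zone names and sizes below are hypothetical.

def distribute_reserve(zones, reserve_pages):
    """Return each zone's share of reserve_pages, proportional to its size."""
    total = sum(zones.values())
    return {name: reserve_pages * size // total for name, size in zones.items()}

zones = {"DMA": 4096, "Normal": 225280, "HighMem": 32768}  # pages, made up
shares = distribute_reserve(zones, reserve_pages=1024)
for name, share in shares.items():
    print(name, share)
```

A larger zone gets a proportionally larger slice, so no single zone carries the whole reserve.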
Hi!
> > > or
> > >
> > > - have a global reserve and selectively serve sockets
> > > (what I've been doing)
> >
> > That is a scalability problem on large systems! Global means global
> > serialization, cacheline bouncing and possibly livelocks. If we get into
> > this global shortage
On Tue, 18 Sep 2007 09:56:06 -0700 Daniel Phillips <[EMAIL PROTECTED]>
wrote:
> On Tuesday 18 September 2007 02:58, Peter Zijlstra wrote:
> > On Mon, 17 Sep 2007 22:11:25 -0700 Daniel Phillips wrote:
> > > > I've been using Avi Kivity's patch from some time ago:
> > > >
On Tuesday 18 September 2007 02:58, Peter Zijlstra wrote:
> On Mon, 17 Sep 2007 22:11:25 -0700 Daniel Phillips wrote:
> > > I've been using Avi Kivity's patch from some time ago:
> > > http://lkml.org/lkml/2004/7/26/68
> >
> > Yes. Ddsnap includes a bit of code almost identical to that, which
> >
On Mon, 17 Sep 2007 22:11:25 -0700 Daniel Phillips <[EMAIL PROTECTED]>
wrote:
> > I've been using Avi Kivity's patch from some time ago:
> > http://lkml.org/lkml/2004/7/26/68
>
> Yes. Ddsnap includes a bit of code almost identical to that, which we wrote
> independently. Seems wild and crazy
On Mon, 17 Sep 2007 23:27:25 -0400 "Mike Snitzer" <[EMAIL PROTECTED]>
wrote:
> I'm going to try adding all the things I've learned into the mix all
> at once; including both of peterz's patchsets. Peter, do you have a
> git repo or website/ftp site for your latest per-bdi and network
> deadlock
On Mon, Sep 17, 2007 at 10:11:25PM -0700, Daniel Phillips wrote:
> On Monday 17 September 2007 20:27, Mike Snitzer wrote:
> > > - Statically prove bounded memory use of all code in the writeout
> > > path.
> > >
> > > - Implement any special measures required to be able to make such
> > >
(Reposted for completeness. Previously rejected by vger due to
accidental send as html mail. CC's except for Mike and vger deleted)
On Monday 17 September 2007 20:27, Mike Snitzer wrote:
> To give you context for where I'm coming from; I'm looking to get NBD
> to survive the mke2fs hell I
On 9/17/07, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> On Friday 07 September 2007 22:12, Mike Snitzer wrote:
> > Can you be specific about which changes to existing mainline code
> > were needed to make recursive reclaim "work" in your tests (albeit
> > less ideally than peterz's patchset in
On Friday 07 September 2007 22:12, Mike Snitzer wrote:
> Can you be specific about which changes to existing mainline code
> were needed to make recursive reclaim "work" in your tests (albeit
> less ideally than peterz's patchset in your view)?
Sorry, I was incommunicado out on the high seas all
On Thu, 2007-09-13 at 11:32 -0700, Christoph Lameter wrote:
> On Thu, 13 Sep 2007, Peter Zijlstra wrote:
>
> >
> > > > Every user of memory relies on the VM, and we only get into trouble if
> > > > the VM in turn relies on one of these users. Traditionally that has only
> > > > been the block
On Thu, 13 Sep 2007, Peter Zijlstra wrote:
>
> > > Every user of memory relies on the VM, and we only get into trouble if
> > > the VM in turn relies on one of these users. Traditionally that has only
> > > been the block layer, and we special cased that using mempools and
> > > PF_MEMALLOC.
> >
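The mempool mechanism mentioned above keeps a pre-allocated reserve of elements so the block layer can make forward progress even when normal allocation fails. A miniature illustrative model of the idea (not the kernel's mempool_t API):

```python
class MiniMempool:
    """Toy model of a mempool: hold min_nr pre-allocated elements so an
    allocation can still succeed when the normal allocator fails."""

    def __init__(self, min_nr, alloc_fn):
        self.alloc_fn = alloc_fn
        self.reserve = [alloc_fn() for _ in range(min_nr)]  # pre-allocate

    def alloc(self):
        try:
            return self.alloc_fn()      # try a normal allocation first
        except MemoryError:
            return self.reserve.pop()   # fall back to the reserve
            # (the real mempool sleeps until free() when the reserve is empty)

    def free(self, elem):
        self.reserve.append(elem)       # returned elements refill the reserve

def failing_alloc():
    raise MemoryError                   # simulate an out-of-memory condition

pool = MiniMempool(2, lambda: bytearray(4096))
pool.alloc_fn = failing_alloc           # from now on every normal alloc fails
buf = pool.alloc()                      # still succeeds, from the reserve
print(len(pool.reserve))
```

This is exactly the "special case" property being discussed: the reserve only helps a bounded, known consumer (here, whoever holds the pool), which is why it worked for the block layer but not for networking's unbounded receive path.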
On Wed, 2007-09-12 at 15:47 -0700, Christoph Lameter wrote:
> On Wed, 12 Sep 2007, Peter Zijlstra wrote:
>
> > > assumes single critical user of memory. There are other consumers of
> > > memory and if you have a load that depends on other things than
> > > networking
> > > then you should not
On Wed, 12 Sep 2007, Peter Zijlstra wrote:
> > assumes single critical user of memory. There are other consumers of
> > memory and if you have a load that depends on other things than networking
> > then you should not kill the other things that want memory.
>
> The VM is a _critical_ user of
On Tue, 21 Aug 2007, Nick Piggin wrote:
> The thing I don't much like about your patches is the addition of more
> of these global reserve type things in the allocators. They kind of
> suck (not your code, just the concept of them in general -- ie. including
> the PF_MEMALLOC reserve). I'd like
On Wed, 2007-09-05 at 05:14 -0700, Christoph Lameter wrote:
> Using the VM to throttle networking is a pretty bad thing because it
> assumes single critical user of memory. There are other consumers of
> memory and if you have a load that depends on other things than networking
> then you
On Mon, Sep 10, 2007 at 12:29:32PM -0700, Christoph Lameter wrote:
> On Wed, 5 Sep 2007, Nick Piggin wrote:
>
> > Implementation issues aside, the problem is there and I would like to
> > see it fixed regardless if some/most/or all users in practice don't
> > hit it.
>
> I am all for fixing the
On Mon, 2007-09-10 at 13:22 -0700, Christoph Lameter wrote:
> On Mon, 10 Sep 2007, Peter Zijlstra wrote:
>
> > On Mon, 2007-09-10 at 12:25 -0700, Christoph Lameter wrote:
> >
> > > Of course boundless allocations from interrupt / reclaim context will
> > > ultimately crash the system. To fix
On Mon, 2007-09-10 at 13:17 -0700, Christoph Lameter wrote:
> On Mon, 10 Sep 2007, Peter Zijlstra wrote:
>
> > > Allright maybe you can get the kernel to be stable in the face of having
> > > no memory and debug all the fallback paths in the kernel when an OOM
> > > condition occurs.
> > >
> >
On Mon, 10 Sep 2007, Peter Zijlstra wrote:
> On Mon, 2007-09-10 at 12:25 -0700, Christoph Lameter wrote:
>
> > Of course boundless allocations from interrupt / reclaim context will
> > ultimately crash the system. To fix that you need to stop the networking
> > layer from performing these.
>
On Mon, 10 Sep 2007, Peter Zijlstra wrote:
> > Allright maybe you can get the kernel to be stable in the face of having
> > no memory and debug all the fallback paths in the kernel when an OOM
> > condition occurs.
> >
> > But system calls will fail? Like fork/exec? etc? There may be daemons
On Mon, 2007-09-10 at 12:41 -0700, Christoph Lameter wrote:
> On Mon, 10 Sep 2007, Peter Zijlstra wrote:
>
> > > Peter's approach establishes the
> > > limit by failing PF_MEMALLOC allocations.
> >
> > I'm not failing PF_MEMALLOC allocations. I'm more stringent in failing !
> > PF_MEMALLOC
On Mon, 2007-09-10 at 12:25 -0700, Christoph Lameter wrote:
> Of course boundless allocations from interrupt / reclaim context will
> ultimately crash the system. To fix that you need to stop the networking
> layer from performing these.
Trouble is, I don't only need a network layer to not
On Mon, 10 Sep 2007, Peter Zijlstra wrote:
> > Peter's approach establishes the
> > limit by failing PF_MEMALLOC allocations.
>
> I'm not failing PF_MEMALLOC allocations. I'm more stringent in failing !
> PF_MEMALLOC allocations.
Right you are failing other allocations.
> > If that occurs
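The disagreement above is about which allocations fail once free memory drops into the reserve. A schematic of the gate being discussed (the flag and watermark figures are illustrative, not the kernel's exact logic):

```python
MIN_WATERMARK = 128  # pages; illustrative figure

def try_alloc(free_pages, pf_memalloc):
    """Return True if the allocation may proceed.

    Tasks flagged PF_MEMALLOC (those doing reclaim/writeout) may dip below
    the watermark; ordinary allocations fail first. This is the "more
    stringent in failing !PF_MEMALLOC allocations" behaviour: the reserve
    is kept for the tasks that are trying to free memory."""
    if free_pages > MIN_WATERMARK:
        return True
    return pf_memalloc and free_pages > 0

print(try_alloc(1000, pf_memalloc=False))  # plenty free: succeeds
print(try_alloc(64, pf_memalloc=False))    # below watermark: fails
print(try_alloc(64, pf_memalloc=True))     # reclaim context: allowed
```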
On Mon, 2007-09-10 at 12:29 -0700, Christoph Lameter wrote:
> On Wed, 5 Sep 2007, Nick Piggin wrote:
>
> > Implementation issues aside, the problem is there and I would like to
> > see it fixed regardless if some/most/or all users in practice don't
> > hit it.
>
> I am all for fixing the problem
On Wed, 5 Sep 2007, Nick Piggin wrote:
> Implementation issues aside, the problem is there and I would like to
> see it fixed regardless if some/most/or all users in practice don't
> hit it.
I am all for fixing the problem but the solution can be much simpler and
more universal. F.e. the amount
On Wed, 5 Sep 2007, Daniel Phillips wrote:
> > Na, that cannot be the case since it only activates when an OOM
> > condition would otherwise result.
>
> I did not express myself clearly then. Compared to our current
> anti-deadlock patch set, your patch set is a regression. Because
> without
On 9/5/07, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> On Wednesday 05 September 2007 03:42, Christoph Lameter wrote:
> > On Wed, 5 Sep 2007, Daniel Phillips wrote:
> > > If we remove our anti-deadlock measures, including the
> > > ddsnap.vm.fixes (a roll-up of Peter's patch set) and the request
On Wednesday 05 September 2007 03:42, Christoph Lameter wrote:
> On Wed, 5 Sep 2007, Daniel Phillips wrote:
> > If we remove our anti-deadlock measures, including the
> > ddsnap.vm.fixes (a roll-up of Peter's patch set) and the request
> > throttling code in dm-ddsnap.c, and apply your patch set
On Wed, Sep 05, 2007 at 05:14:06AM -0700, Christoph Lameter wrote:
> On Wed, 5 Sep 2007, Nick Piggin wrote:
>
> > However I really have an aversion to the near enough is good enough way of
> > thinking. Especially when it comes to fundamental deadlocks in the VM. I
> > don't know whether Peter's
On Wed, 5 Sep 2007, Nick Piggin wrote:
> However I really have an aversion to the near enough is good enough way of
> thinking. Especially when it comes to fundamental deadlocks in the VM. I
> don't know whether Peter's patch is completely clean yet, but fixing the
> fundamentally broken code has
On Wed, Sep 05, 2007 at 03:42:35AM -0700, Christoph Lameter wrote:
> On Wed, 5 Sep 2007, Daniel Phillips wrote:
>
> > If we remove our anti-deadlock measures, including the ddsnap.vm.fixes
> > (a roll-up of Peter's patch set) and the request throttling code in
> > dm-ddsnap.c, and apply your
On Wed, 5 Sep 2007, Daniel Phillips wrote:
> If we remove our anti-deadlock measures, including the ddsnap.vm.fixes
> (a roll-up of Peter's patch set) and the request throttling code in
> dm-ddsnap.c, and apply your patch set instead, we hit deadlock on the
> socket write path after a few
On Tuesday 14 August 2007 07:21, Christoph Lameter wrote:
> The following patchset implements recursive reclaim. Recursive
> reclaim is necessary if we run out of memory in the writeout path
> from reclaim.
>
> This is f.e. important for stacked filesystems or anything that does
> complicated
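The recursive-reclaim idea described above can be modelled roughly: when the writeout path itself runs out of memory, reclaim again, but take only clean pages, since writing out dirty ones would itself need more memory. A toy sketch of that constraint (not Christoph's actual patch):

```python
def recursive_reclaim(pages):
    """Free only clean pages; dirty pages would need writeout (and hence
    more memory to make progress), so they are skipped entirely.
    Toy model of the idea being debated, not the real patchset."""
    freed = [p for p in pages if p["state"] == "clean"]
    pages[:] = [p for p in pages if p["state"] != "clean"]
    return len(freed)

memory = [{"state": "clean"}, {"state": "dirty"}, {"state": "clean"}]
print(recursive_reclaim(memory))  # frees the 2 clean pages
print(len(memory))                # the dirty page remains
```

The objection raised elsewhere in the thread follows directly from this model: if memory is packed with dirty or anonymous pages, there are no clean pages to take, and the recursion frees nothing.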
On Tue, Aug 21, 2007 at 05:29:27PM +0200, Peter Zijlstra wrote:
> [ now with CCs ]
>
> On Tue, 2007-08-21 at 02:28 +0200, Nick Piggin wrote:
>
> > I do of course. There is one thing to have a real lock deadlock
> > in some core path, and another to have this memory deadlock in a
> >
[ now with CCs ]
On Tue, 2007-08-21 at 02:28 +0200, Nick Piggin wrote:
> I do of course. There is one thing to have a real lock deadlock
> in some core path, and another to have this memory deadlock in a
> known-to-be-dodgy configuration (Linus said last year that he didn't
> want to go out of
On Mon, Aug 20, 2007 at 12:15:01PM -0700, Christoph Lameter wrote:
> On Mon, 20 Aug 2007, Peter Zijlstra wrote:
>
> > > > What Christoph is proposing is doing recursive reclaim and not
> > > > initiating writeout. This will only work _IFF_ there are clean pages
> > > > about. Which in the
On Mon, Aug 20, 2007 at 05:51:34AM +0200, Peter Zijlstra wrote:
> On Thu, 2007-08-16 at 05:29 +0200, Nick Piggin wrote:
> > Well perhaps it doesn't work for networked swap, because dirty accounting
> > doesn't work the same way with anonymous memory... but for _filesystems_,
> > right?
> >
> > I
On Mon, 20 Aug 2007, Peter Zijlstra wrote:
> > > What Christoph is proposing is doing recursive reclaim and not
> > > initiating writeout. This will only work _IFF_ there are clean pages
> > > about. Which in the general case need not be true (memory might be
> > > packed with anonymous pages
On Thu, 2007-08-16 at 05:29 +0200, Nick Piggin wrote:
> On Wed, Aug 15, 2007 at 03:12:06PM +0200, Peter Zijlstra wrote:
> > On Wed, 2007-08-15 at 14:22 +0200, Nick Piggin wrote:
> > > On Tue, Aug 14, 2007 at 07:21:03AM -0700, Christoph Lameter wrote:
> > > > The following patchset implements
On Thu, 16 Aug 2007, Nick Piggin wrote:
> > Honestly, I don't. They very much do not solve the problem, they just
> > displace it.
>
> Well perhaps it doesn't work for networked swap, because dirty accounting
> doesn't work the same way with anonymous memory... but for _filesystems_,
> right?
On Wed, Aug 15, 2007 at 03:12:06PM +0200, Peter Zijlstra wrote:
> On Wed, 2007-08-15 at 14:22 +0200, Nick Piggin wrote:
> > On Tue, Aug 14, 2007 at 07:21:03AM -0700, Christoph Lameter wrote:
> > > The following patchset implements recursive reclaim. Recursive reclaim
> > > is necessary if we run
On Wed, 15 Aug 2007, Peter Zijlstra wrote:
> The thing I strongly objected to was the 20%.
Well then set it to 10%. We have min_free_kbytes now and so we are used
to these limits.
> Also his approach misses the threshold - the extra condition needed to
> break out of the various network
On Wed, 15 Aug 2007, Peter Zijlstra wrote:
> Christoph's suggestion to set min_free_kbytes to 20% is ridiculous - nor
> does it solve all deadlocks :-(
Only if min_free_kbytes is really the minimum number of free pages and not
the minimum number of clean pages as I suggested.
All deadlocks?
> That is his second patch-set, and I do worry about the irq latency that
> that will introduce. It very much has the potential to ruin everything
> that cares about interactiveness or latency.
I proposed a way to avoid increasing interrupt latency
in a simple way.
-Andi
On Wed, 2007-08-15 at 16:15 +0200, Andi Kleen wrote:
> Peter Zijlstra <[EMAIL PROTECTED]> writes:
> >
> > Christoph's suggestion to set min_free_kbytes to 20% is ridiculous - nor
> > does it solve all deadlocks :-(
>
> A minimum enforced reclaimable non dirty threshold wouldn't be
> that
Peter Zijlstra <[EMAIL PROTECTED]> writes:
>
> Christoph's suggestion to set min_free_kbytes to 20% is ridiculous - nor
> does it solve all deadlocks :-(
A minimum enforced reclaimable non dirty threshold wouldn't be
that ridiculous though. So the memory could be used, just not
for dirty data.
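Andi's "minimum enforced reclaimable non dirty threshold" amounts to capping dirty data so some reclaimable clean memory always remains available; the arithmetic is simple (all figures below are made up for illustration):

```python
def max_dirty_pages(total_pages, min_clean_pages):
    """Cap on dirty pages so at least min_clean_pages stay clean, and
    hence reclaimable without writeout. Illustrative model of the idea:
    the memory may still be used, just not for dirty data."""
    return max(0, total_pages - min_clean_pages)

total = 262144            # e.g. 1 GiB of 4 KiB pages, made up
min_clean = total // 10   # keep 10% reclaimable without writeout
print(max_dirty_pages(total, min_clean))
```

Unlike raising min_free_kbytes, this keeps the reserved memory usable for caching; it only refuses to let it all become dirty at once.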
On Wed, 2007-08-15 at 14:22 +0200, Nick Piggin wrote:
> On Tue, Aug 14, 2007 at 07:21:03AM -0700, Christoph Lameter wrote:
> > The following patchset implements recursive reclaim. Recursive reclaim
> > is necessary if we run out of memory in the writeout path from reclaim.
> >
> > This is f.e.
On Tue, Aug 14, 2007 at 07:21:03AM -0700, Christoph Lameter wrote:
> The following patchset implements recursive reclaim. Recursive reclaim
> is necessary if we run out of memory in the writeout path from reclaim.
>
> This is f.e. important for stacked filesystems or anything that does
>