Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread KOSAKI Motohiro
> Did those jobs share nodes -- sometimes two or more jobs using the same
> nodes?  I am sure SGI has such users too, though such job mixes make
> the runtimes of specific jobs less obvious, so customers are more
> tolerant of variations and some inefficiencies, as they get hidden in
> the mix.

Hm,
our dedicated-node users set the memory limit to the machine's physical
memory size (minus a bit).

I don't think share vs. dedicate changes much here; either way we watch
a user-defined limit, not swap.  Am I misunderstanding?




Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Paul Jackson
Kosaki-san wrote:
> Yes.
> Fujitsu HPC middleware watching sum of memory consumption of the job
> and, if over-consumption happened, kill process and remove job schedule.

Did those jobs share nodes -- sometimes two or more jobs using the same
nodes?  I am sure SGI has such users too, though such job mixes make
the runtimes of specific jobs less obvious, so customers are more
tolerant of variations and some inefficiencies, as they get hidden in
the mix.

In other words, Rik, both yes and no ;).  Both sorts of HPC loads
exist, sharing nodes and a dedicated set of nodes for each job.

-- 
  I won't rest till it's the best ...
  Programmer, Linux Scalability
  Paul Jackson <[EMAIL PROTECTED]> 1.940.382.4214


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Paul Jackson
Rik wrote:
> In that case the user is better off having that job killed and
> restarted elsewhere, than having all of the jobs on that node
> crawl to a halt due to swapping.
> 
> Paul, is this guess correct? :)

Not for the loads I focus on.  Each job gets exclusive use of its own
dedicated set of nodes, for the duration of the job.  With that comes a
quite specific upper limit on how much memory, in total, including node
local kernel data, that job is allowed to use.

One problem with swapping is that nodes aren't entirely isolated.
They share buses, i/o channels, disk arms, kernel data cache lines and
kernel locks with other nodes, running other jobs.   A job thrashing
its swap is a drag on the rest of the system.

Another problem with swapping is that it's a waste of resources.  Once
a pure compute bound job goes into swapping when it shouldn't, that job
has near zero hope of continuing with the intended performance, as it
has just slowed from main memory speeds to disk speeds, which are
thousands of times slower.  Best to get it out of there, immediately.

-- 
  I won't rest till it's the best ...
  Programmer, Linux Scalability
  Paul Jackson <[EMAIL PROTECTED]> 1.940.382.4214


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread KOSAKI Motohiro
Hi Rik

> > Sounds like a job for memory limits (ulimit?), not for OOM
> > notification, right?
> 
> I suspect one problem could be that an HPC job scheduling program
> does not know exactly how much memory each job can take, so it can
> sometimes end up making a mistake and overcommitting the memory on
> one HPC node.
> 
> In that case the user is better off having that job killed and
> restarted elsewhere, than having all of the jobs on that node
> crawl to a halt due to swapping.
> 
> Paul, is this guess correct? :)

Yes.
Fujitsu HPC middleware watches the total memory consumption of each job
and, if over-consumption happens, kills the process and removes the job
from the schedule.

I think that is a common HPC requirement,
but we watch a user-defined memory limit, not swap.

Thanks.




Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Rik van Riel
On Tue, 19 Feb 2008 23:28:28 +0100
Pavel Machek <[EMAIL PROTECTED]> wrote:

> Sounds like a job for memory limits (ulimit?), not for OOM
> notification, right?

I suspect one problem could be that an HPC job scheduling program
does not know exactly how much memory each job can take, so it can
sometimes end up making a mistake and overcommitting the memory on
one HPC node.

In that case the user is better off having that job killed and
restarted elsewhere, than having all of the jobs on that node
crawl to a halt due to swapping.

Paul, is this guess correct? :)

-- 
All rights reversed.


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Paul Jackson
Pavel, responding to pj:
> > There is not much my customers' HPC jobs can do with notification before
> > swap.  Their jobs either have the main memory they need to perform the
> > requested calculations with the desired performance, or their job is
> > useless and should be killed.  Unlike the applications you describe,
> > my customers' jobs have no way, once running, to adapt to less
> > memory.
> 
> Sounds like a job for memory limits (ulimit?), not for OOM
> notification, right?

Er eh -- which one?

The only one I see that might help keep a multi-threaded job
using various kinds of memory on multiple nodes confined could
be the resident set size (RLIMIT_RSS; ulimit -m).  So far as
I can tell, that one is a pure no-op in Linux.

Here's the bash list of all available ulimit (setrlimit) options:

  -a All current limits are reported
  -c The maximum size of core files created
  -d The maximum size of a process's data segment
  -e The maximum scheduling priority ("nice")
  -f The maximum size of files written by the shell and its children
  -i The maximum number of pending signals
  -l The maximum size that may be locked into memory
  -m The maximum resident set size
  -n The maximum number of open file descriptors (most systems do not allow this value to be set)
  -p The pipe size in 512-byte blocks (this may not be set)
  -q The maximum number of bytes in POSIX message queues
  -r The maximum real-time scheduling priority
  -s The maximum stack size
  -t The maximum amount of cpu time in seconds
  -u The maximum number of processes available to a single user
  -v The maximum amount of virtual memory available to the shell
  -x The maximum number of file locks

Did I miss seeing one that would be useful?
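
For what it's worth, the closest live candidate is -v (RLIMIT_AS).
Here is a minimal sketch (my illustration, not from any patch) of
capping a job with it; note that -m (RLIMIT_RSS) is accepted by
setrlimit() but, as noted above, is a no-op on Linux:

	/* sketch: cap a process at 1 GiB of virtual address space */
	#include <sys/resource.h>
	#include <stdio.h>

	int main(void)
	{
		struct rlimit rl = { 1UL << 30, 1UL << 30 };

		if (setrlimit(RLIMIT_AS, &rl) != 0) {
			perror("setrlimit");
			return 1;
		}
		/* exec the job here; mmap/brk past the cap now fail
		 * with ENOMEM instead of pushing the node into swap */
		return 0;
	}

But that caps per-process virtual memory, not the per-job total of
resident and kernel memory that matters here.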

Actually, given the chronic problem we've had over the years accounting
for how much memory in total an application is using (including text,
data, stack, mapped files, locked pages, kernel memory structures that
an application is using many of, ...), I'd be surprised if any such
ulimit existed that actually worked for this purpose (confining an HPC
job to using almost exactly all the memory available to it, but no
more.)

-- 
  I won't rest till it's the best ...
  Programmer, Linux Scalability
  Paul Jackson <[EMAIL PROTECTED]> 1.940.382.4214


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Pavel Machek
On Tue 2008-02-19 09:00:08, Paul Jackson wrote:
> Kosaki-san wrote:
> > Thank you for your wonderful, interesting comments.
> 
> You're most welcome.  The pleasure is all mine.
> 
> > you think we should kill the process just after swap, right?
> > but unfortunately, most users hope to receive notification before swap ;-)
> > because they want to avoid swap.
> 
> There is not much my customers' HPC jobs can do with notification before
> swap.  Their jobs either have the main memory they need to perform the
> requested calculations with the desired performance, or their job is
> useless and should be killed.  Unlike the applications you describe,
> my customers' jobs have no way, once running, to adapt to less
> memory.

Sounds like a job for memory limits (ulimit?), not for OOM
notification, right?
Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Paul Jackson
pj, talking to himself:
> Of course
> for embedded use, I'd have to adapt it to a non-cpuset based mechanism
> (not difficult), as embedded definitely doesn't do cpusets.

I'm forgetting an important detail here.  Kosaki-san has clearly stated
that this hook, at vmscan's writepage, is too late for his embedded needs,
and that they need the feedback a bit earlier, when the page moves from
the active list to the inactive list.

However, except for the placement of such hooks in three or four
places, rather than just one, it may well be (if cpusets could be
factored out) that one mechanism would meet all needs ... except for
that pesky HPC need for throttling to more or less zero the swapping
from select cpusets.

-- 
  I won't rest till it's the best ...
  Programmer, Linux Scalability
  Paul Jackson <[EMAIL PROTECTED]> 1.940.382.4214


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Paul Jackson
Rik wrote:
> Basically in all situations, the kernel needs to warn at the same point
> in time: when the system is about to run out of RAM for anonymous pages.
>
> ...
> 
> In the HPC case, it leads to swapping (and a management program can kill or
> restart something else).

Thanks for stopping by ...

Perhaps with the cgroup based memory controller in progress, or with
other work I'm overlooking, this is not, or soon will not be, a problem,
but on 2.6.16 kernels (the latest ones I have in major production HPC
use) this is not sufficient.

As of at least that point, we don't (didn't?) have sufficiently
accurate notice of when we were "about to run out".  We can only
detect when "we just did run out", as evidenced by entering the direct
reclaim code, or by slightly later events such as starting to push
Anon pages to the swap device from direct reclaim.

Actually, even the point that we enter direct reclaim, near the bottom
of __alloc_pages(), isn't adequate either, as we could be there because
some thread in that cpuset is trying to write out a results file that
is larger than that cpuset's memory.  In that case, we really don't want
to kill the job ... it just needs to be (and routinely is) throttled
back to disk speeds as it completes the write out of dirty file system
pages.

So the first clear spot where we -know- serious swapping is commencing
is where the direct reclaim code calls a writepage op with an Anon
page.  At that point, having a management program intervene is entirely
too late.  Even having the task at that instant, inline, tag itself
with a SIGKILL, as it queues that first Anon page to a swap device, is
too late.  The direct reclaim code can loop, pushing hundreds or
thousands of pages, on big memory systems, to the swapper, in the
current reclaim loop, before it pops the stack far enough back to even
notice that it has a SIGKILL pending on it.  The suppression of pushing
pages to the swapper has to happen right there, inline in some
mm/vmscan.c code, as part of the direct reclaim loops.

(Hopefully I said something stupid in that last paragraph, and you will
be able to correct it ... it sure would be useful ;).
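
For illustration, here is the shape of the inline suppression I have
in mind; a sketch only, not the actual patch we carry, and
cpuset_swap_forbidden() is a hypothetical per-cpuset predicate:

	/* sketch: in mm/vmscan.c pageout(), just before the
	 * writepage call in the direct reclaim path */
	if (PageAnon(page) && cpuset_swap_forbidden(current)) {
		force_sig(SIGKILL, current);	/* die now, inline */
		return PAGE_KEEP;		/* queue nothing to swap */
	}
	if (clear_page_dirty_for_io(page)) {
		...
		res = mapping->a_ops->writepage(page, &wbc);

The point is simply that the check runs inside the reclaim loop
itself, before any page is queued to the swapper.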

A year or two ago, I added the 'memory_pressure' per-cpuset meter to
Linux, in an effort to realize just what you suggest, Rik.  My colleagues
at SGI (mostly) and myself (a little) have proven to ourselves that this
doesn't work, for our HPC needs, for two reasons:

 1) once swapping begins, issuing a SIGKILL, no matter how instantly,
is too late, as explained above, and

 2) that memory_pressure combines and confuses memory pressure due to
dirty file system buffers filling memory, with memory pressure due
to anonymous swappable pages filling memory, also as explained above.

I do have a patch in my playarea that adds two more of the
memory_pressure meters, one for swapouts, and one for flushing dirty
file system buffers, both hooking into the spot in the vmscan reclaim
code where the writepage op is called.  This patch ~might~ address the
desktop need here.  It nicely generates two clean, sharp indicators
that we're getting throttled by direct reclaim of dirty file buffers,
and that we're starting to reclaim anon pages to the swappers.  Of course
for embedded use, I'd have to adapt it to a non-cpuset based mechanism
(not difficult), as embedded definitely doesn't do cpusets.

> Don't forget the "hooks for desktop" :)

I'll agree (perhaps out of ignorance) that the desktop (and normal sized
server) cases are like the embedded case ... in that they need to
distribute some event to user space tasks that want to know that memory
is short, so that user space code can do what it will (reclaim some
user space memory, or restart or kill or throttle something?)

However, I'm still stuck carrying a patch out of the community kernel,
to get my HPC customers the "instant kill on direct reclaim swap" they
need, as this still seems to be the special case.  Which is rather
unfortunate, from the business perspective of my good employer, as it is
the -only- out-of-mainline patch, so far as I know, that we have been
having to carry, continuously, for several years now.  But for that
single long standing issue (and now and then various more short term
issues), a vanilla distribution kernel, using the vanilla distribution
config and build, runs production on our one or two thousand CPU,
several terabyte big honkin NUMA boxes.

Part of my motivation for engaging Kosaki-san in this discussion was
to reinvigorate this discussion, as it's getting to be time I took
another shot at getting something into the community kernel that
addresses this.  The more overlap the HPC fix here has with the other
99.978% of the world's Linux systems that are desktop, laptop,
(ordinary sized) server or embedded, the better my chances (he said
hopefully.)

(Now if I could just get us to consider systems in proportion to
how much power & cooling they need, rather than in proportion to
unit sales ... ;)

-- 

Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Rik van Riel
On Tue, 19 Feb 2008 09:00:08 -0600
Paul Jackson <[EMAIL PROTECTED]> wrote:

> Depending on what we're trying to do:
>  1) warn applications of swap coming soon (your case),
>  2) show how close we are to swapping,
>  3) show how much swap has happened already,
>  4) kill instantly if a job tries to swap (my hpc case),
>  5) measure file i/o caused by memory pressure, or
>  6) perhaps other goals,
> we will need to hook different places in the kernel.
> 
> It may well be that your hooks for embedded are simply in different
> places than my hooks for HPC.  If so, that's fine.

Don't forget the "hooks for desktop" :)

Basically in all situations, the kernel needs to warn at the same point
in time: when the system is about to run out of RAM for anonymous pages.

In the desktop case, that leads to swapping (and programs can free memory).

In the embedded case, it leads to OOM (and a management program can kill or
restart something else, or a program can restart itself).

In the HPC case, it leads to swapping (and a management program can kill or
restart something else).

I do not see the kernel side being different between these situations, only
userspace reacts differently in the different scenarios.

Am I overlooking something?

-- 
All Rights Reversed


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-19 Thread Paul Jackson
Kosaki-san wrote:
> Thank you for your wonderful, interesting comments.

You're most welcome.  The pleasure is all mine.

> you think we should kill the process just after swap, right?
> but unfortunately, most users hope to receive notification before swap ;-)
> because they want to avoid swap.

There is not much my customers' HPC jobs can do with notification before
swap.  Their jobs either have the main memory they need to perform the
requested calculations with the desired performance, or their job is
useless and should be killed.  Unlike the applications you describe,
my customers' jobs have no way, once running, to adapt to less memory.
They can only adapt to less memory by being restarted with a different
set of resource requests to the job scheduler (the application that
manages job requests, assigns them CPU, memory and other resources,
and monitors, starts, stops and pauses jobs.)

The primary difficulty my HPC customers have is killing such jobs fast
enough, before a bad job (one that attempts to use more memory than it
signed up for) can harm the performance of other users and the rest of
the system.

I don't mind if pages are slowly or occasionally written to swap;
but as soon as the task wants to reclaim big chunks of memory by
writing thousands of pages at once to swap, it must die, and die
before it can queue more than a handful of those pages to the swapper.

> but embedded people strongly dislike code-size bloat.
> I think they never turn on CPUSET.
> 
> I hope mem_notify works fine without CPUSET.

Yes - understood and agreed - as I guessed, cpusets are not configured
in embedded systems.

> Please don't think I reject your idea.
> your proposal is largely different from our past discussion

Yes - I agree that my ideas were quite different.  Please don't
hesitate to reject every one of them, like a Samurai slicing through
air with his favorite sword.

> Disagreed. that [my direct reclaim hook at mapping->a_ops->writepage()]
> is too late.

For your work, yes that hook is too late.  Agreed.

Depending on what we're trying to do:
 1) warn applications of swap coming soon (your case),
 2) show how close we are to swapping,
 3) show how much swap has happened already,
 4) kill instantly if a job tries to swap (my hpc case),
 5) measure file i/o caused by memory pressure, or
 6) perhaps other goals,
we will need to hook different places in the kernel.

It may well be that your hooks for embedded are simply in different
places than my hooks for HPC.  If so, that's fine.

I look forward to your further thoughts.

-- 
  I won't rest till it's the best ...
  Programmer, Linux Scalability
  Paul Jackson <[EMAIL PROTECTED]> 1.940.382.4214


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-18 Thread KOSAKI Motohiro
Hi Paul,

Thank you for your wonderful, interesting comments.
Your comments are really nice.

I was an HPC guy with a large NUMA box in the past.
I promise I won't ignore HPC users.
but unfortunately I have no experience of using CPUSET,
because at that point it was still under development.

I hope to discuss the CPUSET usage case and mem_notify requirements with you.
to be honest, I thought HPC users wouldn't use mem_notify, sorry.


> I have what seems, intuitively, a similar problem at the opposite
> end of the world, on big-honkin NUMA boxes (hundreds or thousands of
> CPUs, terabytes of main memory.)  The problem there is often best
> resolved if we can kill the offending task, rather than shrink its
> memory footprint.  The situation is that several compute intensive
> multi-threaded jobs are running, each in their own dedicated cpuset.

agreed.

> So we like to identify such jobs as soon as they begin to swap,
> and kill them very very quickly (before the direct reclaim code
> in mm/vmscan.c can push more than a few pages to the swap device.)

you think we should kill the process just after swap, right?
but unfortunately, most users hope to receive notification before swap ;-)
because they want to avoid swap.

I think we need to discuss this point more.


> For a much earlier, unsuccessful, attempt to accomplish this, see:
> 
>   [Patch] cpusets policy kill no swap
>   http://lkml.org/lkml/2005/3/19/148
> 
> Now, it may well be that we are too far apart to share any part of
> a solution; one seldom uses the same technology to build a Tour de
> France bicycle as one uses to build a Lockheed C-5A Galaxy heavy
> cargo transport.
> 
> One clear difference is the policy of what action we desire to take
> when under memory pressure: do we invite user space to free memory so
> as to avoid the wrath of the oom killer, or do we go to the opposite
> extreme, seeking a nearly instant killing, faster than the oom
> killer can even begin its search for a victim.

Hmm, sorry,
I don't understand your patch yet, because I don't know CPUSET very well.

I'll learn more about CPUSET this week and reply again next week ;-)


> Another clear difference is the use of cpusets, which are a major and
> vital part of administering the big NUMA boxes, and I presume are not
> even compiled into embedded kernels (correct?).  This difference may be
> unbridgeable ... these big NUMA systems require per-cpuset mechanisms,
> whereas embedded may require builds without cpusets.

Yes, some embedded distributions (e.g. MontaVista) are distributed as source,
but embedded people strongly dislike code-size bloat.
I think they never turn on CPUSET.

I hope mem_notify works fine without CPUSET.


> 1) You have a little bit of code in the kernel to throttle the
>thundering herd problem.  Perhaps this could be moved to user space
>... one user daemon that is always notified of such memory pressure
>alarms, and in turn notifies interested applications.  This might
>avoid the need to add poll_wait_exclusive() to the kernel.  And it
>moves any fussy details of how to tame the thundering herd out of
>the kernel.

I think you are talking about a user-space OOM manager.
It and ordinary user processes are obviously different.

I doubt the memory manager daemon model works on desktops and
typical servers.
thus, the current implementation is optimized for a no-manager environment.

of course, that doesn't mean I refuse to add code for an OOM manager.
it is a very interesting idea.

I hope to discuss it more.


> 2) Another possible mechanism for communicating events from
>the kernel to user space is inotify.  For example, I added
>the line:
> 
>   fsnotify_modify(dentry);   # dentry is current task's cpuset

Excellent!
that is a really good idea.

thanks.


> 3) Perhaps, instead of sending simple events, one could update
>a meter of the rate of recent such events, such as the per-cpuset
>'memory_pressure' mechanism does.  This might lead to addressing
>Andrew Morton's comment:
> 
>   If this feature is useful then I'd expect that some
>   applications would want notification at different times, or at
>   different levels of VM distress.  So this semi-randomly-chosen
>   notification point just won't be strong enough in real-world
>   use.

Hmmm, I don't think so.
I think the timing of memory_pressure_notify(1) is already best.

a page moving from the active list to the inactive list indicates
swap I/O will happen a bit later.

but memory_pressure_notify(0) is a bit messy.
I'll try to simplify it more.


> 4) A place that I found well suited for my purposes (watching for
>swapping from direct reclaim) was just before the lines in the
>pageout() routine in mm/vmscan.c:
> 
>   if (clear_page_dirty_for_io(page)) {
>   ...
>   res = mapping->a_ops->writepage(page, &wbc);
> 
> It seemed that testing "PageAnon(page)" here allowed me to easily
> distinguish between dirty pages going back to the file system, and
> pages going to swap (this detail is

Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-17 Thread Paul Jackson
I just noticed this patchset, kosaki-san.  It looks quite interesting;
my apologies for not commenting earlier.

I see mention somewhere that mem_notify is of particular interest to
embedded systems.

I have what seems, intuitively, a similar problem at the opposite
end of the world, on big-honkin NUMA boxes (hundreds or thousands of
CPUs, terabytes of main memory.)  The problem there is often best
resolved if we can kill the offending task, rather than shrink its
memory footprint.  The situation is that several compute intensive
multi-threaded jobs are running, each in their own dedicated cpuset.

If one of these jobs tries to use more memory than is available in
its cpuset, then

  (1) we quickly lose any hope of that job continuing at the excellent
  performance needed of it, and

  (2) we rapidly get increased risk of that job starting to swap and
  unintentionally impact shared resources (kernel locks, disk
  channels, disk heads).

So we like to identify such jobs as soon as they begin to swap,
and kill them very very quickly (before the direct reclaim code
in mm/vmscan.c can push more than a few pages to the swap device.)

For a much earlier, unsuccessful, attempt to accomplish this, see:

[Patch] cpusets policy kill no swap
http://lkml.org/lkml/2005/3/19/148

Now, it may well be that we are too far apart to share any part of
a solution; one seldom uses the same technology to build a Tour de
France bicycle as one uses to build a Lockheed C-5A Galaxy heavy
cargo transport.

One clear difference is the policy of what action we desire to take
when under memory pressure: do we invite user space to free memory so
as to avoid the wrath of the oom killer, or do we go to the opposite
extreme, seeking a nearly instant killing, faster than the oom
killer can even begin its search for a victim.

Another clear difference is the use of cpusets, which are a major and
vital part of administering the big NUMA boxes, and I presume are not
even compiled into embedded kernels (correct?).  This difference may be
unbridgeable ... these big NUMA systems require per-cpuset mechanisms,
whereas embedded may require builds without cpusets.

However ... there might be some useful cross pollination of ideas.

I see in the latest posts to your mem_notify patchset v6, responding
to comments by Andrew and Andi on Feb 12 and 13, that you decided to
think more about the design of this, so perhaps this is a good time
for some random ideas from myself, even though I'm clearly coming from
a quite different problem space in some ways.

1) You have a little bit of code in the kernel to throttle the
   thundering herd problem.  Perhaps this could be moved to user space
   ... one user daemon that is always notified of such memory pressure
   alarms, and in turn notifies interested applications.  This might
   avoid the need to add poll_wait_exclusive() to the kernel.  And it
   moves any fussy details of how to tame the thundering herd out of
   the kernel.

2) Another possible mechanism for communicating events from
   the kernel to user space is inotify.  For example, I added
   the line:

	fsnotify_modify(dentry);   # dentry is current task's cpuset

   at an interesting spot in vmscan.c; using inotify-tools, one can
   easily watch all cpusets for these events from one user space
   daemon (see the sketch at the end of this message).

   At this point, I have no idea whether this odd use of inotify
   is better or worse than what your patchset has.  However using
   inotify did require less new kernel code, and with such user space
   mechanisms as inotify-tools already well developed, it made the
   problem I had, of watching an entire hierarchy of special files
   (beneath /dev/cpuset) very easy to implement.  At least inotify
   also presents events on a file descriptor that can be consumed
   using a poll() loop.

3) Perhaps, instead of sending simple events, one could update
   a meter of the rate of recent such events, such as the per-cpuset
   'memory_pressure' mechanism does.  This might lead to addressing
   Andrew Morton's comment:

If this feature is useful then I'd expect that some
applications would want notification at different times, or at
different levels of VM distress.  So this semi-randomly-chosen
notification point just won't be strong enough in real-world
use.

4) A place that I found well suited for my purposes (watching for
   swapping from direct reclaim) was just before the lines in the
   pageout() routine in mm/vmscan.c:

if (clear_page_dirty_for_io(page)) {
...
res = mapping->a_ops->writepage(page, &wbc);

   It seemed that testing "PageAnon(page)" here allowed me to easily
   distinguish between dirty pages going back to the file system, and
   pages going to swap (this detail is from work on a 2.6.16 kernel;
   things might have changed.)

   One possible advantage of the above hook in the direct reclaim
   code path in vmscan.c i
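
Returning to (2), here is a minimal sketch of the user space side (my
illustration; it assumes cpusets mounted at /dev/cpuset, a hypothetical
cpuset named jobA, and the fsnotify_modify() hook described above):

	/* sketch: one daemon watching a cpuset directory for the
	 * fsnotify_modify() events generated by the kernel hook */
	#include <sys/inotify.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[4096];
		int fd = inotify_init();

		if (fd < 0 ||
		    inotify_add_watch(fd, "/dev/cpuset/jobA", IN_MODIFY) < 0) {
			perror("inotify");
			return 1;
		}
		for (;;) {
			if (read(fd, buf, sizeof(buf)) <= 0)
				break;
			/* the kernel touched a file in this cpuset:
			 * fan the alarm out to interested applications */
			printf("memory pressure event in jobA\n");
		}
		return 0;
	}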

Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-11 Thread KOSAKI Motohiro
> > the Linux Today article is a very nice description. (great work by Jake Edge)
> > http://www.linuxworld.com/news/2008/020508-kernel.html
>
> Just for future reference...the above-mentioned article is from LWN,
> syndicated onto LinuxWorld.  It has, so far as I know, never been near
> Linux Today.
>
> Glad you liked it, though :)

Oops, sorry.
I had a serious misunderstanding ;-)

sorry again.
and thank you for your helpful message.


[PATCH 0/8][for -mm] mem_notify v6

2008-02-11 Thread Jonathan Corbet
> the Linux Today article is a very nice description. (great work by Jake Edge)
> http://www.linuxworld.com/news/2008/020508-kernel.html

Just for future reference...the above-mentioned article is from LWN,
syndicated onto LinuxWorld.  It has, so far as I know, never been near
Linux Today.

Glad you liked it, though :)

Thanks,

jon


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-09 Thread KOSAKI Motohiro
Hi Rik

> More importantly, all gtk+ programs, as well as most databases and other
> system daemons have a poll() loop as their main loop.

not only gtk+, maybe all modern GUI programs :)


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-09 Thread Rik van Riel
On Sun, 10 Feb 2008 01:33:49 +0900
"KOSAKI Motohiro" <[EMAIL PROTECTED]> wrote:

> > Where is the netlink interface? Polling an FD is so last century :)
> 
> to be honest, I don't know anyone who uses netlink, or why one would
> hope to receive low memory notifications by netlink.
> 
> poll() is the old way, but it works well enough.

More importantly, all gtk+ programs, as well as most databases and other
system daemons have a poll() loop as their main loop.

A file descriptor fits that main loop perfectly.
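
A minimal sketch of that integration (my illustration, assuming the
/dev/mem_notify device from this patch series):

	/* sketch: the mem_notify fd in an ordinary poll() main loop */
	#include <poll.h>
	#include <fcntl.h>
	#include <stdio.h>

	int main(void)
	{
		struct pollfd pfd;

		pfd.fd = open("/dev/mem_notify", O_RDONLY);
		if (pfd.fd < 0) {
			perror("open");
			return 1;
		}
		pfd.events = POLLIN;

		for (;;) {
			if (poll(&pfd, 1, -1) < 0)
				break;	/* the app's other fds go here too */
			if (pfd.revents & POLLIN) {
				/* kernel says memory is tight: free
				 * caches before the OOM killer acts */
			}
		}
		return 0;
	}

A gtk+ program would simply hand the same fd to its existing main
loop (e.g. a GIOChannel watch) instead of calling poll() directly.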

-- 
All rights reversed.


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-09 Thread KOSAKI Motohiro
Hi

> Interesting patch series (I am being yuppie and reading this thread
> from my iPhone on a treadmill at the gym - so further comments later).
> I think that this is broadly along the lines that I was thinking, but
> this should be an RFC only patch series for now.

sorry, I will fix that in the next post.


> Some initial questions:

Thank you.
I welcome any discussion.

> Where is the netlink interface? Polling an FD is so last century :)

to be honest, I don't know anyone who uses netlink, or why one would
hope to receive low memory notifications by netlink.

poll() is the old way, but it works well enough.

and netlink has a weak point:
in the end, the netlink philosophy is a read/write model.

I am afraid that too many low-mem messages would be queued in the
netlink buffer under heavy pressure.
that would make the memory pressure worse.


> Still, it is good to start with some code - eventually we might just
> have a full reservation API created. Rik and I and others have bounced
> ideas around for a while and I hope we can pitch in. I will play with
> these patches later.

Great.
I welcome any ideas and any discussion.


Re: [PATCH 0/8][for -mm] mem_notify v6

2008-02-09 Thread Jon Masters

Yo,

Interesting patch series (I am being yuppie and reading this thread  
from my iPhone on a treadmill at the gym - so further comments later).  
I think that this is broadly along the lines that I was thinking, but  
this should be an RFC only patch series for now.


Some initial questions:

Where is the netlink interface? Polling an FD is so last century :)

What testing have you done?

Still, it is good to start with some code - eventually we might just  
have a full reservation API created. Rik and I and others have bounced  
ideas around for a while and I hope we can pitch in. I will play with  
these patches later.


Jon.



On Feb 9, 2008, at 10:19, "KOSAKI Motohiro" <[EMAIL PROTECTED]> wrote:



Hi

The /dev/mem_notify is a low memory notification device.
it can avoid swapping and OOM by cooperating with the user process.

the Linux Today article is a very nice description. (great work by Jake Edge)
http://www.linuxworld.com/news/2008/020508-kernel.html


When memory gets tight, it is quite possible that applications have
memory allocated—often caches for better performance—that they could
free.  After all, it is generally better to lose some performance than
to face the consequences of being chosen by the OOM killer.
But, currently, there is no way for a process to know that the kernel
is feeling memory pressure.
The patch provides a way for interested programs to monitor the
/dev/mem_notify file to be notified if memory starts to run low.



You need not be annoyed by OOM any longer :)
please send any comments!

patch list
  [1/8] introduce poll_wait_exclusive() new API
  [2/8] introduce wake_up_locked_nr() new API
  [3/8] introduce /dev/mem_notify new device (the core of this patch series)
  [4/8] memory_pressure_notify() caller
  [5/8] add new mem_notify field to /proc/zoneinfo
  [6/8] (optional) fixed incorrect shrink_zone
  [7/8] ignore very small zones to prevent incorrect low mem notify
  [8/8] support fasync feature


related discussion:
--
LKML OOM notifications requirement discussion
   
http://www.gossamer-threads.com/lists/linux/kernel/832802?nohighlight=1#832802
OOM notifications patch [Marcelo Tosatti]
   http://marc.info/?l=linux-kernel&m=119273914027743&w=2
mem notifications v3 [Marcelo Tosatti]
   http://marc.info/?l=linux-mm&m=119852828327044&w=2
Thrashing notification patch  [Daniel Spang]
   http://marc.info/?l=linux-mm&m=119427416315676&w=2
mem notification v4
   http://marc.info/?l=linux-mm&m=120035840523718&w=2
mem notification v5
   http://marc.info/?l=linux-mm&m=120114835421602&w=2

Changelog
-
v5 -> v6 (by KOSAKI Motohiro)
  o rebase to 2.6.24-mm1
  o fixed thundering herd guard formula.

v4 -> v5 (by KOSAKI Motohiro)
  o rebase to 2.6.24-rc8-mm1
  o change display order of /proc/zoneinfo
  o ignore very small zone
  o support fcntl(F_SETFL, FASYNC)
  o fixed some trivial bugs.

v3 -> v4 (by KOSAKI Motohiro)
  o rebase to 2.6.24-rc6-mm1
  o avoid wake up all.
  o add judgement point to __free_one_page().
  o add zone awareness.

v2 -> v3 (by Marcelo Tosatti)
  o changes the notification point to happen whenever
the VM moves an anonymous page to the inactive list.
  o implement notification rate limit.

v1(oom notify) -> v2 (by Marcelo Tosatti)
  o name change
  o notify timing change from just swap thrashing to
just before thrashing.
  o also works with swapless device.




[PATCH 0/8][for -mm] mem_notify v6

2008-02-09 Thread KOSAKI Motohiro
Hi

The /dev/mem_notify is a low memory notification device.
it can avoid swapping and OOM by cooperating with the user process.

the Linux Today article is a very nice description. (great work by Jake Edge)
http://www.linuxworld.com/news/2008/020508-kernel.html


When memory gets tight, it is quite possible that applications have memory
allocated—often caches for better performance—that they could free.
After all, it is generally better to lose some performance than to face the
consequences of being chosen by the OOM killer.
But, currently, there is no way for a process to know that the kernel is
feeling memory pressure.
The patch provides a way for interested programs to monitor the
/dev/mem_notify file to be notified if memory starts to run low.



You need not be annoyed by OOM any longer :)
please send any comments!

patch list
   [1/8] introduce poll_wait_exclusive() new API
   [2/8] introduce wake_up_locked_nr() new API
   [3/8] introduce /dev/mem_notify new device (the core of this patch series)
   [4/8] memory_pressure_notify() caller
   [5/8] add new mem_notify field to /proc/zoneinfo
   [6/8] (optional) fixed incorrect shrink_zone
   [7/8] ignore very small zones to prevent incorrect low mem notify.
   [8/8] support fasync feature


related discussion:
--
 LKML OOM notifications requirement discussion

http://www.gossamer-threads.com/lists/linux/kernel/832802?nohighlight=1#832802
 OOM notifications patch [Marcelo Tosatti]
http://marc.info/?l=linux-kernel&m=119273914027743&w=2
 mem notifications v3 [Marcelo Tosatti]
http://marc.info/?l=linux-mm&m=119852828327044&w=2
 Thrashing notification patch  [Daniel Spang]
http://marc.info/?l=linux-mm&m=119427416315676&w=2
 mem notification v4
http://marc.info/?l=linux-mm&m=120035840523718&w=2
 mem notification v5
http://marc.info/?l=linux-mm&m=120114835421602&w=2

Changelog
-
 v5 -> v6 (by KOSAKI Motohiro)
   o rebase to 2.6.24-mm1
   o fixed thundering herd guard formula.

 v4 -> v5 (by KOSAKI Motohiro)
   o rebase to 2.6.24-rc8-mm1
   o change display order of /proc/zoneinfo
   o ignore very small zone
   o support fcntl(F_SETFL, FASYNC) (see the sketch below)
   o fixed some trivial bugs.

 v3 -> v4 (by KOSAKI Motohiro)
   o rebase to 2.6.24-rc6-mm1
   o avoid wake up all.
   o add judgement point to __free_one_page().
   o add zone awareness.

 v2 -> v3 (by Marcelo Tosatti)
   o changes the notification point to happen whenever
 the VM moves an anonymous page to the inactive list.
   o implement notification rate limit.

 v1(oom notify) -> v2 (by Marcelo Tosatti)
   o name change
   o notify timing change from just swap thrashing to
 just before thrashing.
   o also works with swapless device.
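
Usage sketch of the FASYNC mode (my illustration; it assumes only the
fcntl(F_SETFL, FASYNC) support listed in the v5 changelog above):

	/* sketch: SIGIO-driven use of /dev/mem_notify */
	#include <fcntl.h>
	#include <signal.h>
	#include <stdio.h>
	#include <unistd.h>

	static volatile sig_atomic_t low_mem;

	static void on_sigio(int sig)
	{
		low_mem = 1;
	}

	int main(void)
	{
		int fd = open("/dev/mem_notify", O_RDONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		signal(SIGIO, on_sigio);
		fcntl(fd, F_SETOWN, getpid());	/* route SIGIO to us */
		fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | FASYNC);

		for (;;) {
			pause();	/* real work goes here */
			if (low_mem) {
				low_mem = 0;
				/* free caches before OOM happens */
			}
		}
	}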