On Fri 14-06-13 03:12:52, David Rientjes wrote:
> On Fri, 14 Jun 2013, Kamezawa Hiroyuki wrote:
>
> > Reading your discussion, I think I understand your requirements.
> > The problem is that I can't see that you took all options into
> > account and found that the best way is this new oom_delay. IOW, I
> > am not convinced oom-delay is the best way to handle your
On Tue, 25 Jun 2013, Kamezawa Hiroyuki wrote:
> Considering only memcg, bypassing all charge-limit-check will work.
> But as you say, that will not work against global-oom.
I think it will since we have per-zone memory reserves that can be
bypassed in the page allocator, not to the level of
On Fri, 14 Jun 2013, David Rientjes wrote:
> Even with all of the above (which is not actually that invasive of a
> patch), I still think we need memory.oom_delay_millisecs. I probably made
> a mistake in describing what that is addressing if it seems like it's
> trying to address any of the
On Wed 12-06-13 14:27:05, David Rientjes wrote:
> On Wed, 12 Jun 2013, Michal Hocko wrote:
>
> > But the objective is to handle oom deadlocks gracefully and you cannot
> > possibly miss those as they are, well, _deadlocks_.
>
> That's not at all the objective, the changelog quite explicitly states
> this is a deadlock as the result of userspace having disabled the oom
> killer so that its userspace oom handler can resolve the condition and
> it being unresponsive or unable
On Tue 11-06-13 13:33:40, David Rientjes wrote:
> On Mon, 10 Jun 2013, Michal Hocko wrote:
[...]
> > Your only objection against userspace handling so far was that admin
> > has no control over sub-hierarchies. But that is hardly a problem. A
> > periodic tree scan would solve this easily (just to
On Mon, 10 Jun 2013, Michal Hocko wrote:
> > > > I don't think you yet understand the problem, which is probably my
> > > > fault. I'm not insisting this be implemented in the kernel, I'm
> > > > saying it's not possible to do it in userspace.
> > >
> > > Because you still insist on having this
On Wed 05-06-13 17:09:17, David Rientjes wrote:
> On Wed, 5 Jun 2013, Michal Hocko wrote:
>
> > > For the aforementioned reason that we give users the ability to
> > > manipulate their own memcg trees and userspace is untrusted. Their
> > > oom notifiers cannot be run as root, not only because of security
> > > but also because it would not properly isolate
On Tue 04-06-13 14:48:52, Johannes Weiner wrote:
> On Tue, Jun 04, 2013 at 11:17:49AM +0200, Michal Hocko wrote:
[...]
> > Now that I am looking at this again I've realized that this
> > is not correct. The task which triggers memcg OOM will not
> > have memcg_oom.memcg set so it would trigger a global
On Tue 04-06-13 23:40:16, David Rientjes wrote:
> On Tue, 4 Jun 2013, Michal Hocko wrote:
>
> > > I'm not sure a userspace oom notifier would want to keep a
> > > preallocated buffer around that is mlocked in memory for all possible
> > > lengths of this file.
> >
> > Well, an oom handler which allocates memory under the same restricted
> > memory doesn't make much
On Tue 04-06-13 14:48:52, Johannes Weiner wrote:
> On Tue, Jun 04, 2013 at 11:17:49AM +0200, Michal Hocko wrote:
[...]
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 6dc1882..ff5e2d7 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -1815,7 +1815,7 @@ long __get_user_pages(struct
On Tue, Jun 04, 2013 at 11:17:49AM +0200, Michal Hocko wrote:
> On Mon 03-06-13 14:30:18, Johannes Weiner wrote:
> > On Mon, Jun 03, 2013 at 12:48:39PM -0400, Johannes Weiner wrote:
> > > On Mon, Jun 03, 2013 at 05:34:32PM +0200, Michal Hocko wrote:
> [...]
> > > > I am just afraid about all the other archs that do not support
> > > > (from quick grep it looks like: blackfin,
On Mon 03-06-13 14:17:54, David Rientjes wrote:
> On Mon, 3 Jun 2013, Michal Hocko wrote:
>
> > > What do you suggest when you read the "tasks" file and it returns -ENOMEM
> > > because kmalloc() fails because the userspace oom handler's memcg is also
> > > oom?
> >
> > That would require that you track kernel allocations which is currently
> > done only for
On Mon, Jun 03, 2013 at 12:09:22PM -0700, David Rientjes wrote:
> On Mon, 3 Jun 2013, Johannes Weiner wrote:
>
> > > It's not necessarily harder if you assign the userspace oom handlers to
> > > the root of your subtree with access to more memory than the children.
> > > There is no "inability" to write a proper handler, but when you have
> > > dozens of individual users
> From: Johannes Weiner
> Subject: [PATCH] memcg: do not sleep on OOM waitqueue with full charge context
>
> The memcg OOM handling is incredibly fragile because once a memcg goes
> OOM, one task (kernel or userspace) is responsible for resolving the
> situation. Every other task that gets
On Mon 03-06-13 11:18:09, David Rientjes wrote:
> On Sat, 1 Jun 2013, Michal Hocko wrote:
[...]
> > I still do not see why you cannot simply read the tasks file into a
> > preallocated buffer. This would be a few pages even for thousands of
> > pids. You do not have to track processes as they come and go.
On Mon, Jun 03, 2013 at 11:18:09AM -0700, David Rientjes wrote:
> On Sat, 1 Jun 2013, Michal Hocko wrote:
> > OK, I assume those groups are generally untrusted, right? So you cannot
> > let them register their oom handler even via an admin interface. This
> > makes it a bit complicated because it
On Mon, Jun 03, 2013 at 12:48:39PM -0400, Johannes Weiner wrote:
> On Mon, Jun 03, 2013 at 05:34:32PM +0200, Michal Hocko wrote:
> > On Sat 01-06-13 02:11:51, Johannes Weiner wrote:
> > [...]
> > > From: Johannes Weiner
> > > Subject: [PATCH] memcg: more robust oom handling
> > >
> > > The memcg OOM handling is incredibly fragile because once a memcg
> > > goes OOM, one task (kernel or userspace) is responsible for
> > > resolving the situation. Every
On Sat, 1 Jun 2013, Michal Hocko wrote:
> > Users obviously don't have the ability to attach processes to the root
> > memcg. They are constrained to their own subtree of memcgs.
>
> OK, I assume those groups are generally untrusted, right? So you cannot
> let them register their oom handler
On Fri, 31 May 2013, Andrew Morton wrote:
> > Admins may set the oom killer delay using the new interface:
> >
> > # echo 60000 > memory.oom_delay_millisecs
> >
> > This will defer oom killing to the kernel only after 60 seconds has
> > elapsed by putting the task to sleep for 60 seconds.
>
On Sat 01-06-13 02:11:51, Johannes Weiner wrote:
> @@ -2076,6 +2077,7 @@ static void memcg_wakeup_oom(struct mem_cgroup *memcg)
>  {
>  	/* for filtering, pass "memcg" as argument. */
>  	__wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg);
> +	atomic_inc(&memcg->oom_wakeups);
>  }
>
>  static
On Sat 01-06-13 02:11:51, Johannes Weiner wrote:
[...]
> I'm currently messing around with the below patch. When a task faults
> and charges under OOM, the memcg is remembered in the task struct and
> then made to sleep on the memcg's OOM waitqueue only after unwinding
> the page fault stack.
On Fri, May 31, 2013 at 12:29:17PM -0700, David Rientjes wrote:
> On Fri, 31 May 2013, Michal Hocko wrote:
> > > It's too easy to simply do even a "ps ax" in an oom memcg and make that
> > > thread completely unresponsive because it allocates memory.
> >
> > Yes, but we are talking about oom handler and that
On Wed, 29 May 2013 18:18:10 -0700 (PDT) David Rientjes wrote:
> Completely disabling the oom killer for a memcg is problematic if
> userspace is unable to address the condition itself, usually because it
> is unresponsive. This scenario creates a memcg deadlock: tasks are
> sitting in
On Fri 31-05-13 03:22:59, David Rientjes wrote:
> On Fri, 31 May 2013, Michal Hocko wrote:
[...]
> > > If the oom notifier is in the oom cgroup, it may not be able to
> > > successfully read the memcg "tasks" file to even determine the set of
> > > eligible processes.
> >
> > It would have to use
On Fri, 31 May 2013, Michal Hocko wrote:
> I have always discouraged people from running oom handler in the same
> memcg (or even in the same hierarchy).
>
We allow users to control their own memcgs by chowning them, so they must
be run in the same hierarchy if they want to run their own userspace oom
handler. There's nothing in the kernel that prevents that and the user
has no other option but to
On Wed 29-05-13 18:18:10, David Rientjes wrote:
> Completely disabling the oom killer for a memcg is problematic if
> userspace is unable to address the condition itself, usually because it
> is unresponsive.
Isn't this a bug in the userspace oom handler? Why is it unresponsive? It
shouldn't
Completely disabling the oom killer for a memcg is problematic if
userspace is unable to address the condition itself, usually because it
is unresponsive. This scenario creates a memcg deadlock: tasks are
sitting in TASK_KILLABLE waiting for the limit to be increased, a task to
exit or move, or