Hello, Tim.
On Thu, Dec 12, 2013 at 04:23:18PM -0800, Tim Hockin wrote:
> Just to be clear - I say this because it doesn't feel right to impose
> my craziness on others, and it sucks when we try and are met with
> "you're crazy, go away". And you have to admit that happens to
> Google. :)
Punching an
On Thu, Dec 12, 2013 at 11:23 AM, Tejun Heo wrote:
> Hello, Tim.
>
> On Thu, Dec 12, 2013 at 10:42:20AM -0800, Tim Hockin wrote:
>> Yeah sorry. Replying from my phone is awkward at best. I know better :)
>
> Heh, sorry about being bitchy. :)
>
>> In my mind, the ONLY point of pulling system-OOM
Hello, Tim.
On Thu, Dec 12, 2013 at 10:42:20AM -0800, Tim Hockin wrote:
> Yeah sorry. Replying from my phone is awkward at best. I know better :)
Heh, sorry about being bitchy. :)
> In my mind, the ONLY point of pulling system-OOM handling into
> userspace is to make it easier for crazy
On Thu, Dec 12, 2013 at 6:21 AM, Tejun Heo wrote:
> Hey, Tim.
>
> Sidenote: Please don't top-post with the whole body quoted below
> unless you're adding new cc's. Please selectively quote the original
> message's body to remind the readers of the context and reply below
> it. It's a basic lkml
On Thu 12-12-13 09:21:56, Tejun Heo wrote:
[...]
> There'd still be all the bells and whistles to configure and monitor
> system-level OOM and if there's justified need for improvements, we
> surely can and should do that;
You weren't on the CC of the original thread which has started here
Hello, Michal.
On Thu, Dec 12, 2013 at 05:32:22PM +0100, Michal Hocko wrote:
> You weren't on the CC of the original thread which has started here
> https://lkml.org/lkml/2013/11/19/191. And the original request for
> discussion was more about user defined _policies_ for the global
> OOM rather
Hey, Tim.
Sidenote: Please don't top-post with the whole body quoted below
unless you're adding new cc's. Please selectively quote the original
message's body to remind the readers of the context and reply below
it. It's a basic lkml etiquette and one with good reasons. If you
have to top-post
The immediate problem I see with setting aside reserves "off the top"
is that we don't really know a priori how much memory the kernel
itself is going to use, which could still land us in an overcommitted
state.
In other words, if I have your 128 MB machine, and I set aside 8 MB
for OOM handling,
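The worry above can be put in numbers. A throwaway sketch with the thread's figures (128 MB machine, 8 MB reserve); the kernel's own usage is invented here, which is exactly the point being made, since it isn't known a priori:

```shell
# 128 and 8 come from the thread; kernel_mb is a hypothetical figure --
# in reality it cannot be known in advance, so jobs_mb can't either.
total_mb=128
reserve_mb=8            # set aside "off the top" for the OOM handler
kernel_mb=30            # assumed kernel usage, purely for illustration
jobs_mb=$((total_mb - reserve_mb - kernel_mb))
echo "left for jobs: ${jobs_mb} MB"
```

If the kernel ends up using more than assumed, the sum of the job limits plus the reserve still exceeds what is actually available, i.e. the overcommitted state the mail describes.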
Yo,
On Tue, Dec 10, 2013 at 03:55:48PM -0800, David Rientjes wrote:
> > Well, the gotcha there is that you won't be able to do that with
> > system level OOM handler either unless you create a separately
> > reserved memory, which, again, can be achieved using hierarchical
> > memcg setup
On Tue, Dec 10, 2013 at 03:55:48PM -0800, David Rientjes wrote:
> > Okay, are you saying that userland OOM handlers will be able to dip
> > into kernel reserve memory? Maybe I'm mistaken but you realize that
> > that reserve is there to make things like task exits work under OOM
> > conditions,
On Tue, 10 Dec 2013, Tejun Heo wrote:
> > Indeed. The setup I'm specifically trying to attack is where the sum of
> > the limits of all non-oom handling memcgs (A/b in my model, A in yours)
> > exceed the amount of RAM. If the system has 256MB,
> >
> > /=256MB
> > A=126MB
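For concreteness, the layout being attacked could be materialized through the cgroup v1 memcg interface along these lines. Only the 256MB total and A=126MB appear in the thread; the sibling group B and its limit are hypothetical, picked so the limits sum past RAM:

```shell
# cgroup v1 memory controller, assumed mounted at the conventional path.
cg=/sys/fs/cgroup/memory

mkdir -p "$cg/A" "$cg/B"
echo $((126 << 20)) > "$cg/A/memory.limit_in_bytes"   # 126MB, from the thread
echo $((200 << 20)) > "$cg/B/memory.limit_in_bytes"   # hypothetical: 126 + 200 > 256MB RAM
```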
Hey, David.
On Mon, Dec 09, 2013 at 12:10:44PM -0800, David Rientjes wrote:
> Indeed. The setup I'm specifically trying to attack is where the sum of
> the limits of all non-oom handling memcgs (A/b in my model, A in yours)
> exceed the amount of RAM. If the system has 256MB,
>
>
On Mon, Dec 09, 2013 at 12:10:44PM -0800, David Rientjes wrote:
> On Fri, 6 Dec 2013, Tejun Heo wrote:
>
> > > Tejun, how are you?
> >
> > Doing pretty good. How's yourself? :)
> >
>
> Not bad, busy with holidays and all that.
>
> > > I agree that we wouldn't need such support if we are only
On Fri, 6 Dec 2013, Tejun Heo wrote:
> > Tejun, how are you?
>
> Doing pretty good. How's yourself? :)
>
Not bad, busy with holidays and all that.
> > I agree that we wouldn't need such support if we are only addressing memcg
> > oom conditions. We could do things like A/memory.limit_in_bytes ==
On Sat, Dec 07, 2013 at 10:12:19AM -0800, Tim Hockin wrote:
> You more or less described the fundamental change - a score per memcg, with
> a recursive OOM killer which evaluates scores between siblings at the same
> level.
>
> It gets a bit complicated because we have need of wider scoring ranges
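The scheme described, a score per memcg with victim selection recursing through siblings at each level, can be mocked up outside the kernel. A toy sketch over a directory tree, where each group directory carries a `score` file; the layout and file name are invented for illustration, not the actual interface:

```shell
# Pick an OOM victim group: at each level, compare scores between
# siblings, descend into the highest-scored one, and stop at a leaf.
pick_victim() {
    local dir="$1" best="" best_score=-1 child s
    for child in "$dir"/*/; do
        [ -d "$child" ] || continue
        child="${child%/}"
        s=$(cat "$child/score")
        if [ "$s" -gt "$best_score" ]; then
            best_score=$s
            best=$child
        fi
    done
    if [ -n "$best" ]; then
        pick_victim "$best"     # recurse into the winning sibling
    else
        echo "$dir"             # leaf group: this is the victim
    fi
}
```

Selection walks down from the root, so a high-scored parent shields its low-scored siblings but exposes whichever of its own children scores highest.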
Hello Tim!
On Sat, Dec 07, 2013 at 08:38:20AM -0800, Tim Hockin wrote:
> We actually started with kernel patches along these lines - per-memcg
> scores and all of our crazy policy requirements.
>
> It turns out that changing policies is hard.
>
> When David offered the opportunity to manage it
Yo, David.
On Thu, Dec 05, 2013 at 03:49:57PM -0800, David Rientjes wrote:
> Tejun, how are you?
Doing pretty good. How's yourself? :)
> > Umm.. without delving into details, aren't you basically creating a
> > memory cgroup inside a memory cgroup? Doesn't sound like a
> > particularly well
On Thu, Dec 05, 2013 at 03:49:57PM -0800, David Rientjes wrote:
> On Wed, 4 Dec 2013, Tejun Heo wrote:
>
> > Hello,
> >
>
> Tejun, how are you?
>
> > Umm.. without delving into details, aren't you basically creating a
> > memory cgroup inside a memory cgroup? Doesn't sound like a
> > particularly well
On Wed, 4 Dec 2013, Tejun Heo wrote:
> Hello,
>
Tejun, how are you?
> Umm.. without delving into details, aren't you basically creating a
> memory cgroup inside a memory cgroup? Doesn't sound like a
> particularly well thought-out plan to me.
>
I agree that we wouldn't need such support if
Hello,
On Wed, Dec 04, 2013 at 05:49:04PM -0800, David Rientjes wrote:
> That's not what this series is addressing, though, and in fact it's quite
> the opposite. It acknowledges that userspace oom handlers need to
> allocate and that anything else would be too difficult to maintain
>
On Wed, 4 Dec 2013, Johannes Weiner wrote:
> > Now that a per-process flag is available, define it for processes that
> > handle userspace oom notifications. This is an optimization to avoid
> > maintaining a list of such processes attached to a memcg at any given time
> > and iterating it at
On Tue, Dec 03, 2013 at 09:20:17PM -0800, David Rientjes wrote:
> Now that a per-process flag is available, define it for processes that
> handle userspace oom notifications. This is an optimization to avoid
> maintaining a list of such processes attached to a memcg at any given time
> and
Now that a per-process flag is available, define it for processes that
handle userspace oom notifications. This is an optimization to avoid
maintaining a list of such processes attached to a memcg at any given time
and iterating it at charge time.
This flag gets set whenever a process has