Michal Hocko wrote:
> On Thu 30-04-15 18:44:25, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > I mean we should eventually fail all the allocation types but GFP_NOFS
> > > is coming from _carefully_ handled code paths which is an easier starting
> > > point than a random code path in the kernel/drivers. So can we finally
> > > move at least in this direction?
On Mon 04-05-15 14:02:10, Johannes Weiner wrote:
> Hi Andrew,
>
> since patches 8 and 9 are still controversial, would you mind picking
> up just 1-7 for now? They're cleanups nice to have on their own.

Completely agreed.
--
Michal Hocko
SUSE Labs
Hi Andrew,

since patches 8 and 9 are still controversial, would you mind picking
up just 1-7 for now? They're cleanups nice to have on their own.

Thanks,
Johannes

On Mon, Apr 27, 2015 at 03:05:46PM -0400, Johannes Weiner wrote:
> There is a possible deadlock scenario between the page allocator
Michal Hocko wrote:
> I mean we should eventually fail all the allocation types but GFP_NOFS
> is coming from _carefully_ handled code paths which is an easier starting
> point than a random code path in the kernel/drivers. So can we finally
> move at least in this direction?

I agree that all the
On Thu 30-04-15 02:27:44, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Wed 29-04-15 08:55:06, Johannes Weiner wrote:
> > > What we can do to mitigate this is tie the timeout to the setting of
> > > TIF_MEMDIE so that the wait is not 5s from the point of calling
> > > out_of_memory() but from the point of where TIF_MEMDIE was set.
> > > Subsequent allocations
On Wed 29-04-15 08:55:06, Johannes Weiner wrote:
> On Wed, Apr 29, 2015 at 12:50:37AM +0900, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > On Tue 28-04-15 19:34:47, Tetsuo Handa wrote:
> > > [...]
> > > > [PATCH 8/9] makes the speed of allocating __GFP_FS pages extremely slow (5
> > > > seconds / page) because out_of_memory() serialized by the oom_lock sleeps for
> > > > 5 seconds before returning true when the OOM victim got stuck. This throttling
> > > > also slows
Johannes Weiner wrote:
> There is a possible deadlock scenario between the page allocator and
> the OOM killer. Most allocations currently retry forever inside the
> page allocator, but when the OOM killer is invoked the chosen victim
> might try taking locks held by the allocating task. This series, on
> top of many cleanups