On Thu 29-10-15 16:17:13, mho...@kernel.org wrote:
[...]
> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> 	if (gfp_mask & __GFP_NORETRY)
> 		goto noretry;
>
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +
> On Fri 30-10-15 09:36:26, Michal Hocko wrote:
> > On Fri 30-10-15 12:10:15, Hillf Danton wrote:
> > [...]
> > > > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > > > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> > > > +		unsigned long reclaimable;
On Fri 30-10-15 22:32:27, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > +	target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
> 	target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
Ohh, we have a macro for that. Good to know. Thanks.
Michal Hocko wrote:
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and
On Fri 30-10-15 18:41:30, KAMEZAWA Hiroyuki wrote:
[...]
> >> So, now, 0-order page allocation may fail in an OOM situation?
> >
> > No, they don't normally, and this patch doesn't change the logic here.
> >
>
> I understand your patch doesn't change the behavior.
> Looking into
On 2015/10/30 0:17, mho...@kernel.org wrote:
> From: Michal Hocko
>
> __alloc_pages_slowpath has traditionally relied on the direct reclaim
> and did_some_progress as an indicator that it makes sense to retry
> allocation rather than declaring OOM. shrink_zones had to rely on
> zone_reclaimable if shrink_zone didn't make any progress to prevent from
> [...]
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct
> + * reclaim cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF
> + * of the reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF	16
> +
>  static inline struct page *