Re: [PATCH 0/4] use up highorder free pages before OOM

2016-10-11 Thread Michal Hocko
On Tue 11-10-16 14:06:43, Minchan Kim wrote:
> On Mon, Oct 10, 2016 at 09:47:31AM +0200, Michal Hocko wrote:
[...]
> > that close to OOM usually blows up later or starts thrashing very soon.
> > It is true that a particular workload might benefit from every last
> > allocatable page in the system, but it would be better to mention all
> > of that in the changelog.
> 
> I don't understand what phrase you really want me to include in the
> changelog. I will add the information that 30M of free pages were
> isolated before the 4K page allocation failure in the next version.
> If you want something else added, please say so.

Describe your use case where the additional 1% of memory allows a
sustainable workload without OOM. This is not usually the case, as I've
tried to explain, but it is true that the compression might change the
picture somehow. If your test case is artificial, try to explain how it
emulates a real workload, etc.
-- 
Michal Hocko
SUSE Labs


Re: [PATCH 0/4] use up highorder free pages before OOM

2016-10-10 Thread Minchan Kim
On Mon, Oct 10, 2016 at 09:47:31AM +0200, Michal Hocko wrote:
> On Sat 08-10-16 00:04:25, Minchan Kim wrote:
> [...]
> > I can show another log where the reserve is greater than 1%. See the
> > DMA32 zone free pages. It was a GFP_ATOMIC allocation, so it's different
> > from the one I posted, but the important thing is that the VM can reserve
> > more than 1% of memory via the race, which is really the point here.
> > 
> > in:imklog: page allocation failure: order:0, 
> > mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
> [...]
> > DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB 
> > (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
> > DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 
> > 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB
> 
> Yes, this sounds like a bug. Please add this information to the patch
> which aims to fix the misaccounting.

No problem.

> 
> > > So while I do agree that the potential issues - the misaccounting and
> > > the others you are addressing in the follow-up patches - are good to fix,
> > > I believe that draining the last 19M is not something that would reliably
> > > get you over the edge. Your workload (93% of memory sitting on the anon
> > > LRU with swap full) simply doesn't fit into the amount of memory you have
> > > available.
> > 
> > What happens if the workload fits into the additional 19M of memory?
> > I admit my testing was aimed at proving the problem, but with this
> > patchset there is no OOM killing while many pages are free, and the
> > number of OOMs was greatly reduced. It is definitely better than before.
> > 
> > Please don't ignore 1% of memory on an embedded system. 20M in a 2G
> > system, if we can use it for zram, is 60~80M of memory via compression.
> > You should know how many engineers seriously try to shave 1M off their
> > drivers to cut the cost of the product.
> 
> I am definitely not ignoring either embedded systems or the 1% of the
> memory that might really matter. I just wanted to point out that being

Whew, and I thought you were serious.

> that close to OOM usually blows up later or starts thrashing very soon.
> It is true that a particular workload might benefit from every last
> allocatable page in the system, but it would be better to mention all
> of that in the changelog.

I don't understand what phrase you really want me to include in the
changelog. I will add the information that 30M of free pages were
isolated before the 4K page allocation failure in the next version.
If you want something else added, please say so.

Thanks for the review, Michal.

> -- 
> Michal Hocko
> SUSE Labs


Re: [PATCH 0/4] use up highorder free pages before OOM

2016-10-10 Thread Michal Hocko
On Sat 08-10-16 00:04:25, Minchan Kim wrote:
[...]
> I can show another log where the reserve is greater than 1%. See the
> DMA32 zone free pages. It was a GFP_ATOMIC allocation, so it's different
> from the one I posted, but the important thing is that the VM can reserve
> more than 1% of memory via the race, which is really the point here.
> 
> in:imklog: page allocation failure: order:0, 
> mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[...]
> DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB 
> (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
> DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 
> 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB

Yes, this sounds like a bug. Please add this information to the patch
which aims to fix the misaccounting.

> > So while I do agree that the potential issues - the misaccounting and
> > the others you are addressing in the follow-up patches - are good to fix,
> > I believe that draining the last 19M is not something that would reliably
> > get you over the edge. Your workload (93% of memory sitting on the anon
> > LRU with swap full) simply doesn't fit into the amount of memory you have
> > available.
> 
> What happens if the workload fits into the additional 19M of memory?
> I admit my testing was aimed at proving the problem, but with this
> patchset there is no OOM killing while many pages are free, and the
> number of OOMs was greatly reduced. It is definitely better than before.
> 
> Please don't ignore 1% of memory on an embedded system. 20M in a 2G
> system, if we can use it for zram, is 60~80M of memory via compression.
> You should know how many engineers seriously try to shave 1M off their
> drivers to cut the cost of the product.

I am definitely not ignoring either embedded systems or the 1% of the
memory that might really matter. I just wanted to point out that being
that close to OOM usually blows up later or starts thrashing very soon.
It is true that a particular workload might benefit from every last
allocatable page in the system, but it would be better to mention all
of that in the changelog.
-- 
Michal Hocko
SUSE Labs


Re: [PATCH 0/4] use up highorder free pages before OOM

2016-10-07 Thread Minchan Kim
On Fri, Oct 07, 2016 at 11:16:26AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> > I got an OOM report from the production team with a v4.4 kernel.
> > It has enough free memory but fails to allocate an order-0 page and
> > finally hits an OOM kill.
> > I could easily reproduce it with my test. Look below.
> > The reason is that 19M of free pages in the DMA32 zone are reserved for
> > HIGHORDERATOMIC and are never unreserved before the OOM.
> 
> Is this really reproducible?

I can reproduce it within 1 hour.

> 
> [...]
> > active_anon:383949 inactive_anon:106724 isolated_anon:0
> >  active_file:15 inactive_file:44 isolated_file:0
> >  unevictable:0 dirty:0 writeback:24 unstable:0
> >  slab_reclaimable:2483 slab_unreclaimable:3326
> >  mapped:0 shmem:0 pagetables:1906 bounce:0
> >  free:6898 free_pcp:291 free_cma:0
> [...]
> > Free swap  = 8kB
> > Total swap = 255996kB
> > 524158 pages RAM
> > 0 pages HighMem/MovableOnly
> > 12658 pages reserved
> > 0 pages cma reserved
> > 0 pages hwpoisoned
> 
> From the above you can see that you are pretty much out of memory.
> There is basically no pagecache to reclaim and your anon memory is not
> reclaimable either because the swap is basically full. It is true that
> the high atomic reserves consume 19MB which could be reused, but this is
> less than 1%, especially when you compare it to the amount of reserved
> memory.

I can show another log where the reserve is greater than 1%. See the
DMA32 zone free pages. It was a GFP_ATOMIC allocation, so it's different
from the one I posted, but the important thing is that the VM can reserve
more than 1% of memory via the race, which is really the point here.
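
As a rough check against the dump below (assuming the intended
highatomic cap is about 1% of a zone's managed memory): DMA32 shows
managed:2030132kB, so 1% is roughly 20300kB, while the (H) free pages in
its buddy lists sum to 30128kB, i.e. about 1.5% of the zone.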

in:imklog: page allocation failure: order:0, 
mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
CPU: 0 PID: 476 Comm: in:imklog Tainted: GE   
4.8.0-rc7-00217-g266ef83c51e5-dirty #3135
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Ubuntu-1.8.2-1ubuntu1 04/01/2014
  880077c37590 81389033 
  880077c37618 8117519b 02280020
  81cedb40  0040
Call Trace:
 [] dump_stack+0x63/0x90
 [] warn_alloc_failed+0xdb/0x130
 [] __alloc_pages_nodemask+0x4d6/0xdb0
 [] ? bdev_write_page+0xa9/0xd0
 [] ? __page_check_address+0xd3/0x130
 [] ? deactivate_slab+0x12a/0x3e0
 [] new_slab+0x339/0x490
 [] ___slab_alloc.constprop.74+0x367/0x480
 [] ? alloc_indirect.isra.14+0x1d/0x50
 [] ? default_wake_function+0x12/0x20
 [] __slab_alloc.constprop.73+0x20/0x40
 [] __kmalloc+0x1a4/0x1e0
 [] alloc_indirect.isra.14+0x1d/0x50
 [] virtqueue_add_sgs+0x1c4/0x470
 [] ? __bt_get.isra.8+0xe5/0x1c0
 [] __virtblk_add_req+0xae/0x1f0
 [] ? wake_atomic_t_function+0x60/0x60
 [] ? sched_clock+0x9/0x10
 [] ? __blk_mq_alloc_request+0x10b/0x230
 [] ? blk_rq_map_sg+0x213/0x550
 [] virtio_queue_rq+0x12d/0x290
 [] __blk_mq_run_hw_queue+0x239/0x370
 [] blk_mq_run_hw_queue+0x8f/0xb0
 [] blk_mq_insert_requests+0x18c/0x1a0
 [] blk_mq_flush_plug_list+0x125/0x140
 [] blk_flush_plug_list+0xc7/0x220
 [] blk_finish_plug+0x2c/0x40
 [] __do_page_cache_readahead+0x196/0x230
 [] ? zram_free_page+0x3a/0xb0 [zram]
 [] filemap_fault+0x448/0x4f0
 [] ? alloc_set_pte+0xe4/0x350
 [] ext4_filemap_fault+0x36/0x50
 [] __do_fault+0x75/0x140
 [] handle_mm_fault+0x84d/0xbe0
 [] ? kmsg_read+0x44/0x60
 [] __do_page_fault+0x1dd/0x4d0
 [] trace_do_page_fault+0x43/0x130
 [] do_async_page_fault+0x1a/0xa0
 [] async_page_fault+0x28/0x30
Mem-Info:
active_anon:363826 inactive_anon:121283 isolated_anon:32
 active_file:65 inactive_file:152 isolated_file:0
 unevictable:0 dirty:0 writeback:46 unstable:0
 slab_reclaimable:2778 slab_unreclaimable:3070
 mapped:112 shmem:0 pagetables:1822 bounce:0
 free:9469 free_pcp:231 free_cma:0
Node 0 active_anon:1455304kB inactive_anon:485132kB active_file:260kB 
inactive_file:608kB unevictable:0kB isolated(anon):128kB isolated(file):0kB 
mapped:448kB dirty:0kB writeback:184kB shmem:0kB writeback_tmp:0kB unstable:0kB 
pages_scanned:13641 all_unreclaimable? no
DMA free:7748kB min:44kB low:56kB high:68kB active_anon:7944kB 
inactive_anon:104kB active_file:0kB inactive_file:0kB unevictable:0kB 
writepending:0kB present:15992kB managed:15908kB mlocked:0kB 
slab_reclaimable:0kB slab_unreclaimable:108kB kernel_stack:0kB pagetables:4kB 
bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:30128kB min:5628kB low:7624kB high:9620kB active_anon:1447360kB 
inactive_anon:485028kB active_file:260kB inactive_file:608kB unevictable:0kB 
writepending:184kB present:2080640kB managed:2030132kB mlocked:0kB 
slab_reclaimable:2kB slab_unreclaimable:12172kB kernel_stack:2400kB 
pagetables:7284kB bounce:0kB free_pcp:924kB local_pcp:72kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 
0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 
2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB

Re: [PATCH 0/4] use up highorder free pages before OOM

2016-10-07 Thread Michal Hocko
On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> I got an OOM report from the production team with a v4.4 kernel.
> It has enough free memory but fails to allocate an order-0 page and
> finally hits an OOM kill.
> I could easily reproduce it with my test. Look below.
> The reason is that 19M of free pages in the DMA32 zone are reserved for
> HIGHORDERATOMIC and are never unreserved before the OOM.

Is this really reproducible?

[...]
> active_anon:383949 inactive_anon:106724 isolated_anon:0
>  active_file:15 inactive_file:44 isolated_file:0
>  unevictable:0 dirty:0 writeback:24 unstable:0
>  slab_reclaimable:2483 slab_unreclaimable:3326
>  mapped:0 shmem:0 pagetables:1906 bounce:0
>  free:6898 free_pcp:291 free_cma:0
[...]
> Free swap  = 8kB
> Total swap = 255996kB
> 524158 pages RAM
> 0 pages HighMem/MovableOnly
> 12658 pages reserved
> 0 pages cma reserved
> 0 pages hwpoisoned

From the above you can see that you are pretty much out of memory.
There is basically no pagecache to reclaim and your anon memory is not
reclaimable either because the swap is basically full. It is true that
the high atomic reserves consume 19MB which could be reused, but this is
less than 1%, especially when you compare it to the amount of reserved
memory.

So while I do agree that the potential issues - the misaccounting and
the others you are addressing in the follow-up patches - are good to fix,
I believe that draining the last 19M is not something that would reliably
get you over the edge. Your workload (93% of memory sitting on the anon
LRU with swap full) simply doesn't fit into the amount of memory you have
available.
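
For reference, from the meminfo quoted above: active_anon:383949 +
inactive_anon:106724 = 490673 anon pages out of 524158 pages of RAM,
i.e. roughly 93-94% of RAM.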
-- 
Michal Hocko
SUSE Labs


[PATCH 0/4] use up highorder free pages before OOM

2016-10-06 Thread Minchan Kim
I got an OOM report from the production team with a v4.4 kernel.
It has enough free memory but fails to allocate an order-0 page and
finally hits an OOM kill.
I could easily reproduce it with my test. Look below.
The reason is that 19M of free pages in the DMA32 zone are reserved for
HIGHORDERATOMIC and are never unreserved before the OOM.
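
Concretely, in the dump below DMA32 shows free:19404kB against
min:5628kB, yet its buddy lists are almost entirely (H) highatomic
blocks, which a plain order-0 GFP_HIGHUSER_MOVABLE request is not
allowed to dip into, so the allocation fails even though the free
counter looks healthy.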

balloon invoked oom-killer: 
gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0
balloon cpuset=/ mems_allowed=0
CPU: 1 PID: 8473 Comm: balloon Tainted: GW  OE   
4.8.0-rc7-00219-g3f74c9559583-dirty #3161
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Ubuntu-1.8.2-1ubuntu1 04/01/2014
  88007f15bbc8 8138eb13 88007f15bd88
 88005a72a4c0 88007f15bc28 811d2d13 88007f15bc08
 8146a5ca 81c8df60 0015 0206
Call Trace:
 [] dump_stack+0x63/0x90
 [] dump_header+0x5c/0x1ce
 [] ? virtballoon_oom_notify+0x2a/0x80
 [] oom_kill_process+0x22e/0x400
 [] out_of_memory+0x1ac/0x210
 [] __alloc_pages_nodemask+0x101e/0x1040
 [] handle_mm_fault+0xa0a/0xbf0
 [] __do_page_fault+0x1dd/0x4d0
 [] trace_do_page_fault+0x43/0x130
 [] do_async_page_fault+0x1a/0xa0
 [] async_page_fault+0x28/0x30
Mem-Info:
active_anon:383949 inactive_anon:106724 isolated_anon:0
 active_file:15 inactive_file:44 isolated_file:0
 unevictable:0 dirty:0 writeback:24 unstable:0
 slab_reclaimable:2483 slab_unreclaimable:3326
 mapped:0 shmem:0 pagetables:1906 bounce:0
 free:6898 free_pcp:291 free_cma:0
Node 0 active_anon:1535796kB inactive_anon:426896kB active_file:60kB 
inactive_file:176kB unevictable:0kB isolated(anon):0kB isolated(file):0kB 
mapped:0kB dirty:0kB writeback:96kB shmem:0kB writeback_tmp:0kB unstable:0kB 
pages_scanned:1418 all_unreclaimable? no
DMA free:8188kB min:44kB low:56kB high:68kB active_anon:7648kB 
inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB 
writepending:0kB present:15992kB managed:15908kB mlocked:0kB 
slab_reclaimable:0kB slab_unreclaimable:20kB kernel_stack:0kB pagetables:0kB 
bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:19404kB min:5628kB low:7624kB high:9620kB active_anon:1528148kB 
inactive_anon:426896kB active_file:60kB inactive_file:420kB unevictable:0kB 
writepending:96kB present:2080640kB managed:2030092kB mlocked:0kB 
slab_reclaimable:9932kB slab_unreclaimable:13284kB kernel_stack:2496kB 
pagetables:7624kB bounce:0kB free_pcp:900kB local_pcp:112kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 
2*4096kB (H) = 8192kB
DMA32: 7*4kB (H) 8*8kB (H) 30*16kB (H) 31*32kB (H) 14*64kB (H) 9*128kB (H) 
2*256kB (H) 2*512kB (H) 4*1024kB (H) 5*2048kB (H) 0*4096kB = 19484kB
51131 total pagecache pages
50795 pages in swap cache
Swap cache stats: add 3532405601, delete 3532354806, find 124289150/1822712228
Free swap  = 8kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12658 pages reserved
0 pages cma reserved
0 pages hwpoisoned

During the investigation, I found some problems with highatomic
reserves, so this patchset aims to solve them; the final goal is to
unreserve every highatomic free page before the OOM kill.

Patch 1 fixes an accounting bug in several places in the page allocator.
Patch 2 fixes an accounting bug caused by a subtle race between the
freeing function and unreserve_highatomic_pageblock.
Patch 3 changes the unreserve scheme to use up every reserved page (a
rough sketch of the idea follows below).
Patch 4 fixes an accounting bug caused by a mem_section shared by two
zones.
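
A rough sketch of the idea behind patch 3, written as a stand-alone
user-space model rather than the actual mm/page_alloc.c change; the
names zone_model, unreserve_one_pageblock and drain_highatomic_before_oom
are illustrative only:

#include <stdbool.h>
#include <stdio.h>

struct zone_model {
	unsigned long nr_reserved_highatomic;	/* kB accounted as highatomic reserve */
};

/* Illustrative: give one reserved pageblock back to the normal free lists. */
static bool unreserve_one_pageblock(struct zone_model *z, unsigned long pageblock_kb)
{
	if (z->nr_reserved_highatomic < pageblock_kb)
		return false;
	z->nr_reserved_highatomic -= pageblock_kb;
	return true;
}

/*
 * Instead of unreserving at most one pageblock per reclaim retry, keep
 * draining until the highatomic reserve is empty, so no free highatomic
 * page is left behind when the OOM killer is about to be invoked.
 */
static bool drain_highatomic_before_oom(struct zone_model *z, unsigned long pageblock_kb)
{
	bool drained = false;

	while (unreserve_one_pageblock(z, pageblock_kb))
		drained = true;		/* caller retries the allocation if true */

	return drained;
}

int main(void)
{
	/* 19404kB of highatomic reserve in DMA32, as in the report above. */
	struct zone_model dma32 = { .nr_reserved_highatomic = 19404 };

	if (drain_highatomic_before_oom(&dma32, 2048))
		printf("reserve drained, retry allocation instead of OOM (left: %lukB)\n",
		       dma32.nr_reserved_highatomic);
	return 0;
}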

Minchan Kim (4):
  mm: adjust reserved highatomic count
  mm: prevent double decrease of nr_reserved_highatomic
  mm: unreserve highatomic free pages fully before OOM
  mm: skip to reserve pageblock crossed zone boundary for HIGHATOMIC

 mm/page_alloc.c | 143 ++--
 1 file changed, 118 insertions(+), 25 deletions(-)

-- 
2.7.4


