Re: Suspicious error for CMA stress test

2016-03-23 Thread Joonsoo Kim
2016-03-23 17:26 GMT+09:00 Vlastimil Babka :
> On 03/23/2016 05:44 AM, Joonsoo Kim wrote:
>>>
>>>
>>> Fixes: 3c605096d315 ("mm/page_alloc: restrict max order of merging on
>>> isolated pageblock")
>>> Link: https://lkml.org/lkml/2016/3/2/280
>>> Reported-by: Hanjun Guo 
>>> Debugged-by: Laura Abbott 
>>> Debugged-by: Joonsoo Kim 
>>> Signed-off-by: Vlastimil Babka 
>>> Cc:  # 3.18+
>>> ---
>>>   mm/page_alloc.c | 46 +-
>>>   1 file changed, 33 insertions(+), 13 deletions(-)
>>
>>
>> Acked-by: Joonsoo Kim 
>>
>> Thanks for taking care of this issue!
>
>
> Thanks for the review. But I'm now not sure whether we push this to
> mainline+stable now, and later replace it with Lucas' approach, or whether that
> approach would also be suitable and non-disruptive enough for stable?

Lucas' approach is an improvement and would be more complex than this
one. I don't think it would be appropriate for stable. IMO, it's better
to push this to mainline + stable now.

Thanks.


Re: Suspicious error for CMA stress test

2016-03-23 Thread Vlastimil Babka

On 03/23/2016 05:44 AM, Joonsoo Kim wrote:


Fixes: 3c605096d315 ("mm/page_alloc: restrict max order of merging on isolated 
pageblock")
Link: https://lkml.org/lkml/2016/3/2/280
Reported-by: Hanjun Guo 
Debugged-by: Laura Abbott 
Debugged-by: Joonsoo Kim 
Signed-off-by: Vlastimil Babka 
Cc:  # 3.18+
---
  mm/page_alloc.c | 46 +-
  1 file changed, 33 insertions(+), 13 deletions(-)


Acked-by: Joonsoo Kim 

Thanks for taking care of this issue!


Thanks for the review. But I'm now not sure whether we push this to
mainline+stable now, and later replace it with Lucas' approach, or whether
that approach would also be suitable and non-disruptive enough for stable?




Re: Suspicious error for CMA stress test

2016-03-22 Thread Joonsoo Kim
On Fri, Mar 18, 2016 at 03:10:09PM +0100, Vlastimil Babka wrote:
> On 03/17/2016 04:52 PM, Joonsoo Kim wrote:
> > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> >>
> >> Okay. I used the following slightly optimized version and I need to
> >> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> >> to yours. Please consider it, too.
> >
> > Hmm, this one does not work; I can still see the bug after applying
> > this patch. Did I miss something?
> 
>  I may find that there is a bug which was introduced by me some time
>  ago. Could you test following change in __free_one_page() on top of
>  Vlastimil's patch?
> 
>  -page_idx = pfn & ((1 << max_order) - 1);
>  +page_idx = pfn & ((1 << MAX_ORDER) - 1);
> >>>
> >>>
> >>> I tested Vlastimil's patch + your change with stress for more than half
> >>> an hour, and the bug I reported is gone :)
> >>
> >>
> >> Oh, ok, will try to send proper patch, once I figure out what to write in
> >> the changelog :)
> > 
> > Thanks in advance!
> > 
> 
> OK, here it is. Hanjun can you please retest this, as I'm not sure if you had
> the same code due to the followup one-liner patches in the thread. Lucas, see
> if it helps with your issue as well. Laura and Joonsoo, please also test and
> review and check changelog if my perception of the problem is accurate :)
> 
> Thanks
> 
> 8<
> From: Vlastimil Babka 
> Date: Fri, 18 Mar 2016 14:22:31 +0100
> Subject: [PATCH] mm/page_alloc: prevent merging between isolated and other
>  pageblocks
> 
> Hanjun Guo has reported that a CMA stress test causes broken accounting of
> CMA and free pages:
> 
> > Before the test, I got:
> > -bash-4.3# cat /proc/meminfo | grep Cma
> > CmaTotal: 204800 kB
> > CmaFree:  195044 kB
> >
> >
> > After running the test:
> > -bash-4.3# cat /proc/meminfo | grep Cma
> > CmaTotal: 204800 kB
> > CmaFree: 6602584 kB
> >
> > So the freed CMA memory is more than total..
> >
> > Also the MemFree is more than the mem total:
> >
> > -bash-4.3# cat /proc/meminfo
> > MemTotal:   16342016 kB
> > MemFree:22367268 kB
> > MemAvailable:   22370528 kB
> 
> Laura Abbott has confirmed the issue and suspected the freepage accounting
> rewrite around 3.18/4.0 by Joonsoo Kim. Joonsoo had a theory that this is
> caused by unexpected merging between MIGRATE_ISOLATE and MIGRATE_CMA
> pageblocks:
> 
> > CMA isolates MAX_ORDER aligned blocks, but, during the process,
> > a partially isolated block exists. If MAX_ORDER is 11 and
> > pageblock_order is 9, two pageblocks make up a MAX_ORDER
> > aligned block and I can think of the following scenario because pageblock
> > (un)isolation would be done one by one.
> >
> > (Each character means one pageblock. 'C' and 'I' mean MIGRATE_CMA and
> > MIGRATE_ISOLATE, respectively.)
> >
> > CC -> IC -> II (Isolation)
> > II -> CI -> CC (Un-isolation)
> >
> > If some pages are freed at this intermediate state such as IC or CI,
> > that page could be merged to the other page that is resident on
> > different type of pageblock and it will cause wrong freepage count.
> 
> This was supposed to be prevented by CMA operating on MAX_ORDER blocks, but
> since it doesn't hold the zone->lock between pageblocks, a race window does
> exist.
> 
> It's also likely that unexpected merging can occur between MIGRATE_ISOLATE
> and non-CMA pageblocks. This should be prevented in __free_one_page() since
> commit 3c605096d315 ("mm/page_alloc: restrict max order of merging on isolated
> pageblock"). However, we only check the migratetype of the pageblock where
> buddy merging has been initiated, not the migratetype of the buddy pageblock
> (or group of pageblocks) which can be MIGRATE_ISOLATE.
> 
> Joonsoo has suggested checking for buddy migratetype as part of
> page_is_buddy(), but that would add extra checks in allocator hotpath and
> bloat-o-meter has shown significant code bloat (the function is inline).
> 
> This patch reduces the bloat at some expense of more complicated code. The
> buddy-merging while-loop in __free_one_page() is initially bounded to
> pageblock_order and without any migratetype checks. The checks are placed
> outside, bumping the max_order if merging is allowed, and returning to the
> while-loop with a statement which can't be possibly considered harmful.
> 
> This fixes the accounting bug and also removes the arguably weird state in the
> original commit 3c605096d315 where buddies could be left unmerged.
> 
> Fixes: 3c605096d315 ("mm/page_alloc: restrict max order of merging on 
> isolated pageblock")
> Link: https://lkml.org/lkml/2016/3/2/280
> Reported-by: Hanjun Guo 
> Debugged-by: Laura Abbott 
> Debugged-by: Joonsoo Kim 
> Signed-off-by: Vlastimil Babka 
> Cc:  # 3.18+
> ---
>  mm/page_alloc.c | 46 +-
>  1 file changed, 33 insertions(+), 13 deletions(-)

Acked-by: Joonsoo Kim 

Thanks for taking care of this issue!

Thanks.



Re: Suspicious error for CMA stress test

2016-03-22 Thread Joonsoo Kim
On Tue, Mar 22, 2016 at 03:56:46PM +0100, Lucas Stach wrote:
> Am Montag, den 21.03.2016, 13:42 +0900 schrieb Joonsoo Kim:
> > On Fri, Mar 18, 2016 at 02:32:35PM +0100, Lucas Stach wrote:
> > > Hi Vlastimil, Joonsoo,
> > > 
> > > On Friday, 2016-03-18 at 00:52 +0900, Joonsoo Kim wrote:
> > > > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> > > > > On 03/17/2016 10:24 AM, Hanjun Guo wrote:
> > > > >>
> > > > >> On 2016/3/17 14:54, Joonsoo Kim wrote:
> > > > >>>
> > > > >>> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
> > > > 
> > > >  On 2016/3/14 15:18, Joonsoo Kim wrote:
> > > > >
> > > > > On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> > > > >>
> > > > >> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> > > > >>>
> > > > >>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> > > > 
> > > >  On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> > > > 
> > > >  How about something like this? Just an idea, probably buggy
> > > >  (off-by-one etc.).
> > > >  Should keep away cost from <pageblock_order iterations at the
> > > >  expense of the relatively fewer >pageblock_order iterations.
> > > > >>>
> > > > >>> Hmm... I tested this and found that its code size is a little bit
> > > > >>> larger than mine. I'm not sure exactly why this happens, but I guess
> > > > >>> it would be related to compiler optimization. In this case, I'm in
> > > > >>> favor of my implementation because it looks like a good abstraction.
> > > > >>> It adds one unlikely branch to the merge loop but the compiler would
> > > > >>> optimize it to check it once.
> > > > >>
> > > > >> I would be surprised if the compiler optimized that to check it once,
> > > > >> as order increases with each loop iteration. But maybe it's smart
> > > > >> enough to do something like I did by hand? Guess I'll check the
> > > > >> disassembly.
> > > > >
> > > > > Okay. I used the following slightly optimized version and I need to
> > > > > add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> > > > > to yours. Please consider it, too.
> > > > 
> > > >  Hmm, this one does not work; I can still see the bug after applying
> > > >  this patch. Did I miss something?
> > > > >>>
> > > > >>> I may find that there is a bug which was introduced by me some time
> > > > >>> ago. Could you test following change in __free_one_page() on top of
> > > > >>> Vlastimil's patch?
> > > > >>>
> > > > >>> -page_idx = pfn & ((1 << max_order) - 1);
> > > > >>> +page_idx = pfn & ((1 << MAX_ORDER) - 1);
> > > > >>
> > > > >>
> > > > >> I tested Vlastimil's patch + your change with stress for more than
> > > > >> half an hour, and the bug I reported is gone :)
> > > > >
> > > > >
> > > > > Oh, ok, will try to send proper patch, once I figure out what to 
> > > > > write in
> > > > > the changelog :)
> > > > 
> > > > Thanks in advance!
> > > 
> > > After digging into the "PFN busy" race in CMA (see [1]), I believe we
> > > should just prevent any buddy merging in isolated ranges. This fixes the
> > > race I'm seeing without the need to hold the zone lock for extended
> > > periods of time.
> > 
> > "PFNs busy" can be caused by other types of races, too. I guess those
> > other cases happen more often than buddy merging. Do you have any test
> > case for your problem?
> > 
> I don't have any specific test case, but the etnaviv driver manages to
> hit this race quite often. That's because we allocate/free a large
> number of relatively small buffers from CMA, where allocation and free
> regularly happen on different CPUs.
> 
> So while we also have cases where the "PFN busy" is triggered by other
> factors, like pages locked for get_user_pages(), this race is the number
> one source of CMA retries in my workload.
> 
> > If it is indeed a problem, you can avoid it with a simple retry,
> > MAX_ORDER times, on alloc_contig_range(). This is rather dirty, but
> > the reason I suggest it is that there are other types of races in
> > __alloc_contig_range() and a retry could help them, too. For example,
> > if some of the pages in the requested range aren't attached to the LRU
> > yet, or are detached from the LRU but not yet freed to the buddy
> > allocator, test_pages_isolated() can fail.
> 
> While a retry makes sense (if at all just to avoid a CMA allocation
> failure under CMA pressure), I would like to avoid the associated
> overhead for the common path where CMA is just racing with itself. The
> retry should only be needed in situations where we don't have any means
> to control the race, like a concurrent GUP.

Makes sense. When I tried to fix the merging issue previously, I worried
about side-effects of unmerged buddies, so I tried to reduce unmerged
buddies as much as possible.

Re: Suspicious error for CMA stress test

2016-03-22 Thread Lucas Stach
On Monday, 2016-03-21 at 13:42 +0900, Joonsoo Kim wrote:
> On Fri, Mar 18, 2016 at 02:32:35PM +0100, Lucas Stach wrote:
> > Hi Vlastimil, Joonsoo,
> > 
> > On Friday, 2016-03-18 at 00:52 +0900, Joonsoo Kim wrote:
> > > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> > > > On 03/17/2016 10:24 AM, Hanjun Guo wrote:
> > > >>
> > > >> On 2016/3/17 14:54, Joonsoo Kim wrote:
> > > >>>
> > > >>> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
> > > 
> > >  On 2016/3/14 15:18, Joonsoo Kim wrote:
> > > >
> > > > On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> > > >>
> > > >> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> > > >>>
> > > >>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> > > 
> > >  On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> > > 
> > >  How about something like this? Just an idea, probably buggy
> > >  (off-by-one etc.).
> > >  Should keep away cost from <pageblock_order iterations at the
> > >  expense of the relatively fewer >pageblock_order iterations.
> > > >>>
> > > >>> Hmm... I tested this and found that its code size is a little bit
> > > >>> larger than mine. I'm not sure exactly why this happens, but I guess
> > > >>> it would be related to compiler optimization. In this case, I'm in
> > > >>> favor of my implementation because it looks like a good abstraction.
> > > >>> It adds one unlikely branch to the merge loop but the compiler would
> > > >>> optimize it to check it once.
> > > >>
> > > >> I would be surprised if the compiler optimized that to check it once,
> > > >> as order increases with each loop iteration. But maybe it's smart
> > > >> enough to do something like I did by hand? Guess I'll check the
> > > >> disassembly.
> > > >
> > > > Okay. I used the following slightly optimized version and I need to
> > > > add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> > > > to yours. Please consider it, too.
> > > 
> > >  Hmm, this one does not work; I can still see the bug after applying
> > >  this patch. Did I miss something?
> > > >>>
> > > >>> I may find that there is a bug which was introduced by me some time
> > > >>> ago. Could you test following change in __free_one_page() on top of
> > > >>> Vlastimil's patch?
> > > >>>
> > > >>> -page_idx = pfn & ((1 << max_order) - 1);
> > > >>> +page_idx = pfn & ((1 << MAX_ORDER) - 1);
> > > >>
> > > >>
> > > >> I tested Vlastimil's patch + your change with stress for more than
> > > >> half an hour, and the bug I reported is gone :)
> > > >
> > > >
> > > > Oh, ok, will try to send proper patch, once I figure out what to write 
> > > > in
> > > > the changelog :)
> > > 
> > > Thanks in advance!
> > 
> > After digging into the "PFN busy" race in CMA (see [1]), I believe we
> > should just prevent any buddy merging in isolated ranges. This fixes the
> > race I'm seeing without the need to hold the zone lock for extended
> > periods of time.
> 
> "PFNs busy" can be caused by other types of races, too. I guess those
> other cases happen more often than buddy merging. Do you have any test
> case for your problem?
> 
I don't have any specific test case, but the etnaviv driver manages to
hit this race quite often. That's because we allocate/free a large
number of relatively small buffers from CMA, where allocation and free
regularly happen on different CPUs.

So while we also have cases where the "PFN busy" is triggered by other
factors, like pages locked for get_user_pages(), this race is the number
one source of CMA retries in my workload.

> If it is indeed a problem, you can avoid it with a simple retry,
> MAX_ORDER times, on alloc_contig_range(). This is rather dirty, but
> the reason I suggest it is that there are other types of races in
> __alloc_contig_range() and a retry could help them, too. For example,
> if some of the pages in the requested range aren't attached to the LRU
> yet, or are detached from the LRU but not yet freed to the buddy
> allocator, test_pages_isolated() can fail.

While a retry makes sense (if at all just to avoid a CMA allocation
failure under CMA pressure), I would like to avoid the associated
overhead for the common path where CMA is just racing with itself. The
retry should only be needed in situations where we don't have any means
to control the race, like a concurrent GUP.

Regards,
Lucas




Re: Suspicious error for CMA stress test

2016-03-22 Thread Lucas Stach
On Friday, 2016-03-18 at 21:58 +0100, Vlastimil Babka wrote:
> On 03/18/2016 03:42 PM, Lucas Stach wrote:
> > On Friday, 2016-03-18 at 15:10 +0100, Vlastimil Babka wrote:
> >> On 03/17/2016 04:52 PM, Joonsoo Kim wrote:
> >> > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> >>
> >> OK, here it is. Hanjun can you please retest this, as I'm not sure if you 
> >> had
> >> the same code due to the followup one-liner patches in the thread. Lucas, 
> >> see if
> >> it helps with your issue as well. Laura and Joonsoo, please also test and 
> >> review
> >> and check changelog if my perception of the problem is accurate :)
> >>
> >
> > This doesn't help for my case, as it is still trying to merge pages in
> > isolated ranges. It even tries extra hard at doing so.
> >
> > With concurrent isolation and frees going on this may lead to the start
> > page of the range to be isolated merging into a higher order buddy page
> > if it isn't already pageblock aligned, leading both test_pages_isolated
> > and isolate_freepages to fail on an otherwise perfectly fine range.
> >
> > What I am arguing is that if a page is freed into an isolated range we
> > should not try to merge it with its buddies at all, by setting max_order =
> > order. If the range is isolated because we want to isolate freepages from
> > it, the work to do the merging is wasted, as isolate_freepages will
> > split higher order pages into order-0 pages again.
> >
> > If we already finished isolating freepages and are in the process of
> > undoing the isolation, we don't strictly need to do the merging in
> > __free_one_page, but can defer it to unset_migratetype_isolate, allowing
> > to simplify those code paths by disallowing any merging of isolated
> > pages at all.
> 
> Oh, I think I understand now. Yeah, skipping merging for pages in isolated
> pageblocks might be a rather elegant solution. But still, we would have to 
> check 
> buddy's migratetype at order >= pageblock_order like my patch does, which is 
> annoying. Because even without isolated merging, the buddy might have already 
> had order>=pageblock_order when it was isolated.

> So what if isolation also split existing buddies in the pageblock immediately 
> when it sets the MIGRATETYPE_ISOLATE on the pageblock? Then we would have it 
> guaranteed that there's no isolated buddy - a buddy candidate at order >= 
> pageblock_order either has a smaller order (so it's not a buddy) or is not 
> MIGRATE_ISOLATE so it's safe to merge with.
> 
> Does that make sense?
> 
This might increase the overhead of isolation a lot. CMA is also
used for small-order allocations, so the work of splitting a whole
pageblock to allocate a small number of pages out of it, just to merge a
lot of them again on unisolation, might make this unattractive.

My feeling is that checking the buddy migratetype for >=pageblock_order
frees might be lower overhead, but I have no hard numbers to back this
claim.

Then, on the other hand, moving the work to isolation/unisolation affects
only code paths that are expected to be quite slow anyway, while doing the
check in __free_one_page() will affect everyone.

Regards,
Lucas





Re: Suspicious error for CMA stress test

2016-03-20 Thread Joonsoo Kim
On Fri, Mar 18, 2016 at 02:32:35PM +0100, Lucas Stach wrote:
> Hi Vlastimil, Joonsoo,
> 
> Am Freitag, den 18.03.2016, 00:52 +0900 schrieb Joonsoo Kim:
> > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> > > On 03/17/2016 10:24 AM, Hanjun Guo wrote:
> > >>
> > >> On 2016/3/17 14:54, Joonsoo Kim wrote:
> > >>>
> > >>> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
> > 
> >  On 2016/3/14 15:18, Joonsoo Kim wrote:
> > >
> > > On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> > >>
> > >> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> > >>>
> > >>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> > 
> >  On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> > 
> >  How about something like this? Just an idea, probably buggy
> >  (off-by-one etc.).
> >  Should keep away cost from <pageblock_order iterations at the
> >  expense of the relatively fewer >pageblock_order iterations.
> > >>>
> > >>> Hmm... I tested this and found that its code size is a little bit
> > >>> larger than mine. I'm not sure why this happens exactly, but I guess
> > >>> it would be related to compiler optimization. In this case, I'm in
> > >>> favor of my implementation because it looks like a better abstraction.
> > >>> It adds one unlikely branch to the merge loop, but the compiler would
> > >>> optimize it to check it once.
> > >>
> > >> I would be surprised if compiler optimized that to check it once, as
> > >> order increases with each loop iteration. But maybe it's smart
> > >> enough to do something like I did by hand? Guess I'll check the
> > >> disassembly.
> > >
> > > Okay. I used following slightly optimized version and I need to
> > > add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> > > to yours. Please consider it, too.
> > 
>  Hmm, this one does not work, I still can see the bug is there after
> >  applying
> >  this patch, did I miss something?
> > >>>
> > >>> I may have found a bug which was introduced by me some time
> > >>> ago. Could you test the following change in __free_one_page() on top of
> > >>> Vlastimil's patch?
> > >>>
> > >>> -page_idx = pfn & ((1 << max_order) - 1);
> > >>> +page_idx = pfn & ((1 << MAX_ORDER) - 1);
> > >>
> > >>
> > >> I tested Vlastimil's patch + your change with stress for more than half
> > >> hour, the bug
> > >> I reported is gone :)
> > >
> > >
> > > Oh, ok, will try to send proper patch, once I figure out what to write in
> > > the changelog :)
> > 
> > Thanks in advance!
> 
> After digging into the "PFN busy" race in CMA (see [1]), I believe we
> should just prevent any buddy merging in isolated ranges. This fixes the
> race I'm seeing without the need to hold the zone lock for extended
> periods of time.

"PFNs busy" can be caused by other types of races, too. I guess that
other cases happen more often than buddy merging. Do you have any test
case for your problem?

If it is indeed a problem, you can avoid it with a simple retry,
MAX_ORDER times, on alloc_contig_range(). This is rather dirty, but
the reason I suggest it is that there are other types of races in
__alloc_contig_range() and retrying could help them, too. For example,
if some of the pages in the requested range aren't attached to the LRU
yet, or are detached from the LRU but not yet freed to the buddy
allocator, test_pages_isolated() can fail.

Thanks.



Re: Suspicious error for CMA stress test

2016-03-20 Thread Vlastimil Babka

On 03/17/2016 07:54 AM, Joonsoo Kim wrote:

On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:

On 2016/3/14 15:18, Joonsoo Kim wrote:

Hmm, this one does not work, I still can see the bug is there after applying
this patch, did I miss something?


I may have found a bug which was introduced by me some time
ago. Could you test the following change in __free_one_page() on top of
Vlastimil's patch?

-page_idx = pfn & ((1 << max_order) - 1);
+page_idx = pfn & ((1 << MAX_ORDER) - 1);


I think it wasn't a bug in the context of 3c605096d315, but it certainly does
become a bug with my patch, so thanks for catching that.


Actually, I've earlier concluded that this line is not needed at all; removing
it can lead to smaller code and enable even more savings. But I'll leave that
for after the fix that needs to go to stable.



Thanks.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majord...@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: em...@kvack.org






Re: Suspicious error for CMA stress test

2016-03-19 Thread Joonsoo Kim
2016-03-17 18:24 GMT+09:00 Hanjun Guo :
> On 2016/3/17 14:54, Joonsoo Kim wrote:
>> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
>>> On 2016/3/14 15:18, Joonsoo Kim wrote:
 On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
>>> On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
>>>
>>> How about something like this? Just an idea, probably buggy
>>> (off-by-one etc.).
>>> Should keep away cost from <pageblock_order iterations at the expense
>>> of the relatively fewer >pageblock_order iterations.
>> Hmm... I tested this and found that its code size is a little bit
>> larger than mine. I'm not sure why this happens exactly, but I guess
>> it would be related to compiler optimization. In this case, I'm in
>> favor of my implementation because it looks like a better abstraction.
>> It adds one unlikely branch to the merge loop, but the compiler would
>> optimize it to check it once.
> I would be surprised if compiler optimized that to check it once, as
> order increases with each loop iteration. But maybe it's smart
> enough to do something like I did by hand? Guess I'll check the
> disassembly.
 Okay. I used following slightly optimized version and I need to
 add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
 to yours. Please consider it, too.
>>> Hmm, this one does not work, I still can see the bug is there after applying
>>> this patch, did I miss something?
>> I may have found a bug which was introduced by me some time
>> ago. Could you test the following change in __free_one_page() on top of
>> Vlastimil's patch?
>>
>> -page_idx = pfn & ((1 << max_order) - 1);
>> +page_idx = pfn & ((1 << MAX_ORDER) - 1);
>
> I tested Vlastimil's patch + your change with stress for more than half hour, 
> the bug
> I reported is gone :)

Good to hear!

> I have some questions, Joonsoo, you provided a patch as following:
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 3a7a67b..952a8a3 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -448,7 +448,10 @@ bool cma_release(struct cma *cma, const struct page 
> *pages, unsigned int count)
>
> VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
>
> + mutex_lock(&cma_mutex);
> free_contig_range(pfn, count);
> + mutex_unlock(&cma_mutex);
> +
> cma_clear_bitmap(cma, pfn, count);
> trace_cma_release(pfn, pages, count);
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7f32950..68ed5ae 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1559,7 +1559,8 @@ void free_hot_cold_page(struct page *page, bool cold)
>  * excessively into the page allocator
>  */
> if (migratetype >= MIGRATE_PCPTYPES) {
> -   if (unlikely(is_migrate_isolate(migratetype))) {
> + if (is_migrate_cma(migratetype) ||
> + unlikely(is_migrate_isolate(migratetype))) {
> free_one_page(zone, page, pfn, 0, migratetype);
> goto out;
> }
>
> This patch also works to fix the bug, so why not just use this one? Are
> there any side effects to this patch? Maybe there is a performance issue
> as the mutex lock is used, or are there any other issues?

The changes in free_hot_cold_page() would cause an unacceptable performance
problem on a big machine, because, with the above change, it takes zone->lock
whenever freeing a single page in a CMA region.

Thanks.



Re: Suspicious error for CMA stress test

2016-03-19 Thread Lucas Stach
Hi Vlastimil, Joonsoo,

Am Freitag, den 18.03.2016, 00:52 +0900 schrieb Joonsoo Kim:
> 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> > On 03/17/2016 10:24 AM, Hanjun Guo wrote:
> >>
> >> On 2016/3/17 14:54, Joonsoo Kim wrote:
> >>>
> >>> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
> 
>  On 2016/3/14 15:18, Joonsoo Kim wrote:
> >
> > On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> >>
> >> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> >>>
> >>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> 
>  On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> 
>  How about something like this? Just an idea, probably buggy
>  (off-by-one etc.).
>  Should keep away cost from <pageblock_order iterations at the
>  expense of the relatively fewer >pageblock_order iterations.
> >>>
> >>> Hmm... I tested this and found that its code size is a little bit
> >>> larger than mine. I'm not sure why this happens exactly, but I guess
> >>> it would be related to compiler optimization. In this case, I'm in
> >>> favor of my implementation because it looks like a better abstraction.
> >>> It adds one unlikely branch to the merge loop, but the compiler would
> >>> optimize it to check it once.
> >>
> >> I would be surprised if compiler optimized that to check it once, as
> >> order increases with each loop iteration. But maybe it's smart
> >> enough to do something like I did by hand? Guess I'll check the
> >> disassembly.
> >
> > Okay. I used following slightly optimized version and I need to
> > add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> > to yours. Please consider it, too.
> 
>  Hmm, this one does not work, I still can see the bug is there after
>  applying
>  this patch, did I miss something?
> >>>
> >>> I may have found a bug which was introduced by me some time
> >>> ago. Could you test the following change in __free_one_page() on top of
> >>> Vlastimil's patch?
> >>>
> >>> -page_idx = pfn & ((1 << max_order) - 1);
> >>> +page_idx = pfn & ((1 << MAX_ORDER) - 1);
> >>
> >>
> >> I tested Vlastimil's patch + your change with stress for more than half
> >> hour, the bug
> >> I reported is gone :)
> >
> >
> > Oh, ok, will try to send proper patch, once I figure out what to write in
> > the changelog :)
> 
> Thanks in advance!

After digging into the "PFN busy" race in CMA (see [1]), I believe we
should just prevent any buddy merging in isolated ranges. This fixes the
race I'm seeing without the need to hold the zone lock for extended
periods of time.
Also any merging done in an isolated range is likely to be completely
wasted work, as higher order buddy pages are broken up again into single
pages in isolate_freepages.

If we do that, the patch to fix the bug in question for this report would
boil down to checking whether the current page's buddy is isolated and
aborting the merge at that point, right? undo_isolate_page_range() will
then do all necessary merging that was skipped while the range was isolated.

Do you see issues with this approach?

Regards,
Lucas

[1] http://thread.gmane.org/gmane.linux.kernel.mm/148383





Re: Suspicious error for CMA stress test

2016-03-19 Thread Lucas Stach
Am Freitag, den 18.03.2016, 15:10 +0100 schrieb Vlastimil Babka:
> On 03/17/2016 04:52 PM, Joonsoo Kim wrote:
> > 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> >>
> >> Okay. I used following slightly optimized version and I need to
> >> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> >> to yours. Please consider it, too.
> >
> > Hmm, this one does not work, I still can see the bug is there after
> > applying
> > this patch, did I miss something?
> 
>  I may have found a bug which was introduced by me some time
>  ago. Could you test the following change in __free_one_page() on top of
>  Vlastimil's patch?
> 
>  -page_idx = pfn & ((1 << max_order) - 1);
>  +page_idx = pfn & ((1 << MAX_ORDER) - 1);
> >>>
> >>>
> >>> I tested Vlastimil's patch + your change with stress for more than half
> >>> hour, the bug
> >>> I reported is gone :)
> >>
> >>
> >> Oh, ok, will try to send proper patch, once I figure out what to write in
> >> the changelog :)
> > 
> > Thanks in advance!
> > 
> 
> OK, here it is. Hanjun can you please retest this, as I'm not sure if you had
> the same code due to the followup one-liner patches in the thread. Lucas, see 
> if
> it helps with your issue as well. Laura and Joonsoo, please also test and 
> review
> and check changelog if my perception of the problem is accurate :)
> 

This doesn't help for my case, as it is still trying to merge pages in
isolated ranges. It even tries extra hard at doing so.

With concurrent isolation and frees going on, this may lead to the start
page of the range to be isolated merging into a higher-order buddy page
if it isn't already pageblock aligned, causing both test_pages_isolated()
and isolate_freepages() to fail on an otherwise perfectly fine range.

What I am arguing is that if a page is freed into an isolated range, we
should not try to merge it with its buddies at all, by setting max_order =
order. If the range is isolated because we want to isolate freepages from
it, the work to do the merging is wasted, as isolate_freepages() will
split higher-order pages into order-0 pages again.

If we have already finished isolating freepages and are in the process of
undoing the isolation, we don't strictly need to do the merging in
__free_one_page(), but can defer it to unset_migratetype_isolate(), allowing
us to simplify those code paths by disallowing any merging of isolated
pages at all.

Regards,
Lucas





Re: Suspicious error for CMA stress test

2016-03-19 Thread Vlastimil Babka

On 03/19/2016 08:24 AM, Hanjun Guo wrote:

On 2016/3/18 22:10, Vlastimil Babka wrote:


Oh, ok, will try to send proper patch, once I figure out what to write in
the changelog :)

Thanks in advance!


OK, here it is. Hanjun can you please retest this, as I'm not sure if you had


I tested this new patch with stress for more than one hour, and it works!


That's good news, thanks!


Since Lucas has comments on it, I'm willing to test further versions if needed.

One minor comment below,


the same code due to the followup one-liner patches in the thread. Lucas, see if
it helps with your issue as well. Laura and Joonsoo, please also test and review
and check changelog if my perception of the problem is accurate :)

Thanks


[...]

+   if (max_order < MAX_ORDER) {
+   /* If we are here, it means order is >= pageblock_order.
+    * We want to prevent merge between freepages on isolate
+    * pageblock and normal pageblock. Without this, pageblock
+    * isolation could cause incorrect freepage or CMA accounting.
+    *
+    * We don't want to hit this code for the more frequent
+    * low-order merging.
+    */
+   if (unlikely(has_isolate_pageblock(zone))) {


In the first version of your patch, it's

+   if (IS_ENABLED(CONFIG_CMA) &&
+   unlikely(has_isolate_pageblock(zone))) {

Why remove the IS_ENABLED(CONFIG_CMA) in the new version?


Previously I thought the problem was CMA-specific, but after a more detailed look 
I think it's not, as start_isolate_page_range() releases the zone lock between 
pageblocks, so unexpected merging due to races can also happen between isolated 
and non-isolated non-CMA pageblocks. This function is called from memory hotplug 
code, and recently alloc_contig_range() itself is also used outside CONFIG_CMA for 
allocating gigantic hugepages. Joonsoo's original commit 3c60509 was also not 
restricted to CMA, and the same holds for his patch earlier in this thread.


Hmm, I guess another alternative solution would indeed be to modify 
start_isolate_page_range() and undo_isolate_page_range() to hold zone->lock 
across MAX_ORDER blocks (not the whole requested range, as that could lead to 
hardlockups). But that still wouldn't help Lucas, IIUC.




Thanks
Hanjun






Re: Suspicious error for CMA stress test

2016-03-19 Thread Vlastimil Babka

On 03/17/2016 10:24 AM, Hanjun Guo wrote:

On 2016/3/17 14:54, Joonsoo Kim wrote:

On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:

On 2016/3/14 15:18, Joonsoo Kim wrote:

On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:

On 03/14/2016 07:49 AM, Joonsoo Kim wrote:

On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:

On 03/11/2016 04:00 PM, Joonsoo Kim wrote:

How about something like this? Just an idea, probably buggy (off-by-one etc.).
Should keep away cost from <pageblock_order iterations at the expense of the
relatively fewer >pageblock_order iterations.

Hmm... I tested this and found that its code size is a little bit
larger than mine. I'm not sure exactly why this happens, but I guess it is
related to compiler optimization. In this case, I'm in favor of my
implementation because it looks like a cleaner abstraction. It adds one
unlikely branch to the merge loop, but the compiler would optimize it to
check it once.

I would be surprised if compiler optimized that to check it once, as
order increases with each loop iteration. But maybe it's smart
enough to do something like I did by hand? Guess I'll check the
disassembly.

Okay. I used following slightly optimized version and I need to
add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
to yours. Please consider it, too.

Hmm, this one does not work; I can still see the bug after applying
this patch. Did I miss something?

I may have found a bug that I introduced some time ago. Could you test
the following change in __free_one_page() on top of Vlastimil's patch?

-page_idx = pfn & ((1 << max_order) - 1);
+page_idx = pfn & ((1 << MAX_ORDER) - 1);


I tested Vlastimil's patch + your change with stress for more than half an
hour, and the bug I reported is gone :)


Oh, ok, will try to send proper patch, once I figure out what to write 
in the changelog :)




Re: Suspicious error for CMA stress test

2016-03-19 Thread Vlastimil Babka

On 03/18/2016 03:42 PM, Lucas Stach wrote:

Am Freitag, den 18.03.2016, 15:10 +0100 schrieb Vlastimil Babka:

On 03/17/2016 04:52 PM, Joonsoo Kim wrote:
> 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :

OK, here it is. Hanjun can you please retest this, as I'm not sure if you had
the same code due to the followup one-liner patches in the thread. Lucas, see if
it helps with your issue as well. Laura and Joonsoo, please also test and review
and check changelog if my perception of the problem is accurate :)



This doesn't help in my case, as it is still trying to merge pages in
isolated ranges. It even tries extra hard at doing so.

With concurrent isolation and frees going on, this may lead to the start
page of the to-be-isolated range merging into a higher-order buddy page
if it isn't already pageblock aligned, causing both test_pages_isolated
and isolate_freepages to fail on an otherwise perfectly fine range.

What I am arguing is that if a page is freed into an isolated range, we
should not try to merge it with its buddies at all, by setting max_order =
order. If the range is isolated because we want to isolate freepages from
it, the work to do the merging is wasted, as isolate_freepages will
split higher-order pages into order-0 pages again.

If we already finished isolating freepages and are in the process of
undoing the isolation, we don't strictly need to do the merging in
__free_one_page, but can defer it to unset_migratetype_isolate, allowing
us to simplify those code paths by disallowing any merging of isolated
pages at all.


Oh, I think I understand now. Yeah, skipping merging for pages in isolated 
pageblocks might be a rather elegant solution. But we would still have to check 
the buddy's migratetype at order >= pageblock_order like my patch does, which is 
annoying, because even without isolated merging, the buddy might have already 
had order >= pageblock_order when it was isolated.


So what if isolation also split existing buddies in the pageblock immediately 
when it sets MIGRATE_ISOLATE on the pageblock? Then it would be guaranteed 
that there is no isolated buddy: a buddy candidate at order >= 
pageblock_order either has a smaller order (so it's not a buddy) or is not 
MIGRATE_ISOLATE, so it's safe to merge with.


Does that make sense?




Re: Suspicious error for CMA stress test

2016-03-19 Thread Joonsoo Kim
2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
> On 03/17/2016 10:24 AM, Hanjun Guo wrote:
>>
>> On 2016/3/17 14:54, Joonsoo Kim wrote:
>>>
>>> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:

 On 2016/3/14 15:18, Joonsoo Kim wrote:
>
> On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
>>
>> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
>>>
>>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:

 On 03/11/2016 04:00 PM, Joonsoo Kim wrote:

 How about something like this? Just an idea, probably buggy
 (off-by-one etc.).
 Should keep away cost from <pageblock_order iterations at the
 expense of the relatively fewer >pageblock_order iterations.
>>>
>>> Hmm... I tested this and found that its code size is a little bit
>>> larger than mine. I'm not sure exactly why this happens, but I guess
>>> it is
>>> related to compiler optimization. In this case, I'm in favor of my
>>> implementation because it looks like a cleaner abstraction. It adds one
>>> unlikely branch to the merge loop, but the compiler would optimize it to
>>> check it once.
>>
>> I would be surprised if compiler optimized that to check it once, as
>> order increases with each loop iteration. But maybe it's smart
>> enough to do something like I did by hand? Guess I'll check the
>> disassembly.
>
> Okay. I used following slightly optimized version and I need to
> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> to yours. Please consider it, too.

 Hmm, this one does not work; I can still see the bug after
 applying this patch. Did I miss something?
>>>
>>> I may have found a bug that I introduced some time ago. Could you test
>>> the following change in __free_one_page() on top of Vlastimil's patch?
>>>
>>> -page_idx = pfn & ((1 << max_order) - 1);
>>> +page_idx = pfn & ((1 << MAX_ORDER) - 1);
>>
>>
>> I tested Vlastimil's patch + your change with stress for more than half an
>> hour, and the bug I reported is gone :)
>
>
> Oh, ok, will try to send proper patch, once I figure out what to write in
> the changelog :)

Thanks in advance!

Thanks.


Re: Suspicious error for CMA stress test

2016-03-19 Thread Hanjun Guo
On 2016/3/17 14:54, Joonsoo Kim wrote:
> On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
>> On 2016/3/14 15:18, Joonsoo Kim wrote:
>>> On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
 On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
>> On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
>>
>> How about something like this? Just an idea, probably buggy (off-by-one
>> etc.).
>> Should keep away cost from <pageblock_order iterations at the expense of
>> the relatively fewer >pageblock_order iterations.
> Hmm... I tested this and found that its code size is a little bit
> larger than mine. I'm not sure exactly why this happens, but I guess it
> is
> related to compiler optimization. In this case, I'm in favor of my
> implementation because it looks like a cleaner abstraction. It adds one
> unlikely branch to the merge loop, but the compiler would optimize it to
> check it once.
 I would be surprised if compiler optimized that to check it once, as
 order increases with each loop iteration. But maybe it's smart
 enough to do something like I did by hand? Guess I'll check the
 disassembly.
>>> Okay. I used following slightly optimized version and I need to
>>> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
>>> to yours. Please consider it, too.
>> Hmm, this one does not work; I can still see the bug after applying
>> this patch. Did I miss something?
> I may have found a bug that I introduced some time ago. Could you test
> the following change in __free_one_page() on top of Vlastimil's patch?
>
> -page_idx = pfn & ((1 << max_order) - 1);
> +page_idx = pfn & ((1 << MAX_ORDER) - 1);

I tested Vlastimil's patch + your change with stress for more than half an
hour, and the bug I reported is gone :)

I have some questions. Joonsoo, you provided a patch as follows:

diff --git a/mm/cma.c b/mm/cma.c
index 3a7a67b..952a8a3 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -448,7 +448,10 @@ bool cma_release(struct cma *cma, const struct page 
*pages, unsigned int count)
 
VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
 
+ mutex_lock(&cma_mutex);
free_contig_range(pfn, count);
+ mutex_unlock(&cma_mutex);
+
cma_clear_bitmap(cma, pfn, count);
trace_cma_release(pfn, pages, count);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7f32950..68ed5ae 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1559,7 +1559,8 @@ void free_hot_cold_page(struct page *page, bool cold)
 * excessively into the page allocator
 */
if (migratetype >= MIGRATE_PCPTYPES) {
-   if (unlikely(is_migrate_isolate(migratetype))) {
+ if (is_migrate_cma(migratetype) ||
+ unlikely(is_migrate_isolate(migratetype))) {
free_one_page(zone, page, pfn, 0, migratetype);
goto out;
}

This patch also works to fix the bug, so why not just use this one? Are there
any side effects with this patch? Maybe there is a performance issue since the
mutex lock is used, or any other issues?

Thanks
Hanjun



Re: Suspicious error for CMA stress test

2016-03-19 Thread Joonsoo Kim
On Wed, Mar 16, 2016 at 05:44:28PM +0800, Hanjun Guo wrote:
> On 2016/3/14 15:18, Joonsoo Kim wrote:
> > On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> >> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> >>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
>  On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> 
>  How about something like this? Just an idea, probably buggy (off-by-one
>  etc.).
>  Should keep away cost from <pageblock_order iterations at the expense of
>  the relatively fewer >pageblock_order iterations.
> >>> Hmm... I tested this and found that its code size is a little bit
> >>> larger than mine. I'm not sure exactly why this happens, but I guess it
> >>> is
> >>> related to compiler optimization. In this case, I'm in favor of my
> >>> implementation because it looks like a cleaner abstraction. It adds one
> >>> unlikely branch to the merge loop, but the compiler would optimize it to
> >>> check it once.
> >> I would be surprised if compiler optimized that to check it once, as
> >> order increases with each loop iteration. But maybe it's smart
> >> enough to do something like I did by hand? Guess I'll check the
> >> disassembly.
> > Okay. I used following slightly optimized version and I need to
> > add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> > to yours. Please consider it, too.
> 
> Hmm, this one does not work; I can still see the bug after applying
> this patch. Did I miss something?

I may have found a bug that I introduced some time ago. Could you test
the following change in __free_one_page() on top of Vlastimil's patch?

-page_idx = pfn & ((1 << max_order) - 1);
+page_idx = pfn & ((1 << MAX_ORDER) - 1);

Thanks.


Re: Suspicious error for CMA stress test

2016-03-19 Thread Hanjun Guo
On 2016/3/17 23:31, Joonsoo Kim wrote:
[...]
>>> I may have found a bug that I introduced some time ago. Could you test
>>> the following change in __free_one_page() on top of Vlastimil's patch?
>>>
>>> -page_idx = pfn & ((1 << max_order) - 1);
>>> +page_idx = pfn & ((1 << MAX_ORDER) - 1);
>> I tested Vlastimil's patch + your change with stress for more than half an
>> hour, and the bug I reported is gone :)
> Good to hear!
>
>> I have some questions. Joonsoo, you provided a patch as follows:
>>
>> diff --git a/mm/cma.c b/mm/cma.c
>> index 3a7a67b..952a8a3 100644
>> --- a/mm/cma.c
>> +++ b/mm/cma.c
>> @@ -448,7 +448,10 @@ bool cma_release(struct cma *cma, const struct page 
>> *pages, unsigned int count)
>>
>> VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
>>
>> + mutex_lock(&cma_mutex);
>> free_contig_range(pfn, count);
>> + mutex_unlock(&cma_mutex);
>> +
>> cma_clear_bitmap(cma, pfn, count);
>> trace_cma_release(pfn, pages, count);
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 7f32950..68ed5ae 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1559,7 +1559,8 @@ void free_hot_cold_page(struct page *page, bool cold)
>>  * excessively into the page allocator
>>  */
>> if (migratetype >= MIGRATE_PCPTYPES) {
>> -   if (unlikely(is_migrate_isolate(migratetype))) {
>> + if (is_migrate_cma(migratetype) ||
>> + unlikely(is_migrate_isolate(migratetype))) {
>> free_one_page(zone, page, pfn, 0, migratetype);
>> goto out;
>> }
>>
>> This patch also works to fix the bug, so why not just use this one? Are there
>> any side effects with this patch? Maybe there is a performance issue since
>> the mutex lock is used, or any other issues?
> The changes in free_hot_cold_page() would cause an unacceptable performance
> problem on a big machine because, with the above change, it takes zone->lock
> whenever freeing a single page in a CMA region.

Thanks for the clarification :)

Hanjun



Re: Suspicious error for CMA stress test

2016-03-19 Thread Hanjun Guo
On 2016/3/18 22:10, Vlastimil Babka wrote:
> On 03/17/2016 04:52 PM, Joonsoo Kim wrote:
>> 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
>>> Okay. I used following slightly optimized version and I need to
>>> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
>>> to yours. Please consider it, too.
>> Hmm, this one does not work; I can still see the bug after applying
>> this patch. Did I miss something?
> I may find that there is a bug which was introduced by me some time
> ago. Could you test following change in __free_one_page() on top of
> Vlastimil's patch?
>
> -page_idx = pfn & ((1 << max_order) - 1);
> +page_idx = pfn & ((1 << MAX_ORDER) - 1);

 I tested Vlastimil's patch + your change with stress for more than half
 hour, the bug
 I reported is gone :)
>>>
>>> Oh, ok, will try to send proper patch, once I figure out what to write in
>>> the changelog :)
>> Thanks in advance!
>>
> OK, here it is. Hanjun can you please retest this, as I'm not sure if you had

I tested this new patch with stress for more than one hour, and it works!
Since Lucas has comments on it, I'm willing to test further versions if needed.

One minor comment below,

> the same code due to the followup one-liner patches in the thread. Lucas, see
> if it helps with your issue as well. Laura and Joonsoo, please also test and
> review and check changelog if my perception of the problem is accurate :)
>
> Thanks
>
[...]
> + if (max_order < MAX_ORDER) {
> + /* If we are here, it means order is >= pageblock_order.
> +  * We want to prevent merge between freepages on isolate
> +  * pageblock and normal pageblock. Without this, pageblock
> +  * isolation could cause incorrect freepage or CMA accounting.
> +  *
> +  * We don't want to hit this code for the more frequent
> +  * low-order merging.
> +  */
> + if (unlikely(has_isolate_pageblock(zone))) {

In the first version of your patch, it's

+   if (IS_ENABLED(CONFIG_CMA) &&
+   unlikely(has_isolate_pageblock(zone))) {

Why remove the IS_ENABLED(CONFIG_CMA) in the new version?

Thanks
Hanjun







Re: Suspicious error for CMA stress test

2016-03-18 Thread Vlastimil Babka

On 03/14/2016 03:10 PM, Joonsoo Kim wrote:

2016-03-14 21:30 GMT+09:00 Vlastimil Babka :

Now I see why this happens: I enabled CONFIG_DEBUG_PAGEALLOC
and it makes a difference.

I tested on x86_64, gcc (Ubuntu 4.8.4-2ubuntu1~14.04.1) 4.8.4.

With CONFIG_CMA + CONFIG_DEBUG_PAGEALLOC
./scripts/bloat-o-meter page_alloc_base.o page_alloc_vlastimil_orig.o
add/remove: 0/0 grow/shrink: 2/0 up/down: 510/0 (510)
function                             old     new   delta
free_one_page                       1050    1334    +284
free_pcppages_bulk                  1396    1622    +226

./scripts/bloat-o-meter page_alloc_base.o page_alloc_mine.o
add/remove: 0/0 grow/shrink: 2/0 up/down: 351/0 (351)
function                             old     new   delta
free_one_page                       1050    1230    +180
free_pcppages_bulk                  1396    1567    +171


With CONFIG_CMA + !CONFIG_DEBUG_PAGEALLOC
(pa_b is base, pa_v is yours and pa_m is mine)

./scripts/bloat-o-meter pa_b.o pa_v.o
add/remove: 0/0 grow/shrink: 1/1 up/down: 88/-23 (65)
function                             old     new   delta
free_one_page                        761     849     +88
free_pcppages_bulk                  1117    1094     -23

./scripts/bloat-o-meter pa_b.o pa_m.o
add/remove: 0/0 grow/shrink: 2/0 up/down: 329/0 (329)
function                             old     new   delta
free_one_page                        761    1031    +270
free_pcppages_bulk                  1117    1176     +59

Still, there is a difference, but less than before.
Maybe we are still using different configurations. Could you
check if CONFIG_DEBUG_VM is enabled or not? In my case, it's not


It's disabled here.


enabled. And do you think this bloat is unacceptable?


Well, it is quite significant. But given that Hanjun sees the errors 
still, it's not the biggest issue now :/



Thanks.









Re: Suspicious error for CMA stress test

2016-03-18 Thread Vlastimil Babka
On 03/17/2016 04:52 PM, Joonsoo Kim wrote:
> 2016-03-18 0:43 GMT+09:00 Vlastimil Babka :
>>
>> Okay. I used following slightly optimized version and I need to
>> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
>> to yours. Please consider it, too.
>
> Hmm, this one does not work; I can still see the bug after applying
> this patch. Did I miss something?

 I may have found a bug which was introduced by me some time
 ago. Could you test the following change in __free_one_page() on top of
 Vlastimil's patch?

 -page_idx = pfn & ((1 << max_order) - 1);
 +page_idx = pfn & ((1 << MAX_ORDER) - 1);
>>>
>>>
>>> I tested Vlastimil's patch + your change with stress for more than half
>>> hour, the bug
>>> I reported is gone :)
>>
>>
>> Oh, ok, will try to send proper patch, once I figure out what to write in
>> the changelog :)
> 
> Thanks in advance!
> 

OK, here it is. Hanjun, can you please retest this, as I'm not sure if you had
the same code due to the followup one-liner patches in the thread. Lucas, see if
it helps with your issue as well. Laura and Joonsoo, please also test and review
and check changelog if my perception of the problem is accurate :)

Thanks

8<
From: Vlastimil Babka 
Date: Fri, 18 Mar 2016 14:22:31 +0100
Subject: [PATCH] mm/page_alloc: prevent merging between isolated and other
 pageblocks

Hanjun Guo has reported that a CMA stress test causes broken accounting of
CMA and free pages:

> Before the test, I got:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree:  195044 kB
>
>
> After running the test:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree: 6602584 kB
>
> So the freed CMA memory is more than total..
>
> Also, MemFree is more than MemTotal:
>
> -bash-4.3# cat /proc/meminfo
> MemTotal:   16342016 kB
> MemFree:22367268 kB
> MemAvailable:   22370528 kB

Laura Abbott has confirmed the issue and suspected the freepage accounting
rewrite around 3.18/4.0 by Joonsoo Kim. Joonsoo had a theory that this is
caused by unexpected merging between MIGRATE_ISOLATE and MIGRATE_CMA
pageblocks:

> CMA isolates MAX_ORDER aligned blocks, but, during the process,
> a partially isolated block exists. If MAX_ORDER is 11 and
> pageblock_order is 9, two pageblocks make up a MAX_ORDER
> aligned block, and I can think of the following scenario because pageblock
> (un)isolation would be done one by one.
>
> (each character means one pageblock; 'C' and 'I' mean MIGRATE_CMA and
> MIGRATE_ISOLATE, respectively.)
>
> CC -> IC -> II (Isolation)
> II -> CI -> CC (Un-isolation)
>
> If some pages are freed in this intermediate state, such as IC or CI,
> such a page could be merged with a page resident on a different type
> of pageblock, and it will cause a wrong freepage count.

This was supposed to be prevented by CMA operating on MAX_ORDER blocks, but
since it doesn't hold the zone->lock between pageblocks, a race window does
exist.

It's also likely that unexpected merging can occur between MIGRATE_ISOLATE
and non-CMA pageblocks. This should be prevented in __free_one_page() since
commit 3c605096d315 ("mm/page_alloc: restrict max order of merging on isolated
pageblock"). However, we only check the migratetype of the pageblock where
buddy merging has been initiated, not the migratetype of the buddy pageblock
(or group of pageblocks) which can be MIGRATE_ISOLATE.

Joonsoo has suggested checking the buddy migratetype as part of
page_is_buddy(), but that would add extra checks to the allocator hotpath, and
bloat-o-meter has shown significant code bloat (the function is inline).

This patch reduces the bloat at the expense of somewhat more complicated code.
The buddy-merging while-loop in __free_one_page() is initially bounded to
pageblock_order and runs without any migratetype checks. The checks are placed
outside the loop, bumping max_order if merging is allowed, and returning to the
while-loop with a statement which can't possibly be considered harmful.

This fixes the accounting bug and also removes the arguably weird state in the
original commit 3c605096d315 where buddies could be left unmerged.

Fixes: 3c605096d315 ("mm/page_alloc: restrict max order of merging on isolated pageblock")
Link: https://lkml.org/lkml/2016/3/2/280
Reported-by: Hanjun Guo 
Debugged-by: Laura Abbott 
Debugged-by: Joonsoo Kim 
Signed-off-by: Vlastimil Babka 
Cc:  # 3.18+
---
 mm/page_alloc.c | 46 +-
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c46b75d14b6f..112a5d5cec51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -683,34 +683,28 @@ static inline void __free_one_page(struct page *page,
unsigned long combined_idx;
unsigned long uninitialized_var(buddy_idx);
struct page *buddy;
-   unsigned int max_order = MAX_ORDER;
+   unsigned int 

Re: Suspicious error for CMA stress test

2016-03-16 Thread Hanjun Guo
On 2016/3/14 15:18, Joonsoo Kim wrote:
> On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
>> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
>>> On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
 On 03/11/2016 04:00 PM, Joonsoo Kim wrote:

 How about something like this? Just an idea, probably buggy (off-by-one etc.).
 Should keep away cost from <= pageblock_order iterations to the
 relatively fewer > pageblock_order iterations.
>>> Hmm... I tested this and found that its code size is a little bit
>>> larger than mine. I'm not sure why this happens exactly, but I guess it
>>> would be related to compiler optimization. In this case, I'm in favor of my
>>> implementation because it looks like a cleaner abstraction. It adds one
>>> unlikely branch to the merge loop, but the compiler would optimize it to
>>> check it once.
>> I would be surprised if compiler optimized that to check it once, as
>> order increases with each loop iteration. But maybe it's smart
>> enough to do something like I did by hand? Guess I'll check the
>> disassembly.
> Okay. I used following slightly optimized version and I need to
> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
> to yours. Please consider it, too.

Hmm, this one does not work; I can still see the bug after applying
this patch. Did I miss something?

Thanks
Hanjun






Re: Suspicious error for CMA stress test

2016-03-14 Thread Joonsoo Kim
2016-03-14 21:30 GMT+09:00 Vlastimil Babka :
> On 03/14/2016 08:18 AM, Joonsoo Kim wrote:
>>
>> On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
>>>
>>> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:

 On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
>
> On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
>
> How about something like this? Just an idea, probably buggy
> (off-by-one etc.).
> Should keep away cost from <= pageblock_order iterations to the
> relatively fewer > pageblock_order iterations.


 Hmm... I tested this and found that its code size is a little bit
 larger than mine. I'm not sure why this happens exactly, but I guess it
 would be related to compiler optimization. In this case, I'm in favor of my
 implementation because it looks like a cleaner abstraction. It adds one
 unlikely branch to the merge loop, but the compiler would optimize it to
 check it once.
>>>
>>>
>>> I would be surprised if compiler optimized that to check it once, as
>>> order increases with each loop iteration. But maybe it's smart
>>> enough to do something like I did by hand? Guess I'll check the
>>> disassembly.
>>
>>
>> Okay. I used following slightly optimized version and I need to
>> add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
>> to yours. Please consider it, too.
>
>
> Hmm, so this is bloat-o-meter on x86_64, gcc 5.3.1. CONFIG_CMA=y
>
> next-20160310 vs my patch (with added min_t as you pointed out):
> add/remove: 0/0 grow/shrink: 1/1 up/down: 69/-5 (64)
> function                             old     new   delta
> free_one_page                        833     902     +69
> free_pcppages_bulk                  1333    1328      -5
>
> next-20160310 vs your patch:
> add/remove: 0/0 grow/shrink: 2/0 up/down: 577/0 (577)
> function                             old     new   delta
> free_one_page                        833    1187    +354
> free_pcppages_bulk                  1333    1556    +223
>
> my patch vs your patch:
> add/remove: 0/0 grow/shrink: 2/0 up/down: 513/0 (513)
> function                             old     new   delta
> free_one_page                        902    1187    +285
> free_pcppages_bulk                  1328    1556    +228
>
> The increase of your version is surprising; I wonder what the compiler did.
> Otherwise I would prefer the simpler/more maintainable version, but this is
> crazy. Can you post your results? I wonder if your compiler e.g. decided to
> stop inlining page_is_buddy() or something.

Now I see why this happens: I enabled CONFIG_DEBUG_PAGEALLOC
and it makes a difference.

I tested on x86_64, gcc (Ubuntu 4.8.4-2ubuntu1~14.04.1) 4.8.4.

With CONFIG_CMA + CONFIG_DEBUG_PAGEALLOC
./scripts/bloat-o-meter page_alloc_base.o page_alloc_vlastimil_orig.o
add/remove: 0/0 grow/shrink: 2/0 up/down: 510/0 (510)
function                             old     new   delta
free_one_page                       1050    1334    +284
free_pcppages_bulk                  1396    1622    +226

./scripts/bloat-o-meter page_alloc_base.o page_alloc_mine.o
add/remove: 0/0 grow/shrink: 2/0 up/down: 351/0 (351)
function                             old     new   delta
free_one_page                       1050    1230    +180
free_pcppages_bulk                  1396    1567    +171


With CONFIG_CMA + !CONFIG_DEBUG_PAGEALLOC
(pa_b is base, pa_v is yours and pa_m is mine)

./scripts/bloat-o-meter pa_b.o pa_v.o
add/remove: 0/0 grow/shrink: 1/1 up/down: 88/-23 (65)
function                             old     new   delta
free_one_page                        761     849     +88
free_pcppages_bulk                  1117    1094     -23

./scripts/bloat-o-meter pa_b.o pa_m.o
add/remove: 0/0 grow/shrink: 2/0 up/down: 329/0 (329)
function                             old     new   delta
free_one_page                        761    1031    +270
free_pcppages_bulk                  1117    1176     +59

Still, there is a difference, but less than before.
Maybe we are still using different configurations. Could you
check if CONFIG_DEBUG_VM is enabled or not? In my case, it's not
enabled. And do you think this bloat is unacceptable?

Thanks.



Re: Suspicious error for CMA stress test

2016-03-14 Thread Vlastimil Babka

On 03/14/2016 08:18 AM, Joonsoo Kim wrote:

On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:

On 03/14/2016 07:49 AM, Joonsoo Kim wrote:

On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:

On 03/11/2016 04:00 PM, Joonsoo Kim wrote:

How about something like this? Just an idea, probably buggy (off-by-one etc.).
Should keep away cost from <= pageblock_order iterations to the relatively
fewer > pageblock_order iterations.


Hmm... I tested this and found that its code size is a little bit
larger than mine. I'm not sure why this happens exactly, but I guess it would be
related to compiler optimization. In this case, I'm in favor of my
implementation because it looks like a cleaner abstraction. It adds one
unlikely branch to the merge loop, but the compiler would optimize it to
check it once.


I would be surprised if compiler optimized that to check it once, as
order increases with each loop iteration. But maybe it's smart
enough to do something like I did by hand? Guess I'll check the
disassembly.


Okay. I used following slightly optimized version and I need to
add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
to yours. Please consider it, too.


Hmm, so this is bloat-o-meter on x86_64, gcc 5.3.1. CONFIG_CMA=y

next-20160310 vs my patch (with added min_t as you pointed out):
add/remove: 0/0 grow/shrink: 1/1 up/down: 69/-5 (64)
function                             old     new   delta
free_one_page                        833     902     +69
free_pcppages_bulk                  1333    1328      -5

next-20160310 vs your patch:
add/remove: 0/0 grow/shrink: 2/0 up/down: 577/0 (577)
function                             old     new   delta
free_one_page                        833    1187    +354
free_pcppages_bulk                  1333    1556    +223

my patch vs your patch:
add/remove: 0/0 grow/shrink: 2/0 up/down: 513/0 (513)
function                             old     new   delta
free_one_page                        902    1187    +285
free_pcppages_bulk                  1328    1556    +228

The increase of your version is surprising; I wonder what the compiler did.
Otherwise I would prefer the simpler/more maintainable version, but this is
crazy. Can you post your results? I wonder if your compiler e.g. decided to
stop inlining page_is_buddy() or something.






Re: Suspicious error for CMA stress test

2016-03-14 Thread Joonsoo Kim
On Mon, Mar 14, 2016 at 08:06:16AM +0100, Vlastimil Babka wrote:
> On 03/14/2016 07:49 AM, Joonsoo Kim wrote:
> >On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> >>On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> >>
> >>How about something like this? Just an idea, probably buggy (off-by-one
> >>etc.).
> >>Should keep away cost from <= pageblock_order iterations to the
> >>relatively fewer > pageblock_order iterations.
> >
> >Hmm... I tested this and found that its code size is a little bit
> >larger than mine. I'm not sure why this happens exactly, but I guess it
> >would be related to compiler optimization. In this case, I'm in favor of my
> >implementation because it looks like a cleaner abstraction. It adds one
> >unlikely branch to the merge loop, but the compiler would optimize it to
> >check it once.
> 
> I would be surprised if compiler optimized that to check it once, as
> order increases with each loop iteration. But maybe it's smart
> enough to do something like I did by hand? Guess I'll check the
> disassembly.

Okay. I used the following slightly optimized version, and I needed to
add 'max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1)'
to yours. Please consider it, too.

Thanks.

>8
From 36b8ffdaa0e7a8d33fd47a62a35a9e507e3e62e9 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim 
Date: Mon, 14 Mar 2016 15:20:07 +0900
Subject: [PATCH] mm: fix cma

Signed-off-by: Joonsoo Kim 
---
 mm/page_alloc.c | 29 +++--
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0bb933a..f7baa4f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -627,8 +627,8 @@ static inline void rmv_page_order(struct page *page)
  *
  * For recording page's order, we use page_private(page).
  */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-   unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+   struct page *buddy, unsigned int order, int mt)
 {
if (!pfn_valid_within(page_to_pfn(buddy)))
return 0;
@@ -651,6 +651,15 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
if (page_zone_id(page) != page_zone_id(buddy))
return 0;
 
+   if (unlikely(has_isolate_pageblock(zone) &&
+   order >= pageblock_order)) {
+   int buddy_mt = get_pageblock_migratetype(buddy);
+
+   if (mt != buddy_mt && (is_migrate_isolate(mt) ||
+   is_migrate_isolate(buddy_mt)))
+   return 0;
+   }
+
VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);
 
return 1;
@@ -698,17 +707,8 @@ static inline void __free_one_page(struct page *page,
VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
 
VM_BUG_ON(migratetype == -1);
-   if (is_migrate_isolate(migratetype)) {
-   /*
-* We restrict max order of merging to prevent merge
-* between freepages on isolate pageblock and normal
-* pageblock. Without this, pageblock isolation
-* could cause incorrect freepage accounting.
-*/
-   max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
-   } else {
+   if (!is_migrate_isolate(migratetype))
__mod_zone_freepage_state(zone, 1 << order, migratetype);
-   }
 
page_idx = pfn & ((1 << max_order) - 1);
 
@@ -718,7 +718,7 @@ static inline void __free_one_page(struct page *page,
while (order < max_order - 1) {
buddy_idx = __find_buddy_index(page_idx, order);
buddy = page + (buddy_idx - page_idx);
-   if (!page_is_buddy(page, buddy, order))
+   if (!page_is_buddy(zone, page, buddy, order, migratetype))
break;
/*
 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
@@ -752,7 +752,8 @@ static inline void __free_one_page(struct page *page,
higher_page = page + (combined_idx - page_idx);
buddy_idx = __find_buddy_index(combined_idx, order + 1);
higher_buddy = higher_page + (buddy_idx - combined_idx);
-   if (page_is_buddy(higher_page, higher_buddy, order + 1)) {
+   if (page_is_buddy(zone, higher_page, higher_buddy,
+   order + 1, migratetype)) {
 list_add_tail(&page->lru,
 &zone->free_area[order].free_list[migratetype]);
goto out;
-- 
1.9.1
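The `__find_buddy_index()` step in the merge loop above relies on the standard buddy-allocator property that a block's buddy at a given order differs only in bit `order` of the page index. A minimal Python sketch of that arithmetic (hypothetical helper names mirroring the kernel ones):

```python
# Buddy allocator index arithmetic, as used in __free_one_page():
# the buddy of a free block at 'order' is found by flipping bit
# 'order' of the page index; the merged (order+1) block starts at
# the lower of the two indices, i.e. with that bit cleared.
def find_buddy_index(page_idx, order):
    return page_idx ^ (1 << order)

def combined_index(page_idx, order):
    # Index of the order+1 block containing the page and its buddy.
    return page_idx & ~(1 << order)

# Example: at order 0, pages 4 and 5 are buddies and merge at index 4.
assert find_buddy_index(4, 0) == 5
assert combined_index(5, 0) == 4
# At order 3 (8-page blocks), the block at index 8 has its buddy at 0.
assert find_buddy_index(8, 3) == 0
```

This XOR/mask pair is why the merge loop can walk up one order at a time: each successful merge just clears one more bit of `page_idx`.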





Re: Suspicious error for CMA stress test

2016-03-14 Thread Vlastimil Babka

On 03/14/2016 07:49 AM, Joonsoo Kim wrote:

On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:

On 03/11/2016 04:00 PM, Joonsoo Kim wrote:

How about something like this? Just an idea, probably buggy (off-by-one etc.).
Should keep away cost from the <pageblock_order iterations, at the expense of the
relatively fewer >pageblock_order iterations.


Hmm... I tested this and found that its code size is a little bit
larger than mine. I'm not sure why this happens exactly, but I guess it would be
related to compiler optimization. In this case, I'm in favor of my
implementation because it looks like a good abstraction. It adds one
unlikely branch to the merge loop, but the compiler would optimize it to
check it once.


I would be surprised if compiler optimized that to check it once, as 
order increases with each loop iteration. But maybe it's smart enough to 
do something like I did by hand? Guess I'll check the disassembly.




Thanks.



diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff1e3cbc8956..b8005a07b2a1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
unsigned long combined_idx;
unsigned long uninitialized_var(buddy_idx);
struct page *buddy;
-   unsigned int max_order = MAX_ORDER;
+   unsigned int max_order = pageblock_order + 1;

VM_BUG_ON(!zone_is_initialized(zone));
VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);

VM_BUG_ON(migratetype == -1);
-   if (is_migrate_isolate(migratetype)) {
-   /*
-* We restrict max order of merging to prevent merge
-* between freepages on isolate pageblock and normal
-* pageblock. Without this, pageblock isolation
-* could cause incorrect freepage accounting.
-*/
-   max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
-   } else {
+   if (likely(!is_migrate_isolate(migratetype))) {
__mod_zone_freepage_state(zone, 1 << order, migratetype);
}

@@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
VM_BUG_ON_PAGE(bad_range(zone, page), page);

+continue_merging:
while (order < max_order - 1) {
buddy_idx = __find_buddy_index(page_idx, order);
buddy = page + (buddy_idx - page_idx);
if (!page_is_buddy(page, buddy, order))
-   break;
+   goto done_merging;
/*
 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
 * merge with it and move up one order.
@@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
page_idx = combined_idx;
order++;
}
+   if (max_order < MAX_ORDER) {
+   if (IS_ENABLED(CONFIG_CMA) &&
+   unlikely(has_isolate_pageblock(zone))) {
+
+   int buddy_mt;
+
+   buddy_idx = __find_buddy_index(page_idx, order);
+   buddy = page + (buddy_idx - page_idx);
+   buddy_mt = get_pageblock_migratetype(buddy);
+
+   if (migratetype != buddy_mt &&
+   (is_migrate_isolate(migratetype) ||
+   is_migrate_isolate(buddy_mt)))
+   goto done_merging;
+   }
+   max_order++;
+   goto continue_merging;
+   }
+
+done_merging:
set_page_order(page, order);

/*

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majord...@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: em...@kvack.org
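The control flow of the patch above — merge freely below pageblock_order, then check migratetypes once before crossing a pageblock boundary — can be modeled in a toy Python sketch. This is not the kernel code: migratetypes are plain strings, and below pageblock_order the buddy is simply assumed free and mergeable.

```python
# Toy model of the continue_merging/done_merging flow in the patch:
# merge unconditionally up to pageblock_order, and only when about to
# merge across a pageblock boundary, stop if exactly one side of the
# pair is isolated (which would corrupt freepage accounting).
PAGEBLOCK_ORDER = 9
MAX_ORDER = 11

def max_merge_order(order, page_mt, buddy_mt, zone_has_isolated):
    """Final order a free block reaches, given its migratetype and
    that of its would-be pageblock-sized buddy (simplified)."""
    max_order = PAGEBLOCK_ORDER + 1
    while True:
        while order < max_order - 1:
            order += 1  # buddy assumed free/compatible below this point
        if max_order < MAX_ORDER:
            if zone_has_isolated:
                if page_mt != buddy_mt and "isolate" in (page_mt, buddy_mt):
                    return order  # done_merging: don't merge across types
            max_order += 1
            continue  # continue_merging at the next order
        return order

# Normal pages keep merging all the way to MAX_ORDER - 1:
assert max_merge_order(0, "movable", "movable", False) == MAX_ORDER - 1
# An isolated pageblock next to a movable one stops at pageblock_order:
assert max_merge_order(0, "isolate", "movable", True) == PAGEBLOCK_ORDER
```

The point of the restructuring is visible here: the migratetype check runs at most `MAX_ORDER - pageblock_order - 1` times instead of once per merge-loop iteration.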












Re: Suspicious error for CMA stress test

2016-03-14 Thread Joonsoo Kim
On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> > 2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown) 
> > :
> >>
> >> Hi, Joonsoo:
> >> This new patch worked well. Do you plan to upstream it in the near
> >> future?
> > 
> > Of course!
> > But, I should think more because it touches the allocator's fastpath, and
> > I'd like to detour.
> > If I fail to think of a better solution, I will send it as is, soon.
> 
> How about something like this? Just an idea, probably buggy (off-by-one etc.).
> Should keep away cost from the <pageblock_order iterations, at the expense of
> the relatively fewer >pageblock_order iterations.

Hmm... I tested this and found that its code size is a little bit
larger than mine. I'm not sure why this happens exactly, but I guess it would be
related to compiler optimization. In this case, I'm in favor of my
implementation because it looks like a good abstraction. It adds one
unlikely branch to the merge loop, but the compiler would optimize it to
check it once.

Thanks.

> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ff1e3cbc8956..b8005a07b2a1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
>   unsigned long combined_idx;
>   unsigned long uninitialized_var(buddy_idx);
>   struct page *buddy;
> - unsigned int max_order = MAX_ORDER;
> + unsigned int max_order = pageblock_order + 1;
>  
>   VM_BUG_ON(!zone_is_initialized(zone));
>   VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
>  
>   VM_BUG_ON(migratetype == -1);
> - if (is_migrate_isolate(migratetype)) {
> - /*
> -  * We restrict max order of merging to prevent merge
> -  * between freepages on isolate pageblock and normal
> -  * pageblock. Without this, pageblock isolation
> -  * could cause incorrect freepage accounting.
> -  */
> - max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
> - } else {
> + if (likely(!is_migrate_isolate(migratetype))) {
>   __mod_zone_freepage_state(zone, 1 << order, migratetype);
>   }
>  
> @@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
>   VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
>   VM_BUG_ON_PAGE(bad_range(zone, page), page);
>  
> +continue_merging:
>   while (order < max_order - 1) {
>   buddy_idx = __find_buddy_index(page_idx, order);
>   buddy = page + (buddy_idx - page_idx);
>   if (!page_is_buddy(page, buddy, order))
> - break;
> + goto done_merging;
>   /*
>* Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
>* merge with it and move up one order.
> @@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
>   page_idx = combined_idx;
>   order++;
>   }
> + if (max_order < MAX_ORDER) {
> + if (IS_ENABLED(CONFIG_CMA) &&
> + unlikely(has_isolate_pageblock(zone))) {
> +
> + int buddy_mt;
> +
> + buddy_idx = __find_buddy_index(page_idx, order);
> + buddy = page + (buddy_idx - page_idx);
> + buddy_mt = get_pageblock_migratetype(buddy);
> +
> + if (migratetype != buddy_mt &&
> + (is_migrate_isolate(migratetype) ||
> + is_migrate_isolate(buddy_mt)))
> + goto done_merging;
> + }
> + max_order++;
> + goto continue_merging;
> + }
> +
> +done_merging:
>   set_page_order(page, order);
>  
>   /*
> 




Re: Suspicious error for CMA stress test

2016-03-11 Thread Vlastimil Babka
On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> 2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown) :
>>
>> Hi, Joonsoo:
>> This new patch worked well. Do you plan to upstream it in the near
>> future?
> 
> Of course!
> But, I should think more because it touches the allocator's fastpath, and
> I'd like to detour.
> If I fail to think of a better solution, I will send it as is, soon.

How about something like this? Just an idea, probably buggy (off-by-one etc.).
Should keep away cost from the <pageblock_order iterations, at the expense of the
relatively fewer >pageblock_order iterations.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff1e3cbc8956..b8005a07b2a1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
unsigned long combined_idx;
unsigned long uninitialized_var(buddy_idx);
struct page *buddy;
-   unsigned int max_order = MAX_ORDER;
+   unsigned int max_order = pageblock_order + 1;
 
VM_BUG_ON(!zone_is_initialized(zone));
VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
 
VM_BUG_ON(migratetype == -1);
-   if (is_migrate_isolate(migratetype)) {
-   /*
-* We restrict max order of merging to prevent merge
-* between freepages on isolate pageblock and normal
-* pageblock. Without this, pageblock isolation
-* could cause incorrect freepage accounting.
-*/
-   max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
-   } else {
+   if (likely(!is_migrate_isolate(migratetype))) {
__mod_zone_freepage_state(zone, 1 << order, migratetype);
}
 
@@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
+continue_merging:
while (order < max_order - 1) {
buddy_idx = __find_buddy_index(page_idx, order);
buddy = page + (buddy_idx - page_idx);
if (!page_is_buddy(page, buddy, order))
-   break;
+   goto done_merging;
/*
 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
 * merge with it and move up one order.
@@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
page_idx = combined_idx;
order++;
}
+   if (max_order < MAX_ORDER) {
+   if (IS_ENABLED(CONFIG_CMA) &&
+   unlikely(has_isolate_pageblock(zone))) {
+
+   int buddy_mt;
+
+   buddy_idx = __find_buddy_index(page_idx, order);
+   buddy = page + (buddy_idx - page_idx);
+   buddy_mt = get_pageblock_migratetype(buddy);
+
+   if (migratetype != buddy_mt &&
+   (is_migrate_isolate(migratetype) ||
+   is_migrate_isolate(buddy_mt)))
+   goto done_merging;
+   }
+   max_order++;
+   goto continue_merging;
+   }
+
+done_merging:
set_page_order(page, order);
 
/*





Re: Suspicious error for CMA stress test

2016-03-11 Thread Joonsoo Kim
2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown) :
>
>
> On 2016/3/8 9:54, Leizhen (ThunderTown) wrote:
>>
>>
>> On 2016/3/8 2:42, Laura Abbott wrote:
>>> On 03/07/2016 12:16 AM, Leizhen (ThunderTown) wrote:


 On 2016/3/7 12:34, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
>> On 2016/3/4 14:38, Joonsoo Kim wrote:
>>> On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
 On 2016/3/4 12:32, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> (cc -mm and Joonsoo Kim)
>
>
> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>> Hi,
>>
>> I came across a suspicious error for CMA stress test:
>>
>> Before the test, I got:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree:  195044 kB
>>
>>
>> After running the test:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree: 6602584 kB
>>
>> So the freed CMA memory is more than total..
>>
>> Also the MemFree is more than the mem total:
>>
>> -bash-4.3# cat /proc/meminfo
>> MemTotal:   16342016 kB
>> MemFree:22367268 kB
>> MemAvailable:   22370528 kB
>>> [...]
> I played with this a bit and can see the same problem. The sanity
> check of CmaFree < CmaTotal generally triggers in
> __move_zone_freepage_state in unset_migratetype_isolate.
> This also seems to be present as far back as v4.0 which was the
> first version to have the updated accounting from Joonsoo.
> Were there known limitations with the new freepage accounting,
> Joonsoo?
 I don't know. I also played with this and it looks like there is an
 accounting problem; however, in my case, the number of free pages is
 slightly less than the total. I will take a look.

 Hanjun, could you tell me your malloc_size? I tested with 1 and it
 doesn't look like your case.
>>> I tested with a malloc_size of 2M, and it grows much bigger than with
>>> 1M; I also did some other tests:
>> Thanks! Now, I can reproduce the erroneous situation you mentioned.
>>
>>>   - run with single thread with 10 times, everything is fine.
>>>
>>>   - I hacked cma_alloc() and free as below [1] to see if it's a lock
>>> issue; with the same test with 100 threads, I got:
>> [1] would not be sufficient to close this race.
>>
>> Try the following things [A]. And, for a more accurate test, I changed the
>> code a bit more to prevent kernel page allocation from the CMA area [B].
>> This will prevent kernel page allocation from the CMA area completely, so
>> we can focus on the cma_alloc/release race.
>>
>> Although this is not the correct fix, it could help us guess
>> where the problem is.
> More correct fix is something like below.
> Please test it.
 Hmm, this is not working:
>>> Sad to hear that.
>>>
>>> Could you tell me your system's MAX_ORDER and pageblock_order?
>>>
>>
>> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
>
> Hmm... that's the same as mine.
>
> Below is a similar fix that prevents buddy merging when one of the buddies'
> migrate types, but not both, is MIGRATE_ISOLATE. In fact, I have
> no idea why the previous fix (the more correct fix) doesn't work for you.
> (It works for me.) But maybe there is a bug in that fix,
> so I made a new one in a more general form. Please test it.

 Hi,
 Hanjun Guo has gone to Thailand on business, so I helped him run this
 patch. The result shows that the "CmaFree:" count is OK now, but it
 sometimes printed some information as below:

 alloc_contig_range: [28500, 28600) PFNs busy
 alloc_contig_range: [28300, 28380) PFNs busy

>>>
>>> Those messages aren't necessarily a problem. Those messages indicate that
>> OK.
>>
>>> those pages weren't able to be isolated. Given the test here is a
>>> concurrency test, I suspect some concurrent allocation or free prevented
>>> isolation which is to be expected some times. I'd only be concerned if
>>> seeing those messages cause allocation failure or some 

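The CmaTotal/CmaFree sanity check driving this thread can be reproduced with a small parser over /proc/meminfo. A sketch, run here against a sample string (with the broken counters quoted above) so it works anywhere:

```python
# Parse the CmaTotal/CmaFree counters from /proc/meminfo text and flag
# the impossible state reported in this thread (CmaFree > CmaTotal).
def cma_counters(meminfo_text):
    vals = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("CmaTotal", "CmaFree"):
            vals[key] = int(rest.split()[0])  # value is in kB
    return vals

sample = "CmaTotal:     204800 kB\nCmaFree:     6602584 kB\n"
vals = cma_counters(sample)
assert vals == {"CmaTotal": 204800, "CmaFree": 6602584}
assert vals["CmaFree"] > vals["CmaTotal"]  # the accounting bug's signature

# On a live system:
#   cma_counters(open("/proc/meminfo").read())
```

Watching these two counters across a cma_alloc/cma_release stress loop is essentially what the `cat /proc/meminfo | grep Cma` checks earlier in the thread do by hand.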
Re: Suspicious error for CMA stress test

2016-03-11 Thread Joonsoo Kim
2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown) :
>
>
> On 2016/3/8 9:54, Leizhen (ThunderTown) wrote:
>>
>>
>> On 2016/3/8 2:42, Laura Abbott wrote:
>>> On 03/07/2016 12:16 AM, Leizhen (ThunderTown) wrote:


 On 2016/3/7 12:34, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
>> On 2016/3/4 14:38, Joonsoo Kim wrote:
>>> On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
 On 2016/3/4 12:32, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> (cc -mm and Joonsoo Kim)
>
>
> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>> Hi,
>>
>> I came across a suspicious error for CMA stress test:
>>
>> Before the test, I got:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree:  195044 kB
>>
>>
>> After running the test:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree: 6602584 kB
>>
>> So the freed CMA memory is more than total..
>>
>> Also, the MemFree is more than MemTotal:
>>
>> -bash-4.3# cat /proc/meminfo
>> MemTotal:   16342016 kB
>> MemFree:22367268 kB
>> MemAvailable:   22370528 kB
>>> [...]
> I played with this a bit and can see the same problem. The sanity
> check of CmaFree < CmaTotal generally triggers in
> __move_zone_freepage_state in unset_migratetype_isolate.
> This also seems to be present as far back as v4.0 which was the
> first version to have the updated accounting from Joonsoo.
> Were there known limitations with the new freepage accounting,
> Joonsoo?
 I don't know. I also played with this and looks like there is
 accounting problem, however, for my case, number of free page is 
 slightly less
 than total. I will take a look.

 Hanjun, could you tell me your malloc_size? I tested with 1 and it 
 doesn't
 look like your case.
>>> I tested with malloc_size with 2M, and it grows much bigger than 
>>> 1M, also I
>>> did some other test:
>> Thanks! Now I can reproduce the erroneous situation you mentioned.
>>
>>>   - run with single thread with 10 times, everything is fine.
>>>
>>>   - I hack the cma_alloc() and free as below [1] to see if it's a 
>>> lock issue, with
>>> the same test with 100 multi-thread, then I got:
>> [1] would not be sufficient to close this race.
>>
>> Try following things [A]. And, for more accurate test, I changed 
>> code a bit more
>> to prevent kernel page allocation from cma area [B]. This will 
>> prevent kernel
>> page allocation from cma area completely so we can focus 
>> cma_alloc/release race.
>>
>> Although, this is not correct fix, it could help that we can guess
>> where the problem is.
> More correct fix is something like below.
> Please test it.
 Hmm, this is not working:
>>> Sad to hear that.
>>>
>>> Could you tell me your system's MAX_ORDER and pageblock_order?
>>>
>>
>> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
>
> Hmm... that's same with me.
>
> Below is similar fix that prevents buddy merging when one of buddy's
> migrate type, but, not both, is MIGRATE_ISOLATE. In fact, I have
> no idea why previous fix (more correct fix) doesn't work for you.
> (It works for me.) But, maybe there is a bug on the fix
> so I make new one which is more general form. Please test it.

 Hi,
 Hanjun Guo has gone to Thailand on business, so I am helping him run this
 patch. The result shows that the count of "CmaFree:" is OK now, but it
 sometimes printed some information as below:

 alloc_contig_range: [28500, 28600) PFNs busy
 alloc_contig_range: [28300, 28380) PFNs busy

>>>
>>> Those messages aren't necessarily a problem. Those messages indicate that
>> OK.
>>
>>> those pages weren't able to be isolated. Given the test here is a
>>> concurrency test, I suspect some concurrent allocation or free prevented
>>> isolation, which is to be expected sometimes. I'd only be concerned if
>>> seeing those messages cause allocation failure or some other notable impact.
>> I chose memory block size: 

Re: Suspicious error for CMA stress test

2016-03-08 Thread Xishi Qiu
On 2016/3/8 23:36, Joonsoo Kim wrote:

> 2016-03-08 19:45 GMT+09:00 Xishi Qiu :
>> On 2016/3/8 15:48, Joonsoo Kim wrote:
>>
>>> On Mon, Mar 07, 2016 at 01:59:12PM +0100, Vlastimil Babka wrote:
 On 03/07/2016 05:34 AM, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
>>> Sad to hear that.
>>>
>>> Could you tell me your system's MAX_ORDER and pageblock_order?
>>>
>>
>> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!

 I thought that CMA regions/operations (and isolation IIRC?) were
 supposed to be MAX_ORDER aligned exactly to prevent needing these
 extra checks for buddy merging. So what's wrong?
>>>
>>> CMA isolates MAX_ORDER aligned blocks, but, during the process,
>>> partially isolated block exists. If MAX_ORDER is 11 and
>>> pageblock_order is 9, two pageblocks make up MAX_ORDER
>>> aligned block and I can think following scenario because pageblock
>>> (un)isolation would be done one by one.
>>>
>>> (each character means one pageblock. 'C', 'I' means MIGRATE_CMA,
>>> MIGRATE_ISOLATE, respectively.)
>>>
>>
>> Hi Joonsoo,
>>
>>> CC -> IC -> II (Isolation)
>>
>>> II -> CI -> CC (Un-isolation)
>>>
>>> If some pages are freed at this intermediate state such as IC or CI,
>>> that page could be merged to the other page that is resident on
>>> different type of pageblock and it will cause wrong freepage count.
>>>
>>
>> Isolation happens when doing cma_alloc, so consider the two following sequences.
>>
>> C(free)C(used) -> start_isolate_page_range -> I(free)C(used) -> 
>> I(free)I(someone free it) -> undo_isolate_page_range -> C(free)C(free)
>> so free cma is 2M -> 0M -> 0M -> 4M, the increased 2M was freed by someone.
> 
> Your example is a correct one, but think about the following one.
> C(free)C(used) -> start_isolate_page_range -> I(free)C(used) ->
> I(free)**C**(someone free it) -> undo_isolate_page_range ->
> C(free)C(free)
> 
> it would be 2M -> 0M -> 2M -> 6M.
> When we do I(free)C(someone free it), CMA freepage is added
> because the page is on a CMA pageblock. But bad merging happens: a
> 4M buddy is made and placed on the isolate buddy list.
> Later, when we do undo_isolation, this 4M buddy is moved to the
> CMA buddy list and 4M is added to the CMA freepage counter, so the
> total is 6M.
> 

Hi Joonsoo,

I know the cause of the problem now, thank you very much.

> Thanks.
> 
> .
> 





Re: Suspicious error for CMA stress test

2016-03-08 Thread Leizhen (ThunderTown)


On 2016/3/8 9:54, Leizhen (ThunderTown) wrote:
> 
> 
> On 2016/3/8 2:42, Laura Abbott wrote:
>> On 03/07/2016 12:16 AM, Leizhen (ThunderTown) wrote:
>>>
>>>
>>> On 2016/3/7 12:34, Joonsoo Kim wrote:
 On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
> On 2016/3/4 14:38, Joonsoo Kim wrote:
>> On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
>>> On 2016/3/4 12:32, Joonsoo Kim wrote:
 On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>> On 2016/3/3 15:42, Joonsoo Kim wrote:
>>> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
 (cc -mm and Joonsoo Kim)


 On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> Hi,
>
> I came across a suspicious error for CMA stress test:
>
> Before the test, I got:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree:  195044 kB
>
>
> After running the test:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree: 6602584 kB
>
> So the freed CMA memory is more than total..
>
> Also, the MemFree is more than MemTotal:
>
> -bash-4.3# cat /proc/meminfo
> MemTotal:   16342016 kB
> MemFree:22367268 kB
> MemAvailable:   22370528 kB
>> [...]
 I played with this a bit and can see the same problem. The sanity
 check of CmaFree < CmaTotal generally triggers in
 __move_zone_freepage_state in unset_migratetype_isolate.
 This also seems to be present as far back as v4.0 which was the
 first version to have the updated accounting from Joonsoo.
 Were there known limitations with the new freepage accounting,
 Joonsoo?
>>> I don't know. I also played with this and it looks like there is an
>>> accounting problem; however, for my case, the number of free pages is
>>> slightly less than the total. I will take a look.
>>>
>>> Hanjun, could you tell me your malloc_size? I tested with 1 and it 
>>> doesn't
>>> look like your case.
>> I tested with malloc_size with 2M, and it grows much bigger than 1M, 
>> also I
>> did some other test:
> Thanks! Now I can reproduce the erroneous situation you mentioned.
>
>>   - run with single thread with 10 times, everything is fine.
>>
>>   - I hack the cma_alloc() and free as below [1] to see if it's a lock 
>> issue, with
>> the same test with 100 multi-thread, then I got:
> [1] would not be sufficient to close this race.
>
> Try following things [A]. And, for more accurate test, I changed code 
> a bit more
> to prevent kernel page allocation from cma area [B]. This will 
> prevent kernel
> page allocation from cma area completely so we can focus 
> cma_alloc/release race.
>
> Although, this is not correct fix, it could help that we can guess
> where the problem is.
 More correct fix is something like below.
 Please test it.
>>> Hmm, this is not working:
>> Sad to hear that.
>>
>> Could you tell me your system's MAX_ORDER and pageblock_order?
>>
>
> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!

 Hmm... that's same with me.

 Below is similar fix that prevents buddy merging when one of buddy's
 migrate type, but, not both, is MIGRATE_ISOLATE. In fact, I have
 no idea why previous fix (more correct fix) doesn't work for you.
 (It works for me.) But, maybe there is a bug on the fix
 so I make new one which is more general form. Please test it.
>>>
>>> Hi,
>>> Hanjun Guo has gone to Thailand on business, so I am helping him run this
>>> patch. The result shows that the count of "CmaFree:" is OK now, but it
>>> sometimes printed some information as below:
>>>
>>> alloc_contig_range: [28500, 28600) PFNs busy
>>> alloc_contig_range: [28300, 28380) PFNs busy
>>>
>>
>> Those messages aren't necessarily a problem. Those messages indicate that
> OK.
> 
>> those pages weren't able to be isolated. Given the test here is a
>> concurrency test, I suspect some concurrent allocation or free prevented
>> isolation, which is to be expected sometimes. I'd only be concerned if
>> seeing those messages cause allocation failure or some other notable impact.
> I chose memory block sizes 512K, 1M, 2M and ran several times; there was no
> memory allocation failure.

Hi, Joonsoo:
This new patch worked well. Do you plan to upstream it in the near 

Re: Suspicious error for CMA stress test

2016-03-08 Thread Joonsoo Kim
2016-03-08 19:45 GMT+09:00 Xishi Qiu :
> On 2016/3/8 15:48, Joonsoo Kim wrote:
>
>> On Mon, Mar 07, 2016 at 01:59:12PM +0100, Vlastimil Babka wrote:
>>> On 03/07/2016 05:34 AM, Joonsoo Kim wrote:
 On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
>> Sad to hear that.
>>
>> Could you tell me your system's MAX_ORDER and pageblock_order?
>>
>
> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
>>>
>>> I thought that CMA regions/operations (and isolation IIRC?) were
>>> supposed to be MAX_ORDER aligned exactly to prevent needing these
>>> extra checks for buddy merging. So what's wrong?
>>
>> CMA isolates MAX_ORDER aligned blocks, but, during the process,
>> partially isolated block exists. If MAX_ORDER is 11 and
>> pageblock_order is 9, two pageblocks make up MAX_ORDER
>> aligned block and I can think following scenario because pageblock
>> (un)isolation would be done one by one.
>>
>> (each character means one pageblock. 'C', 'I' means MIGRATE_CMA,
>> MIGRATE_ISOLATE, respectively.)
>>
>
> Hi Joonsoo,
>
>> CC -> IC -> II (Isolation)
>
>> II -> CI -> CC (Un-isolation)
>>
>> If some pages are freed at this intermediate state such as IC or CI,
>> that page could be merged to the other page that is resident on
>> different type of pageblock and it will cause wrong freepage count.
>>
>
> Isolation happens when doing cma_alloc, so consider the two following sequences.
>
> C(free)C(used) -> start_isolate_page_range -> I(free)C(used) -> 
> I(free)I(someone free it) -> undo_isolate_page_range -> C(free)C(free)
> so free cma is 2M -> 0M -> 0M -> 4M, the increased 2M was freed by someone.

Your example is a correct one, but think about the following one.
C(free)C(used) -> start_isolate_page_range -> I(free)C(used) ->
I(free)**C**(someone free it) -> undo_isolate_page_range ->
C(free)C(free)

it would be 2M -> 0M -> 2M -> 6M.
When we do I(free)C(someone free it), CMA freepage is added
because the page is on a CMA pageblock. But bad merging happens: a
4M buddy is made and placed on the isolate buddy list.
Later, when we do undo_isolation, this 4M buddy is moved to the
CMA buddy list and 4M is added to the CMA freepage counter, so the
total is 6M.

Thanks.
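The 2M -> 0M -> 2M -> 6M sequence above can be traced with a minimal counter model. This is an illustrative Python sketch, not kernel code; the step comments name the kernel paths involved, but the variables and numbers are only a model of the accounting.

```python
# Minimal counter model of the bad sequence above (illustrative only,
# not kernel code). Units are MB; two adjacent 2M pageblocks form one
# 4M MAX_ORDER block. We track only the CmaFree-style counter the way
# the zone freepage accounting would.

cma_free = 2        # block0: 2M free CMA page (counted), block1: allocated

# start_isolate_page_range: block0 becomes MIGRATE_ISOLATE and its free
# pages are subtracted from the CMA counter.
cma_free -= 2       # 2M -> 0M

# Someone frees block1 while its pageblock is still MIGRATE_CMA, so the
# counter is credited for a CMA pageblock...
cma_free += 2       # 0M -> 2M
# ...but the freed page merges with the isolated free buddy in block0,
# producing a single 4M buddy that sits on the isolate free list.
isolate_buddy_mb = 4

# undo_isolate_page_range: everything on the isolate list is moved to
# the CMA list and credited to the counter again.
cma_free += isolate_buddy_mb    # 2M -> 6M

print(cma_free)     # 6, even though only 4M of CMA memory exists
```

The double credit for the absorbed 2M page is exactly why CmaFree can end up larger than CmaTotal in the stress test.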


Re: Suspicious error for CMA stress test

2016-03-08 Thread Xishi Qiu
On 2016/3/8 15:48, Joonsoo Kim wrote:

> On Mon, Mar 07, 2016 at 01:59:12PM +0100, Vlastimil Babka wrote:
>> On 03/07/2016 05:34 AM, Joonsoo Kim wrote:
>>> On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
> Sad to hear that.
>
> Could you tell me your system's MAX_ORDER and pageblock_order?
>

 MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
>>
>> I thought that CMA regions/operations (and isolation IIRC?) were
>> supposed to be MAX_ORDER aligned exactly to prevent needing these
>> extra checks for buddy merging. So what's wrong?
> 
> CMA isolates MAX_ORDER aligned blocks, but, during the process,
> partially isolated block exists. If MAX_ORDER is 11 and
> pageblock_order is 9, two pageblocks make up MAX_ORDER
> aligned block and I can think following scenario because pageblock
> (un)isolation would be done one by one.
> 
> (each character means one pageblock. 'C', 'I' means MIGRATE_CMA,
> MIGRATE_ISOLATE, respectively.)
> 

Hi Joonsoo,

> CC -> IC -> II (Isolation)

> II -> CI -> CC (Un-isolation)
> 
> If some pages are freed at this intermediate state such as IC or CI,
> that page could be merged to the other page that is resident on
> different type of pageblock and it will cause wrong freepage count.
> 

Isolation happens when doing cma_alloc, so consider the two following sequences.

C(free)C(used) -> start_isolate_page_range -> I(free)C(used) -> 
I(free)I(someone free it) -> undo_isolate_page_range -> C(free)C(free)
so free cma is 2M -> 0M -> 0M -> 4M, the increased 2M was freed by someone.
C(used)C(free) -> start_isolate_page_range -> C(used)I(free) -> C(someone free 
it)C(free) -> undo_isolate_page_range -> C(free)C(free)
so free cma is 2M -> 0M -> 4M -> 4M, the increased 2M was freed by someone.

so these two cases are no problem, right?

Thanks,
Xishi Qiu

> If we don't release zone lock during whole isolation process, there
> would be no problem and CMA can use that implementation. But,
> isolation is used by another feature and I guess it cannot use that
> kind of implementation.
> 
> Thanks.
> 
> 
> .
> 





Re: Suspicious error for CMA stress test

2016-03-08 Thread Joonsoo Kim
On Tue, Mar 08, 2016 at 09:42:00AM +0800, Xishi Qiu wrote:
> On 2016/3/4 13:33, Hanjun Guo wrote:
> 
> > Hi Joonsoo,
> > 
> > On 2016/3/4 10:02, Joonsoo Kim wrote:
> >> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> >>> On 2016/3/3 15:42, Joonsoo Kim wrote:
>  2016-03-03 10:25 GMT+09:00 Laura Abbott :
> > (cc -mm and Joonsoo Kim)
> >
> >
> > On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> >> Hi,
> >>
> >> I came across a suspicious error for CMA stress test:
> >>
> >> Before the test, I got:
> >> -bash-4.3# cat /proc/meminfo | grep Cma
> >> CmaTotal: 204800 kB
> >> CmaFree:  195044 kB
> >>
> >>
> >> After running the test:
> >> -bash-4.3# cat /proc/meminfo | grep Cma
> >> CmaTotal: 204800 kB
> >> CmaFree: 6602584 kB
> >>
> >> So the freed CMA memory is more than total..
> >>
> >> Also, the MemFree is more than MemTotal:
> >>
> >> -bash-4.3# cat /proc/meminfo
> >> MemTotal:   16342016 kB
> >> MemFree:22367268 kB
> >> MemAvailable:   22370528 kB
> >>> [...]
> > I played with this a bit and can see the same problem. The sanity
> > check of CmaFree < CmaTotal generally triggers in
> > __move_zone_freepage_state in unset_migratetype_isolate.
> > This also seems to be present as far back as v4.0 which was the
> > first version to have the updated accounting from Joonsoo.
> > Were there known limitations with the new freepage accounting,
> > Joonsoo?
>  I don't know. I also played with this and looks like there is
>  accounting problem, however, for my case, number of free page is 
>  slightly less
>  than total. I will take a look.
> 
>  Hanjun, could you tell me your malloc_size? I tested with 1 and it 
>  doesn't
>  look like your case.
> >>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also 
> >>> I
> >>> did some other test:
> >> Thanks! Now I can reproduce the erroneous situation you mentioned.
> >>
> >>>  - run with single thread with 10 times, everything is fine.
> >>>
> >>>  - I hack the cma_alloc() and free as below [1] to see if it's a lock 
> >>> issue, with
> >>>the same test with 100 multi-thread, then I got:
> >> [1] would not be sufficient to close this race.
> >>
> >> Try following things [A]. And, for more accurate test, I changed code a 
> >> bit more
> >> to prevent kernel page allocation from cma area [B]. This will prevent 
> >> kernel
> >> page allocation from cma area completely so we can focus cma_alloc/release 
> >> race.
> >>
> >> Although, this is not correct fix, it could help that we can guess
> >> where the problem is.
> >>
> >> Thanks.
> >>
> >> [A]
> > 
> > I tested this solution [A], it can fix the problem, as you are posting a 
> > new patch, I will
> > test that one and leave [B] alone :)
> > 
> 
> Hi Joonsoo,
> 
> How does this problem happen? Why the count is larger than total?
> 
> Patch A prevent the cma page free to pcp, right?
> 
> ...
> -   if (unlikely(is_migrate_isolate(migratetype))) {
> +   if (is_migrate_cma(migratetype) ||
> +   unlikely(is_migrate_isolate(migratetype))) {
> ...
> > .
> > 

Even without freeing to the pcp lists, bad merging could happen. Please see the
other thread where I mentioned some examples.

Thanks.
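For context, the [A] hunk quoted above adds is_migrate_cma() to the isolate check in the free fast path, so freed CMA pages, like isolated ones, skip the per-cpu (pcp) lists and go straight to the buddy lists under the zone lock. A rough sketch of that decision follows; this is illustrative Python, not kernel code, and the function name is made up.

```python
# Schematic of the free-path decision hunk [A] changes (illustrative
# only, not kernel code). Freed order-0 pages normally land on a
# per-cpu (pcp) list first; isolated pages, and with [A] also CMA
# pages, bypass the pcp list and are freed straight to the buddy
# lists under the zone lock.

def freed_page_destination(migratetype, with_patch_a):
    """Return which free list a freed order-0 page goes to."""
    bypass_pcp = {"MIGRATE_ISOLATE"}
    if with_patch_a:
        bypass_pcp.add("MIGRATE_CMA")   # the [A] change
    return "buddy" if migratetype in bypass_pcp else "pcp"

assert freed_page_destination("MIGRATE_CMA", with_patch_a=False) == "pcp"
assert freed_page_destination("MIGRATE_CMA", with_patch_a=True) == "buddy"
assert freed_page_destination("MIGRATE_ISOLATE", with_patch_a=False) == "buddy"
```

As Joonsoo notes above, this narrows the race window but does not prevent the bad cross-pageblock merge itself.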




Re: Suspicious error for CMA stress test

2016-03-07 Thread Joonsoo Kim
On Mon, Mar 07, 2016 at 01:59:12PM +0100, Vlastimil Babka wrote:
> On 03/07/2016 05:34 AM, Joonsoo Kim wrote:
> >On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
> >>>Sad to hear that.
> >>>
> >>>Could you tell me your system's MAX_ORDER and pageblock_order?
> >>>
> >>
> >>MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
> 
> I thought that CMA regions/operations (and isolation IIRC?) were
> supposed to be MAX_ORDER aligned exactly to prevent needing these
> extra checks for buddy merging. So what's wrong?

CMA isolates MAX_ORDER aligned blocks, but, during the process, a
partially isolated block exists. If MAX_ORDER is 11 and
pageblock_order is 9, two pageblocks make up one MAX_ORDER
aligned block, and I can think of the following scenario because
pageblock (un)isolation is done one pageblock at a time.

(Each character represents one pageblock; 'C' and 'I' mean MIGRATE_CMA
and MIGRATE_ISOLATE, respectively.)

CC -> IC -> II (Isolation)
II -> CI -> CC (Un-isolation)

If some pages are freed in an intermediate state such as IC or CI,
such a page could be merged with another page resident on a different
type of pageblock, and that will cause a wrong freepage count.

If we didn't release the zone lock during the whole isolation process,
there would be no problem and CMA could use that implementation. But
isolation is used by other features as well, and I guess they cannot
use that kind of implementation.
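The resulting miscount can be illustrated with a toy model, assuming
pageblock_order = 9 so each pageblock holds 512 pages (the helper name is
hypothetical; this is a standalone sketch, not kernel code): in state IC, an
order-9 free page in the CMA half that is allowed to merge across the
pageblock boundary gets its whole order-10 result credited as CMA free pages,
even though half of it lies in the isolated pageblock.

```c
#include <assert.h>

#define PAGEBLOCK_ORDER 9
#define PB_PAGES (1 << PAGEBLOCK_ORDER)   /* 512 pages per pageblock */

/*
 * Hypothetical model of one accounting step in the "IC" state: an
 * order-9 page freed in the CMA pageblock either merges with its buddy
 * in the neighboring MIGRATE_ISOLATE pageblock or it doesn't. If the
 * cross-pageblock merge is allowed, the whole order-10 block is
 * credited to the CMA freepage counter, double-counting the isolated
 * half.
 */
static long cma_free_credit(int allow_cross_merge)
{
    if (allow_cross_merge)
        return 2 * PB_PAGES;   /* whole merged order-10 block credited */
    return PB_PAGES;           /* only the CMA half credited */
}
```

The extra PB_PAGES credited per bad merge is how CmaFree can drift past
CmaTotal over repeated alloc/free cycles.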

Thanks.






Re: Suspicious error for CMA stress test

2016-03-07 Thread Hanjun Guo

On 03/07/2016 04:16 PM, Leizhen (ThunderTown) wrote:



On 2016/3/7 12:34, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:

On 2016/3/4 14:38, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:

On 2016/3/4 12:32, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:

On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:

On 2016/3/3 15:42, Joonsoo Kim wrote:

2016-03-03 10:25 GMT+09:00 Laura Abbott :

(cc -mm and Joonsoo Kim)


On 03/02/2016 05:52 AM, Hanjun Guo wrote:

Hi,

I came across a suspicious error for CMA stress test:

Before the test, I got:
-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal: 204800 kB
CmaFree:  195044 kB


After running the test:
-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal: 204800 kB
CmaFree: 6602584 kB

So the freed CMA memory is more than total..

Also the MemFree is more than the mem total:

-bash-4.3# cat /proc/meminfo
MemTotal:   16342016 kB
MemFree:22367268 kB
MemAvailable:   22370528 kB

[...]

I played with this a bit and can see the same problem. The sanity
check of CmaFree < CmaTotal generally triggers in
__move_zone_freepage_state in unset_migratetype_isolate.
This also seems to be present as far back as v4.0 which was the
first version to have the updated accounting from Joonsoo.
Were there known limitations with the new freepage accounting,
Joonsoo?

I don't know. I also played with this, and it looks like there is an
accounting problem; however, for my case, the number of free pages is slightly less
than the total. I will take a look.

Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
look like your case.

I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
did some other test:

Thanks! Now, I can re-generate the erroneous situation you mentioned.


  - run with single thread with 10 times, everything is fine.

  - I hack the cma_alloc() and free as below [1] to see if it's a lock issue, with
the same test with 100 multi-thread, then I got:

[1] would not be sufficient to close this race.

Try the following things [A]. And, for a more accurate test, I changed the code a bit more
to prevent kernel page allocation from the cma area [B]. This will prevent kernel
page allocation from the cma area completely, so we can focus on the cma_alloc/release race.

Although this is not the correct fix, it could help us guess
where the problem is.

More correct fix is something like below.
Please test it.

Hmm, this is not working:

Sad to hear that.

Could you tell me your system's MAX_ORDER and pageblock_order?



MAX_ORDER is 11, pageblock_order is 9, thanks for your help!


Hmm... that's the same as for me.

Below is a similar fix that prevents buddy merging when one of the
buddies' migratetypes, but not both, is MIGRATE_ISOLATE. In fact, I have
no idea why the previous fix (the more correct fix) doesn't work for you.
(It works for me.) But maybe there is a bug in that fix,
so I made a new one in a more general form. Please test it.


Hi,
Hanjun Guo has gone to Thailand on business, so I helped him run this
patch. The result
shows that the count of "CmaFree:" is OK now.


Thanks Leizhen :)


But sometimes it printed some information as below:

alloc_contig_range: [28500, 28600) PFNs busy
alloc_contig_range: [28300, 28380) PFNs busy


I think it's not a problem for the stress test; it's likely just that
the lock had not been released yet.

Thanks
Hanjun




Re: Suspicious error for CMA stress test

2016-03-07 Thread Leizhen (ThunderTown)


On 2016/3/8 2:42, Laura Abbott wrote:
> On 03/07/2016 12:16 AM, Leizhen (ThunderTown) wrote:
>>
>>
>> On 2016/3/7 12:34, Joonsoo Kim wrote:
>>> On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
 On 2016/3/4 14:38, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
>> On 2016/3/4 12:32, Joonsoo Kim wrote:
>>> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
 On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> On 2016/3/3 15:42, Joonsoo Kim wrote:
>> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
>>> (cc -mm and Joonsoo Kim)
>>>
>>>
>>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
 Hi,

 I came across a suspicious error for CMA stress test:

 Before the test, I got:
 -bash-4.3# cat /proc/meminfo | grep Cma
 CmaTotal: 204800 kB
 CmaFree:  195044 kB


 After running the test:
 -bash-4.3# cat /proc/meminfo | grep Cma
 CmaTotal: 204800 kB
 CmaFree: 6602584 kB

 So the freed CMA memory is more than total..

Also the the MemFree is more than the mem total:

 -bash-4.3# cat /proc/meminfo
 MemTotal:   16342016 kB
 MemFree:22367268 kB
 MemAvailable:   22370528 kB
> [...]
>>> I played with this a bit and can see the same problem. The sanity
>>> check of CmaFree < CmaTotal generally triggers in
>>> __move_zone_freepage_state in unset_migratetype_isolate.
>>> This also seems to be present as far back as v4.0 which was the
>>> first version to have the updated accounting from Joonsoo.
>>> Were there known limitations with the new freepage accounting,
>>> Joonsoo?
>> I don't know. I also played with this and looks like there is
>> accounting problem, however, for my case, number of free page is 
>> slightly less
>> than total. I will take a look.
>>
>> Hanjun, could you tell me your malloc_size? I tested with 1 and it 
>> doesn't
>> look like your case.
> I tested with malloc_size with 2M, and it grows much bigger than 1M, 
> also I
> did some other test:
 Thanks! Now, I can re-generate the erroneous situation you mentioned.

>   - run with single thread with 10 times, everything is fine.
>
>   - I hack the cma_alloc() and free as below [1] to see if it's a lock 
> issue, with
> the same test with 100 multi-thread, then I got:
 [1] would not be sufficient to close this race.

 Try following things [A]. And, for more accurate test, I changed code 
 a bit more
 to prevent kernel page allocation from cma area [B]. This will prevent 
 kernel
 page allocation from cma area completely so we can focus 
 cma_alloc/release race.

 Although, this is not correct fix, it could help that we can guess
 where the problem is.
>>> More correct fix is something like below.
>>> Please test it.
>> Hmm, this is not working:
> Sad to hear that.
>
> Could you tell me your system's MAX_ORDER and pageblock_order?
>

 MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
>>>
>>> Hmm... that's the same as for me.
>>>
>>> Below is a similar fix that prevents buddy merging when one of the
>>> buddies' migratetypes, but not both, is MIGRATE_ISOLATE. In fact, I have
>>> no idea why the previous fix (the more correct fix) doesn't work for you.
>>> (It works for me.) But maybe there is a bug in that fix,
>>> so I made a new one in a more general form. Please test it.
>>
>> Hi,
>> Hanjun Guo has gone to Thailand on business, so I helped him run this
>> patch. The result
>> shows that the count of "CmaFree:" is OK now. But sometimes it printed some
>> information as below:
>>
>> alloc_contig_range: [28500, 28600) PFNs busy
>> alloc_contig_range: [28300, 28380) PFNs busy
>>
> 
> Those messages aren't necessarily a problem. Those messages indicate that
OK.

> those pages weren't able to be isolated. Given the test here is a
> concurrency test, I suspect some concurrent allocation or free prevented
> isolation, which is to be expected sometimes. I'd only be concerned if
> those messages came with allocation failures or some other notable impact.
I chose memory block sizes of 512K, 1M, and 2M, ran several times, and there was
no memory allocation failure.

> 
> Thanks,
> Laura
>  
>>>
>>> Thanks.
>>>
>>> -->8-
>>> >From dd41e348572948d70b935fc24f82c096ff0fb417 Mon Sep 17 00:00:00 2001
>>> From: Joonsoo Kim 
>>> Date: Fri, 4 Mar 2016 13:28:17 +0900
>>> Subject: [PATCH] mm/cma: fix race


Re: Suspicious error for CMA stress test

2016-03-07 Thread Xishi Qiu
On 2016/3/4 13:33, Hanjun Guo wrote:

> Hi Joonsoo,
> 
> On 2016/3/4 10:02, Joonsoo Kim wrote:
>> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> (cc -mm and Joonsoo Kim)
>
>
> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>> Hi,
>>
>> I came across a suspicious error for CMA stress test:
>>
>> Before the test, I got:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree:  195044 kB
>>
>>
>> After running the test:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree: 6602584 kB
>>
>> So the freed CMA memory is more than total..
>>
>> Also the MemFree is more than the mem total:
>>
>> -bash-4.3# cat /proc/meminfo
>> MemTotal:   16342016 kB
>> MemFree:22367268 kB
>> MemAvailable:   22370528 kB
>>> [...]
> I played with this a bit and can see the same problem. The sanity
> check of CmaFree < CmaTotal generally triggers in
> __move_zone_freepage_state in unset_migratetype_isolate.
> This also seems to be present as far back as v4.0 which was the
> first version to have the updated accounting from Joonsoo.
> Were there known limitations with the new freepage accounting,
> Joonsoo?
 I don't know. I also played with this and looks like there is
 accounting problem, however, for my case, number of free page is slightly 
 less
 than total. I will take a look.

 Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
 look like your case.
>>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
>>> did some other test:
>> Thanks! Now, I can re-generate the erroneous situation you mentioned.
>>
>>>  - run with single thread with 10 times, everything is fine.
>>>
>>>  - I hack the cma_alloc() and free as below [1] to see if it's a lock issue, 
>>> with
>>>the same test with 100 multi-thread, then I got:
>> [1] would not be sufficient to close this race.
>>
>> Try following things [A]. And, for more accurate test, I changed code a bit 
>> more
>> to prevent kernel page allocation from cma area [B]. This will prevent kernel
>> page allocation from cma area completely so we can focus cma_alloc/release 
>> race.
>>
>> Although, this is not correct fix, it could help that we can guess
>> where the problem is.
>>
>> Thanks.
>>
>> [A]
> 
> I tested this solution [A], it can fix the problem, as you are posting a new 
> patch, I will
> test that one and leave [B] alone :)
> 

Hi Joonsoo,

How does this problem happen? Why is the count larger than the total?

Patch A prevents the cma pages from being freed to the pcp lists, right?

...
-   if (unlikely(is_migrate_isolate(migratetype))) {
+   if (is_migrate_cma(migratetype) ||
+   unlikely(is_migrate_isolate(migratetype))) {
...
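The quoted hunk makes CMA pages take the path that only MIGRATE_ISOLATE pages
took before: bypass the per-cpu page (pcp) lists and go straight to the buddy
freelists under the zone lock. A minimal sketch of that routing decision
(hypothetical helper names; not the kernel's actual free path):

```c
#include <assert.h>
#include <stdbool.h>

enum migratetype { MIGRATE_MOVABLE, MIGRATE_CMA, MIGRATE_ISOLATE };

/*
 * Model of the routing decision the quoted hunk changes: return true
 * when a freed page must bypass the pcp lists and be freed directly to
 * the buddy allocator under the zone lock.
 */
static bool bypass_pcp(enum migratetype mt, bool patch_a)
{
    if (mt == MIGRATE_ISOLATE)
        return true;               /* isolated pages never sit on pcp */
    if (patch_a && mt == MIGRATE_CMA)
        return true;               /* patch [A]: CMA pages bypass pcp too */
    return false;
}
```

As Joonsoo replies elsewhere in the thread, this alone does not close the race:
bad merging can still happen even without freeing to the pcp lists.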

Thanks,
Xishi Qiu

> Thanks
> Hanjun
> 
> 
> 
> .
> 






Re: Suspicious error for CMA stress test

2016-03-07 Thread Laura Abbott

On 03/07/2016 12:16 AM, Leizhen (ThunderTown) wrote:



On 2016/3/7 12:34, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:

On 2016/3/4 14:38, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:

On 2016/3/4 12:32, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:

On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:

On 2016/3/3 15:42, Joonsoo Kim wrote:

2016-03-03 10:25 GMT+09:00 Laura Abbott :

(cc -mm and Joonsoo Kim)


On 03/02/2016 05:52 AM, Hanjun Guo wrote:

Hi,

I came across a suspicious error for CMA stress test:

Before the test, I got:
-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal: 204800 kB
CmaFree:  195044 kB


After running the test:
-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal: 204800 kB
CmaFree: 6602584 kB

So the freed CMA memory is more than total..

Also the MemFree is more than the mem total:

-bash-4.3# cat /proc/meminfo
MemTotal:   16342016 kB
MemFree:22367268 kB
MemAvailable:   22370528 kB

[...]

I played with this a bit and can see the same problem. The sanity
check of CmaFree < CmaTotal generally triggers in
__move_zone_freepage_state in unset_migratetype_isolate.
This also seems to be present as far back as v4.0 which was the
first version to have the updated accounting from Joonsoo.
Were there known limitations with the new freepage accounting,
Joonsoo?

I don't know. I also played with this, and it looks like there is an
accounting problem; however, for my case, the number of free pages is slightly less
than the total. I will take a look.

Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
look like your case.

I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
did some other test:

Thanks! Now, I can re-generate the erroneous situation you mentioned.


  - run with single thread with 10 times, everything is fine.

  - I hack the cma_alloc() and free as below [1] to see if it's a lock issue, with
the same test with 100 multi-thread, then I got:

[1] would not be sufficient to close this race.

Try the following things [A]. And, for a more accurate test, I changed the code a bit more
to prevent kernel page allocation from the cma area [B]. This will prevent kernel
page allocation from the cma area completely, so we can focus on the cma_alloc/release race.

Although this is not the correct fix, it could help us guess
where the problem is.

More correct fix is something like below.
Please test it.

Hmm, this is not working:

Sad to hear that.

Could you tell me your system's MAX_ORDER and pageblock_order?



MAX_ORDER is 11, pageblock_order is 9, thanks for your help!


Hmm... that's the same as for me.

Below is a similar fix that prevents buddy merging when one of the
buddies' migratetypes, but not both, is MIGRATE_ISOLATE. In fact, I have
no idea why the previous fix (the more correct fix) doesn't work for you.
(It works for me.) But maybe there is a bug in that fix,
so I made a new one in a more general form. Please test it.


Hi,
Hanjun Guo has gone to Thailand on business, so I helped him run this
patch. The result
shows that the count of "CmaFree:" is OK now. But sometimes it printed some
information as below:

alloc_contig_range: [28500, 28600) PFNs busy
alloc_contig_range: [28300, 28380) PFNs busy



Those messages aren't necessarily a problem. Those messages indicate that
those pages weren't able to be isolated. Given the test here is a
concurrency test, I suspect some concurrent allocation or free prevented
isolation, which is to be expected sometimes. I'd only be concerned if
those messages came with allocation failures or some other notable impact.

Thanks,
Laura
 


Thanks.

-->8-
>From dd41e348572948d70b935fc24f82c096ff0fb417 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim 
Date: Fri, 4 Mar 2016 13:28:17 +0900
Subject: [PATCH] mm/cma: fix race

Signed-off-by: Joonsoo Kim 
---
  mm/page_alloc.c | 33 +++--
  1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c6c38ed..d80d071 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -620,8 +620,8 @@ static inline void rmv_page_order(struct page *page)
   *
   * For recording page's order, we use page_private(page).
   */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-   unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+   struct page *buddy, unsigned int order)
  {
 if (!pfn_valid_within(page_to_pfn(buddy)))
 return 0;
@@ -644,6 +644,20 @@ static inline int page_is_buddy(struct page *page, struct 
page *buddy,
 if (page_zone_id(page) != page_zone_id(buddy))
 return 0;

+   if (IS_ENABLED(CONFIG_CMA) &&
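The patch body is truncated in the archive. Based on the description above
("prevents buddy merging when one of the buddies' migratetypes, but not both,
is MIGRATE_ISOLATE"), the added guard can be modeled standalone as follows (a
hypothetical sketch, not the actual kernel patch):

```c
#include <assert.h>
#include <stdbool.h>

enum migratetype { MIGRATE_MOVABLE, MIGRATE_CMA, MIGRATE_ISOLATE };

/*
 * Standalone model of the described rule: a page and its buddy may
 * merge only when both or neither of their pageblocks is
 * MIGRATE_ISOLATE.
 */
static bool migratetypes_mergeable(enum migratetype a, enum migratetype b)
{
    return (a == MIGRATE_ISOLATE) == (b == MIGRATE_ISOLATE);
}
```

In the intermediate IC state this refuses exactly the cross-pageblock merge
that corrupts the freepage accounting, while still allowing merges inside
fully isolated (II) or fully un-isolated (CC) MAX_ORDER blocks.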

Re: Suspicious error for CMA stress test

2016-03-07 Thread Laura Abbott

On 03/07/2016 12:16 AM, Leizhen (ThunderTown) wrote:



On 2016/3/7 12:34, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:

On 2016/3/4 14:38, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:

On 2016/3/4 12:32, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:

On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:

On 2016/3/3 15:42, Joonsoo Kim wrote:

2016-03-03 10:25 GMT+09:00 Laura Abbott :

(cc -mm and Joonsoo Kim)


On 03/02/2016 05:52 AM, Hanjun Guo wrote:

Hi,

I came across a suspicious error for CMA stress test:

Before the test, I got:
-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal: 204800 kB
CmaFree:  195044 kB


After running the test:
-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal: 204800 kB
CmaFree: 6602584 kB

So the freed CMA memory is more than total..

Also the the MemFree is more than mem total:

-bash-4.3# cat /proc/meminfo
MemTotal:   16342016 kB
MemFree:22367268 kB
MemAvailable:   22370528 kB

[...]

I played with this a bit and can see the same problem. The sanity
check of CmaFree < CmaTotal generally triggers in
__move_zone_freepage_state in unset_migratetype_isolate.
This also seems to be present as far back as v4.0 which was the
first version to have the updated accounting from Joonsoo.
Were there known limitations with the new freepage accounting,
Joonsoo?

I don't know. I also played with this and looks like there is
accounting problem, however, for my case, number of free page is slightly less
than total. I will take a look.

Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
look like your case.

I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
did some other test:

Thanks! Now, I can re-generate erronous situation you mentioned.


  - run with single thread with 10 times, everything is fine.

  - I hack the cam_alloc() and free as below [1] to see if it's lock issue, with
the same test with 100 multi-thread, then I got:

[1] would not be sufficient to close this race.

Try following things [A]. And, for more accurate test, I changed code a bit more
to prevent kernel page allocation from cma area [B]. This will prevent kernel
page allocation from cma area completely so we can focus cma_alloc/release race.

Although, this is not correct fix, it could help that we can guess
where the problem is.

More correct fix is something like below.
Please test it.

Hmm, this is not working:

Sad to hear that.

Could you tell me your system's MAX_ORDER and pageblock_order?



MAX_ORDER is 11, pageblock_order is 9, thanks for your help!


Hmm... that's same with me.

Below is similar fix that prevents buddy merging when one of buddy's
migrate type, but, not both, is MIGRATE_ISOLATE. In fact, I have
no idea why previous fix (more correct fix) doesn't work for you.
(It works for me.) But, maybe there is a bug on the fix
so I make new one which is more general form. Please test it.


Hi,
Hanjun Guo has gone to Tailand on business, so I help him to run this 
patch. The result
shows that the count of "CmaFree:" is OK now. But sometimes printed some 
information as below:

alloc_contig_range: [28500, 28600) PFNs busy
alloc_contig_range: [28300, 28380) PFNs busy



Those messages aren't necessarily a problem. They indicate that
those pages weren't able to be isolated. Given that the test here is a
concurrency test, I suspect some concurrent allocation or free prevented
isolation, which is to be expected sometimes. I'd only be concerned if
those messages caused allocation failures or some other notable impact.

Thanks,
Laura
 


Thanks.

-->8-
>From dd41e348572948d70b935fc24f82c096ff0fb417 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim 
Date: Fri, 4 Mar 2016 13:28:17 +0900
Subject: [PATCH] mm/cma: fix race

Signed-off-by: Joonsoo Kim 
---
  mm/page_alloc.c | 33 +++--
  1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c6c38ed..d80d071 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -620,8 +620,8 @@ static inline void rmv_page_order(struct page *page)
   *
   * For recording page's order, we use page_private(page).
   */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-   unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+   struct page *buddy, unsigned int order)
  {
 if (!pfn_valid_within(page_to_pfn(buddy)))
 return 0;
@@ -644,6 +644,20 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
 if (page_zone_id(page) != page_zone_id(buddy))
 return 0;

+   if (IS_ENABLED(CONFIG_CMA) &&
+   

Re: Suspicious error for CMA stress test

2016-03-07 Thread Vlastimil Babka

On 03/07/2016 05:34 AM, Joonsoo Kim wrote:

On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:

Sad to hear that.

Could you tell me your system's MAX_ORDER and pageblock_order?



MAX_ORDER is 11, pageblock_order is 9, thanks for your help!


I thought that CMA regions/operations (and isolation IIRC?) were 
supposed to be MAX_ORDER aligned exactly to prevent needing these extra 
checks for buddy merging. So what's wrong?



Hmm... that's the same as mine.

Below is a similar fix that prevents buddy merging when one of the buddies'
migratetypes, but not both, is MIGRATE_ISOLATE. In fact, I have
no idea why the previous fix (the more correct fix) doesn't work for you.
(It works for me.) But maybe there is a bug in that fix,
so I made a new one in a more general form. Please test it.

Thanks.

-->8-
 From dd41e348572948d70b935fc24f82c096ff0fb417 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim 
Date: Fri, 4 Mar 2016 13:28:17 +0900
Subject: [PATCH] mm/cma: fix race

Signed-off-by: Joonsoo Kim 
---
  mm/page_alloc.c | 33 +++--
  1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c6c38ed..d80d071 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -620,8 +620,8 @@ static inline void rmv_page_order(struct page *page)
   *
   * For recording page's order, we use page_private(page).
   */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-   unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+   struct page *buddy, unsigned int order)
  {
 if (!pfn_valid_within(page_to_pfn(buddy)))
 return 0;
@@ -644,6 +644,20 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
 if (page_zone_id(page) != page_zone_id(buddy))
 return 0;

+   if (IS_ENABLED(CONFIG_CMA) &&
+   unlikely(has_isolate_pageblock(zone)) &&
+   unlikely(order >= pageblock_order)) {
+   int page_mt, buddy_mt;
+
+   page_mt = get_pageblock_migratetype(page);
+   buddy_mt = get_pageblock_migratetype(buddy);
+
+   if (page_mt != buddy_mt &&
+   (is_migrate_isolate(page_mt) ||
+   is_migrate_isolate(buddy_mt)))
+   return 0;
+   }
+
 VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);

 return 1;
@@ -691,17 +705,8 @@ static inline void __free_one_page(struct page *page,
 VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);

 VM_BUG_ON(migratetype == -1);
-   if (is_migrate_isolate(migratetype)) {
-   /*
-* We restrict max order of merging to prevent merge
-* between freepages on isolate pageblock and normal
-* pageblock. Without this, pageblock isolation
-* could cause incorrect freepage accounting.
-*/
-   max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
-   } else {
+   if (!is_migrate_isolate(migratetype))
 __mod_zone_freepage_state(zone, 1 << order, migratetype);
-   }

 page_idx = pfn & ((1 << max_order) - 1);

@@ -711,7 +716,7 @@ static inline void __free_one_page(struct page *page,
 while (order < max_order - 1) {
 buddy_idx = __find_buddy_index(page_idx, order);
 buddy = page + (buddy_idx - page_idx);
-   if (!page_is_buddy(page, buddy, order))
+   if (!page_is_buddy(zone, page, buddy, order))
 break;
 /*
  * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
@@ -745,7 +750,7 @@ static inline void __free_one_page(struct page *page,
 higher_page = page + (combined_idx - page_idx);
 buddy_idx = __find_buddy_index(combined_idx, order + 1);
 higher_buddy = higher_page + (buddy_idx - combined_idx);
-   if (page_is_buddy(higher_page, higher_buddy, order + 1)) {
+   if (page_is_buddy(zone, higher_page, higher_buddy, order + 1)) {
 		list_add_tail(&page->lru,
 			&zone->free_area[order].free_list[migratetype]);
 goto out;





Re: Suspicious error for CMA stress test

2016-03-07 Thread Leizhen (ThunderTown)


On 2016/3/7 12:34, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
>> On 2016/3/4 14:38, Joonsoo Kim wrote:
>>> On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
 On 2016/3/4 12:32, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> (cc -mm and Joonsoo Kim)
>
>
> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>> Hi,
>>
>> I came across a suspicious error for CMA stress test:
>>
>> Before the test, I got:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree:  195044 kB
>>
>>
>> After running the test:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree: 6602584 kB
>>
>> So the freed CMA memory is more than total..
>>
>> Also the MemFree is more than the mem total:
>>
>> -bash-4.3# cat /proc/meminfo
>> MemTotal:   16342016 kB
>> MemFree:22367268 kB
>> MemAvailable:   22370528 kB
>>> [...]
> I played with this a bit and can see the same problem. The sanity
> check of CmaFree < CmaTotal generally triggers in
> __move_zone_freepage_state in unset_migratetype_isolate.
> This also seems to be present as far back as v4.0 which was the
> first version to have the updated accounting from Joonsoo.
> Were there known limitations with the new freepage accounting,
> Joonsoo?
 I don't know. I also played with this and looks like there is
 accounting problem, however, for my case, number of free page is 
 slightly less
 than total. I will take a look.

 Hanjun, could you tell me your malloc_size? I tested with 1 and it 
 doesn't
 look like your case.
>>> I tested with malloc_size with 2M, and it grows much bigger than 1M, 
>>> also I
>>> did some other test:
>> Thanks! Now I can reproduce the erroneous situation you mentioned.
>>
>>>  - run with a single thread 10 times: everything is fine.
>>>
>>>  - I hacked cma_alloc() and free as below [1] to see if it's a locking
>>> issue, with the same test with 100 threads, I got:
>> [1] would not be sufficient to close this race.
>>
>> Try following things [A]. And, for more accurate test, I changed code a 
>> bit more
>> to prevent kernel page allocation from cma area [B]. This will prevent 
>> kernel
>> page allocation from cma area completely so we can focus 
>> cma_alloc/release race.
>>
>> Although, this is not correct fix, it could help that we can guess
>> where the problem is.
> More correct fix is something like below.
> Please test it.
 Hmm, this is not working:
>>> Sad to hear that.
>>>
>>> Could you tell me your system's MAX_ORDER and pageblock_order?
>>>
>>
>> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!
> 
> Hmm... that's same with me.
> 
> Below is similar fix that prevents buddy merging when one of buddy's
> migrate type, but, not both, is MIGRATE_ISOLATE. In fact, I have
> no idea why previous fix (more correct fix) doesn't work for you.
> (It works for me.) But, maybe there is a bug on the fix
> so I make new one which is more general form. Please test it.

Hi,
Hanjun Guo has gone to Thailand on business, so I am helping him run this
patch. The result shows that the "CmaFree:" count is OK now, but it sometimes
printed messages like the ones below:

alloc_contig_range: [28500, 28600) PFNs busy
alloc_contig_range: [28300, 28380) PFNs busy

> 
> Thanks.
> 
> -->8-
>>From dd41e348572948d70b935fc24f82c096ff0fb417 Mon Sep 17 00:00:00 2001
> From: Joonsoo Kim 
> Date: Fri, 4 Mar 2016 13:28:17 +0900
> Subject: [PATCH] mm/cma: fix race
> 
> Signed-off-by: Joonsoo Kim 
> ---
>  mm/page_alloc.c | 33 +++--
>  1 file changed, 19 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c6c38ed..d80d071 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -620,8 +620,8 @@ static inline void rmv_page_order(struct page *page)
>   *
>   * For recording page's order, we use page_private(page).
>   */
> -static inline int page_is_buddy(struct page *page, struct page *buddy,
> -   unsigned int order)
> +static inline int page_is_buddy(struct zone *zone, struct page *page,
> +   struct page *buddy, unsigned int order)
>  {
> if 

Re: Suspicious error for CMA stress test

2016-03-06 Thread Joonsoo Kim
On Fri, Mar 04, 2016 at 02:59:39PM +0800, Hanjun Guo wrote:
> On 2016/3/4 10:02, Joonsoo Kim wrote:
> > On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> >> On 2016/3/3 15:42, Joonsoo Kim wrote:
> >>> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
>  (cc -mm and Joonsoo Kim)
> 
> 
>  On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> > Hi,
> >
> > I came across a suspicious error for CMA stress test:
> >
> > Before the test, I got:
> > -bash-4.3# cat /proc/meminfo | grep Cma
> > CmaTotal: 204800 kB
> > CmaFree:  195044 kB
> >
> >
> > After running the test:
> > -bash-4.3# cat /proc/meminfo | grep Cma
> > CmaTotal: 204800 kB
> > CmaFree: 6602584 kB
> >
> > So the freed CMA memory is more than total..
> >
> > Also the MemFree is more than the mem total:
> >
> > -bash-4.3# cat /proc/meminfo
> > MemTotal:   16342016 kB
> > MemFree:22367268 kB
> > MemAvailable:   22370528 kB
> >> [...]
>  I played with this a bit and can see the same problem. The sanity
>  check of CmaFree < CmaTotal generally triggers in
>  __move_zone_freepage_state in unset_migratetype_isolate.
>  This also seems to be present as far back as v4.0 which was the
>  first version to have the updated accounting from Joonsoo.
>  Were there known limitations with the new freepage accounting,
>  Joonsoo?
> >>> I don't know. I also played with this and looks like there is
> >>> accounting problem, however, for my case, number of free page is slightly 
> >>> less
> >>> than total. I will take a look.
> >>>
> >>> Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
> >>> look like your case.
> >> I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
> >> did some other test:
> > Thanks! Now I can reproduce the erroneous situation you mentioned.
> >
> >>  - run with a single thread 10 times: everything is fine.
> >>
> >>  - I hacked cma_alloc() and free as below [1] to see if it's a locking
> >> issue, with the same test with 100 threads, I got:
> > [1] would not be sufficient to close this race.
> >
> > Try following things [A]. And, for more accurate test, I changed code a bit 
> > more
> > to prevent kernel page allocation from cma area [B]. This will prevent 
> > kernel
> > page allocation from cma area completely so we can focus cma_alloc/release 
> > race.
> >
> > Although, this is not correct fix, it could help that we can guess
> > where the problem is.
> >
> > Thanks.
> >
> > [A]
> > diff --git a/mm/cma.c b/mm/cma.c
> > index c003274..43ed02d 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -496,7 +496,9 @@ bool cma_release(struct cma *cma, const struct page 
> > *pages, unsigned int count)
> >  
> > VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
> >  
> > +   mutex_lock(&cma_mutex);
> > free_contig_range(pfn, count);
> > +   mutex_unlock(&cma_mutex);
> > cma_clear_bitmap(cma, pfn, count);
> > trace_cma_release(pfn, pages, count);
> >  
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c6c38ed..1ce8a59 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2192,7 +2192,8 @@ void free_hot_cold_page(struct page *page, bool cold)
> >  * excessively into the page allocator
> >  */
> > if (migratetype >= MIGRATE_PCPTYPES) {
> > -   if (unlikely(is_migrate_isolate(migratetype))) {
> > +   if (is_migrate_cma(migratetype) ||
> > +   unlikely(is_migrate_isolate(migratetype))) {
> > free_one_page(zone, page, pfn, 0, migratetype);
> > goto out;
> > }
> 
> As I replied in a previous email, this solution fixes the problem; the CMA
> freed memory and system freed memory are in a sane state after applying the
> above patch.
> 
> I also tested the situation where only the code below is applied:
> 
> if (migratetype >= MIGRATE_PCPTYPES) {
> -   if (unlikely(is_migrate_isolate(migratetype))) {
> +   if (is_migrate_cma(migratetype) ||
> +   unlikely(is_migrate_isolate(migratetype))) {
> free_one_page(zone, page, pfn, 0, migratetype);
> goto out;
> }
> 
> 
> This will not fix the problem, but it will reduce the erroneous amount of
> freed memory; hope this helps.
> 
> >
> >
> > [B]
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index f2dccf9..c6c38ed 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1493,6 +1493,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
> > int alloc_flags)
> >  {
> > int i;
> > +   bool cma = false;
> >  
> > for (i = 0; i < (1 << 

Re: Suspicious error for CMA stress test

2016-03-06 Thread Joonsoo Kim
On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
> On 2016/3/4 14:38, Joonsoo Kim wrote:
> > On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
> >> On 2016/3/4 12:32, Joonsoo Kim wrote:
> >>> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>  On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> > On 2016/3/3 15:42, Joonsoo Kim wrote:
> >> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> >>> (cc -mm and Joonsoo Kim)
> >>>
> >>>
> >>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>  Hi,
> 
>  I came across a suspicious error for CMA stress test:
> 
>  Before the test, I got:
>  -bash-4.3# cat /proc/meminfo | grep Cma
>  CmaTotal: 204800 kB
>  CmaFree:  195044 kB
> 
> 
>  After running the test:
>  -bash-4.3# cat /proc/meminfo | grep Cma
>  CmaTotal: 204800 kB
>  CmaFree: 6602584 kB
> 
>  So the freed CMA memory is more than total..
> 
 Also the MemFree is more than the mem total:
> 
>  -bash-4.3# cat /proc/meminfo
>  MemTotal:   16342016 kB
>  MemFree:22367268 kB
>  MemAvailable:   22370528 kB
> > [...]
> >>> I played with this a bit and can see the same problem. The sanity
> >>> check of CmaFree < CmaTotal generally triggers in
> >>> __move_zone_freepage_state in unset_migratetype_isolate.
> >>> This also seems to be present as far back as v4.0 which was the
> >>> first version to have the updated accounting from Joonsoo.
> >>> Were there known limitations with the new freepage accounting,
> >>> Joonsoo?
> >> I don't know. I also played with this and looks like there is
> >> accounting problem, however, for my case, number of free page is 
> >> slightly less
> >> than total. I will take a look.
> >>
> >> Hanjun, could you tell me your malloc_size? I tested with 1 and it 
> >> doesn't
> >> look like your case.
> > I tested with malloc_size with 2M, and it grows much bigger than 1M, 
> > also I
> > did some other test:
 Thanks! Now I can reproduce the erroneous situation you mentioned.
> 
>  - run with a single thread 10 times: everything is fine.
>
>  - I hacked cma_alloc() and free as below [1] to see if it's a locking
> issue, with the same test with 100 threads, I got:
>  [1] would not be sufficient to close this race.
> 
>  Try following things [A]. And, for more accurate test, I changed code a 
>  bit more
>  to prevent kernel page allocation from cma area [B]. This will prevent 
>  kernel
>  page allocation from cma area completely so we can focus 
>  cma_alloc/release race.
> 
>  Although, this is not correct fix, it could help that we can guess
>  where the problem is.
> >>> More correct fix is something like below.
> >>> Please test it.
> >> Hmm, this is not working:
> > Sad to hear that.
> >
> > Could you tell me your system's MAX_ORDER and pageblock_order?
> >
> 
> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!

Hmm... that's the same as mine.

Below is a similar fix that prevents buddy merging when one of the buddies'
migratetypes, but not both, is MIGRATE_ISOLATE. In fact, I have
no idea why the previous fix (the more correct fix) doesn't work for you.
(It works for me.) But maybe there is a bug in that fix,
so I made a new one in a more general form. Please test it.

Thanks.

-->8-
>From dd41e348572948d70b935fc24f82c096ff0fb417 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim 
Date: Fri, 4 Mar 2016 13:28:17 +0900
Subject: [PATCH] mm/cma: fix race

Signed-off-by: Joonsoo Kim 
---
 mm/page_alloc.c | 33 +++--
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c6c38ed..d80d071 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -620,8 +620,8 @@ static inline void rmv_page_order(struct page *page)
  *
  * For recording page's order, we use page_private(page).
  */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-   unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+   struct page *buddy, unsigned int order)
 {
if (!pfn_valid_within(page_to_pfn(buddy)))
return 0;
@@ -644,6 +644,20 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
if (page_zone_id(page) != page_zone_id(buddy))
return 0;
 
+   if (IS_ENABLED(CONFIG_CMA) &&
+   unlikely(has_isolate_pageblock(zone)) &&
+   unlikely(order >= 

Re: Suspicious error for CMA stress test

2016-03-06 Thread Joonsoo Kim
On Fri, Mar 04, 2016 at 03:35:26PM +0800, Hanjun Guo wrote:
> On 2016/3/4 14:38, Joonsoo Kim wrote:
> > On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
> >> On 2016/3/4 12:32, Joonsoo Kim wrote:
> >>> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>  On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> > On 2016/3/3 15:42, Joonsoo Kim wrote:
> >> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> >>> (cc -mm and Joonsoo Kim)
> >>>
> >>>
> >>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>  Hi,
> 
>  I came across a suspicious error for CMA stress test:
> 
>  Before the test, I got:
>  -bash-4.3# cat /proc/meminfo | grep Cma
>  CmaTotal: 204800 kB
>  CmaFree:  195044 kB
> 
> 
>  After running the test:
>  -bash-4.3# cat /proc/meminfo | grep Cma
>  CmaTotal: 204800 kB
>  CmaFree: 6602584 kB
> 
>  So the freed CMA memory is more than total..
> 
>  Also the the MemFree is more than mem total:
> 
>  -bash-4.3# cat /proc/meminfo
>  MemTotal:   16342016 kB
>  MemFree:22367268 kB
>  MemAvailable:   22370528 kB
> > [...]
> >>> I played with this a bit and can see the same problem. The sanity
> >>> check of CmaFree < CmaTotal generally triggers in
> >>> __move_zone_freepage_state in unset_migratetype_isolate.
> >>> This also seems to be present as far back as v4.0 which was the
> >>> first version to have the updated accounting from Joonsoo.
> >>> Were there known limitations with the new freepage accounting,
> >>> Joonsoo?
> >> I don't know. I also played with this and looks like there is
> >> accounting problem, however, for my case, number of free page is 
> >> slightly less
> >> than total. I will take a look.
> >>
> >> Hanjun, could you tell me your malloc_size? I tested with 1 and it 
> >> doesn't
> >> look like your case.
> > I tested with malloc_size with 2M, and it grows much bigger than 1M, 
> > also I
> > did some other test:
>  Thanks! Now, I can re-generate erronous situation you mentioned.
> 
> >  - run with single thread with 10 times, everything is fine.
> >
> >  - I hack the cam_alloc() and free as below [1] to see if it's lock 
> > issue, with
> >the same test with 100 multi-thread, then I got:
>  [1] would not be sufficient to close this race.
> 
>  Try following things [A]. And, for more accurate test, I changed code a 
>  bit more
>  to prevent kernel page allocation from cma area [B]. This will prevent 
>  kernel
>  page allocation from cma area completely so we can focus 
>  cma_alloc/release race.
> 
>  Although, this is not correct fix, it could help that we can guess
>  where the problem is.
> >>> More correct fix is something like below.
> >>> Please test it.
> >> Hmm, this is not working:
> > Sad to hear that.
> >
> > Could you tell me your system's MAX_ORDER and pageblock_order?
> >
> 
> MAX_ORDER is 11, pageblock_order is 9, thanks for your help!

Hmm... that's the same as mine.

Below is a similar fix that prevents buddy merging when one of the two
buddies' migratetypes, but not both, is MIGRATE_ISOLATE. In fact, I have
no idea why the previous fix (the more correct one) doesn't work for you.
(It works for me.) But maybe there is a bug in that fix, so I made a new
one in a more general form. Please test it.

Thanks.

-->8-
>From dd41e348572948d70b935fc24f82c096ff0fb417 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim 
Date: Fri, 4 Mar 2016 13:28:17 +0900
Subject: [PATCH] mm/cma: fix race

Signed-off-by: Joonsoo Kim 
---
 mm/page_alloc.c | 33 +++--
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c6c38ed..d80d071 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -620,8 +620,8 @@ static inline void rmv_page_order(struct page *page)
  *
  * For recording page's order, we use page_private(page).
  */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-   unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+   struct page *buddy, unsigned int order)
 {
if (!pfn_valid_within(page_to_pfn(buddy)))
return 0;
@@ -644,6 +644,20 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
if (page_zone_id(page) != page_zone_id(buddy))
return 0;
 
+   if (IS_ENABLED(CONFIG_CMA) &&
+   unlikely(has_isolate_pageblock(zone)) &&
+   unlikely(order >= pageblock_order)) {
+   int page_mt, buddy_mt;
+
+  

Re: Suspicious error for CMA stress test

2016-03-03 Thread Hanjun Guo
On 2016/3/4 14:38, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
>> On 2016/3/4 12:32, Joonsoo Kim wrote:
>>> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
 On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> On 2016/3/3 15:42, Joonsoo Kim wrote:
>> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
>>> (cc -mm and Joonsoo Kim)
>>>
>>>
>>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
 Hi,

 I came across a suspicious error for CMA stress test:

 Before the test, I got:
 -bash-4.3# cat /proc/meminfo | grep Cma
 CmaTotal: 204800 kB
 CmaFree:  195044 kB


 After running the test:
 -bash-4.3# cat /proc/meminfo | grep Cma
 CmaTotal: 204800 kB
 CmaFree: 6602584 kB

 So the freed CMA memory is more than total..

 Also the the MemFree is more than mem total:

 -bash-4.3# cat /proc/meminfo
 MemTotal:   16342016 kB
 MemFree:22367268 kB
 MemAvailable:   22370528 kB
> [...]
>>> I played with this a bit and can see the same problem. The sanity
>>> check of CmaFree < CmaTotal generally triggers in
>>> __move_zone_freepage_state in unset_migratetype_isolate.
>>> This also seems to be present as far back as v4.0 which was the
>>> first version to have the updated accounting from Joonsoo.
>>> Were there known limitations with the new freepage accounting,
>>> Joonsoo?
>> I don't know. I also played with this and looks like there is
>> accounting problem, however, for my case, number of free page is 
>> slightly less
>> than total. I will take a look.
>>
>> Hanjun, could you tell me your malloc_size? I tested with 1 and it 
>> doesn't
>> look like your case.
> I tested with malloc_size with 2M, and it grows much bigger than 1M, also 
> I
> did some other test:
 Thanks! Now, I can re-generate erronous situation you mentioned.

>  - run with single thread with 10 times, everything is fine.
>
>  - I hack the cam_alloc() and free as below [1] to see if it's lock 
> issue, with
>the same test with 100 multi-thread, then I got:
 [1] would not be sufficient to close this race.

 Try following things [A]. And, for more accurate test, I changed code a 
 bit more
 to prevent kernel page allocation from cma area [B]. This will prevent 
 kernel
 page allocation from cma area completely so we can focus cma_alloc/release 
 race.

 Although, this is not correct fix, it could help that we can guess
 where the problem is.
>>> More correct fix is something like below.
>>> Please test it.
>> Hmm, this is not working:
> Sad to hear that.
>
> Could you tell me your system's MAX_ORDER and pageblock_order?
>

MAX_ORDER is 11, pageblock_order is 9, thanks for your help!

Hanjun




Re: Suspicious error for CMA stress test

2016-03-03 Thread Hanjun Guo
On 2016/3/4 10:02, Joonsoo Kim wrote:
> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>> On 2016/3/3 15:42, Joonsoo Kim wrote:
>>> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
 (cc -mm and Joonsoo Kim)


 On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> Hi,
>
> I came across a suspicious error for CMA stress test:
>
> Before the test, I got:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree:  195044 kB
>
>
> After running the test:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree: 6602584 kB
>
> So the freed CMA memory is more than total..
>
> Also the the MemFree is more than mem total:
>
> -bash-4.3# cat /proc/meminfo
> MemTotal:   16342016 kB
> MemFree:22367268 kB
> MemAvailable:   22370528 kB
>> [...]
 I played with this a bit and can see the same problem. The sanity
 check of CmaFree < CmaTotal generally triggers in
 __move_zone_freepage_state in unset_migratetype_isolate.
 This also seems to be present as far back as v4.0 which was the
 first version to have the updated accounting from Joonsoo.
 Were there known limitations with the new freepage accounting,
 Joonsoo?
>>> I don't know. I also played with this and looks like there is
>>> accounting problem, however, for my case, number of free page is slightly 
>>> less
>>> than total. I will take a look.
>>>
>>> Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
>>> look like your case.
>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
>> did some other test:
> Thanks! Now, I can re-generate erronous situation you mentioned.
>
>>  - run with single thread with 10 times, everything is fine.
>>
>>  - I hack the cam_alloc() and free as below [1] to see if it's lock issue, 
>> with
>>the same test with 100 multi-thread, then I got:
> [1] would not be sufficient to close this race.
>
> Try following things [A]. And, for more accurate test, I changed code a bit 
> more
> to prevent kernel page allocation from cma area [B]. This will prevent kernel
> page allocation from cma area completely so we can focus cma_alloc/release 
> race.
>
> Although, this is not correct fix, it could help that we can guess
> where the problem is.
>
> Thanks.
>
> [A]
> diff --git a/mm/cma.c b/mm/cma.c
> index c003274..43ed02d 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -496,7 +496,9 @@ bool cma_release(struct cma *cma, const struct page 
> *pages, unsigned int count)
>  
> VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
>  
> +   mutex_lock(&cma_mutex);
> free_contig_range(pfn, count);
> +   mutex_unlock(&cma_mutex);
> cma_clear_bitmap(cma, pfn, count);
> trace_cma_release(pfn, pages, count);
>  
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c6c38ed..1ce8a59 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2192,7 +2192,8 @@ void free_hot_cold_page(struct page *page, bool cold)
>  * excessively into the page allocator
>  */
> if (migratetype >= MIGRATE_PCPTYPES) {
> -   if (unlikely(is_migrate_isolate(migratetype))) {
> +   if (is_migrate_cma(migratetype) ||
> +   unlikely(is_migrate_isolate(migratetype))) {
> free_one_page(zone, page, pfn, 0, migratetype);
> goto out;
> }

As I replied in the previous email, that solution fixes the problem: the CMA
freed memory and the system freed memory are in a sane state after applying
the above patch.

I also tested applying only the code below:

if (migratetype >= MIGRATE_PCPTYPES) {
-   if (unlikely(is_migrate_isolate(migratetype))) {
+   if (is_migrate_cma(migratetype) ||
+   unlikely(is_migrate_isolate(migratetype))) {
free_one_page(zone, page, pfn, 0, migratetype);
goto out;
}


This does not fix the problem, but it reduces the erroneous amount of freed
memory; hope this helps.

>
>
> [B]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f2dccf9..c6c38ed 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1493,6 +1493,7 @@ static int prep_new_page(struct page *page, unsigned 
> int order, gfp_t gfp_flags,
> int 
> alloc_flags)
>  {
> int i;
> +   bool cma = false;
>  
> for (i = 0; i < (1 << order); i++) {
> struct page *p = page + i;
> @@ -1500,6 +1501,9 @@ static int prep_new_page(struct page *page, unsigned 
> int order, gfp_t gfp_flags,
> return 1;
> }
>  
> +   if (is_migrate_cma(get_pcppage_migratetype(page)))
> +   cma = true;
> +
> 

Re: Suspicious error for CMA stress test

2016-03-03 Thread Joonsoo Kim
On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
> On 2016/3/4 12:32, Joonsoo Kim wrote:
> > On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
> >> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> >>> On 2016/3/3 15:42, Joonsoo Kim wrote:
>  2016-03-03 10:25 GMT+09:00 Laura Abbott :
> > (cc -mm and Joonsoo Kim)
> >
> >
> > On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> >> Hi,
> >>
> >> I came across a suspicious error for CMA stress test:
> >>
> >> Before the test, I got:
> >> -bash-4.3# cat /proc/meminfo | grep Cma
> >> CmaTotal: 204800 kB
> >> CmaFree:  195044 kB
> >>
> >>
> >> After running the test:
> >> -bash-4.3# cat /proc/meminfo | grep Cma
> >> CmaTotal: 204800 kB
> >> CmaFree: 6602584 kB
> >>
> >> So the freed CMA memory is more than total..
> >>
> >> Also the the MemFree is more than mem total:
> >>
> >> -bash-4.3# cat /proc/meminfo
> >> MemTotal:   16342016 kB
> >> MemFree:22367268 kB
> >> MemAvailable:   22370528 kB
> >>> [...]
> > I played with this a bit and can see the same problem. The sanity
> > check of CmaFree < CmaTotal generally triggers in
> > __move_zone_freepage_state in unset_migratetype_isolate.
> > This also seems to be present as far back as v4.0 which was the
> > first version to have the updated accounting from Joonsoo.
> > Were there known limitations with the new freepage accounting,
> > Joonsoo?
>  I don't know. I also played with this and looks like there is
>  accounting problem, however, for my case, number of free page is 
>  slightly less
>  than total. I will take a look.
> 
>  Hanjun, could you tell me your malloc_size? I tested with 1 and it 
>  doesn't
>  look like your case.
> >>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also 
> >>> I
> >>> did some other test:
> >> Thanks! Now, I can re-generate erronous situation you mentioned.
> >>
> >>>  - run with single thread with 10 times, everything is fine.
> >>>
> >>>  - I hack the cam_alloc() and free as below [1] to see if it's lock 
> >>> issue, with
> >>>the same test with 100 multi-thread, then I got:
> >> [1] would not be sufficient to close this race.
> >>
> >> Try following things [A]. And, for more accurate test, I changed code a 
> >> bit more
> >> to prevent kernel page allocation from cma area [B]. This will prevent 
> >> kernel
> >> page allocation from cma area completely so we can focus cma_alloc/release 
> >> race.
> >>
> >> Although, this is not correct fix, it could help that we can guess
> >> where the problem is.
> > More correct fix is something like below.
> > Please test it.
> 
> Hmm, this is not working:

Sad to hear that.

Could you tell me your system's MAX_ORDER and pageblock_order?

Thanks.



Re: Suspicious error for CMA stress test

2016-03-03 Thread Hanjun Guo
On 2016/3/4 10:09, Joonsoo Kim wrote:
> On Thu, Mar 03, 2016 at 10:52:17AM -0800, Laura Abbott wrote:
>> On 03/03/2016 04:49 AM, Hanjun Guo wrote:
>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> (cc -mm and Joonsoo Kim)
>
>
> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>> Hi,
>>
>> I came across a suspicious error for CMA stress test:
>>
>> Before the test, I got:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree:  195044 kB
>>
>>
>> After running the test:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree: 6602584 kB
>>
>> So the freed CMA memory is more than total..
>>
>> Also the the MemFree is more than mem total:
>>
>> -bash-4.3# cat /proc/meminfo
>> MemTotal:   16342016 kB
>> MemFree:22367268 kB
>> MemAvailable:   22370528 kB
>>> [...]
> I played with this a bit and can see the same problem. The sanity
> check of CmaFree < CmaTotal generally triggers in
> __move_zone_freepage_state in unset_migratetype_isolate.
> This also seems to be present as far back as v4.0 which was the
> first version to have the updated accounting from Joonsoo.
> Were there known limitations with the new freepage accounting,
> Joonsoo?
 I don't know. I also played with this and looks like there is
 accounting problem, however, for my case, number of free page is slightly 
 less
 than total. I will take a look.

 Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
 look like your case.
>>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
>>> did some other test:
>>>
>>>  - run with single thread with 10 times, everything is fine.
>>>
>>>  - I hack the cam_alloc() and free as below [1] to see if it's lock issue, 
>>> with
>>>the same test with 100 multi-thread, then I got:
>>>
>>> -bash-4.3# cat /proc/meminfo | grep Cma
>>> CmaTotal: 204800 kB
>>> CmaFree: 225112 kB
>>>
>>> It only increased about 30M for free, not 6G+ in previous test, although
>>> the problem is not solved, the problem is less serious, is it a 
>>> synchronization
>>> problem?
>>>
>> 'only' 30M is still an issue although I think you are right about something 
>> related
>> to synchronization. When I put the cma_mutex around free_contig_range I 
>> don't see
> Hmm... I can see the issue even if putting the cma_mutex around
> free_contig_range().

Yes, I can confirm that too: it reduces the amount of erroneously freed
memory, but the problem is still there.

Thanks
Hanjun




Re: Suspicious error for CMA stress test

2016-03-03 Thread Hanjun Guo
On 2016/3/4 12:32, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> (cc -mm and Joonsoo Kim)
>
>
> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>> Hi,
>>
>> I came across a suspicious error for CMA stress test:
>>
>> Before the test, I got:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree:  195044 kB
>>
>>
>> After running the test:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree: 6602584 kB
>>
>> So the freed CMA memory is more than total..
>>
>> Also the the MemFree is more than mem total:
>>
>> -bash-4.3# cat /proc/meminfo
>> MemTotal:   16342016 kB
>> MemFree:22367268 kB
>> MemAvailable:   22370528 kB
>>> [...]
> I played with this a bit and can see the same problem. The sanity
> check of CmaFree < CmaTotal generally triggers in
> __move_zone_freepage_state in unset_migratetype_isolate.
> This also seems to be present as far back as v4.0 which was the
> first version to have the updated accounting from Joonsoo.
> Were there known limitations with the new freepage accounting,
> Joonsoo?
 I don't know. I also played with this and looks like there is
 accounting problem, however, for my case, number of free page is slightly 
 less
 than total. I will take a look.

 Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
 look like your case.
>>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
>>> did some other test:
>> Thanks! Now, I can re-generate erronous situation you mentioned.
>>
>>>  - run with single thread with 10 times, everything is fine.
>>>
>>>  - I hack the cam_alloc() and free as below [1] to see if it's lock issue, 
>>> with
>>>the same test with 100 multi-thread, then I got:
>> [1] would not be sufficient to close this race.
>>
>> Try following things [A]. And, for more accurate test, I changed code a bit 
>> more
>> to prevent kernel page allocation from cma area [B]. This will prevent kernel
>> page allocation from cma area completely so we can focus cma_alloc/release 
>> race.
>>
>> Although, this is not correct fix, it could help that we can guess
>> where the problem is.
> More correct fix is something like below.
> Please test it.

Hmm, this is not working:

-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal:         204800 kB
CmaFree:        19388216 kB

-bash-4.3# cat /proc/meminfo
MemTotal:       16342016 kB
MemFree:        35146212 kB
MemAvailable:   35158008 kB
Buffers:            4236 kB
Cached:            45032 kB
SwapCached:            0 kB
Active:            19276 kB
Inactive:          36492 kB
Active(anon):       6724 kB
Inactive(anon):       52 kB
Active(file):      12552 kB
Inactive(file):    36440 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB

Re: Suspicious error for CMA stress test

2016-03-03 Thread Hanjun Guo
On 2016/3/4 12:32, Joonsoo Kim wrote:
> On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
>> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>>> On 2016/3/3 15:42, Joonsoo Kim wrote:
 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> (cc -mm and Joonsoo Kim)
>
>
> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>> Hi,
>>
>> I came across a suspicious error for CMA stress test:
>>
>> Before the test, I got:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree:  195044 kB
>>
>>
>> After running the test:
>> -bash-4.3# cat /proc/meminfo | grep Cma
>> CmaTotal: 204800 kB
>> CmaFree: 6602584 kB
>>
>> So the freed CMA memory is more than total..
>>
>> Also the the MemFree is more than mem total:
>>
>> -bash-4.3# cat /proc/meminfo
>> MemTotal:   16342016 kB
>> MemFree:22367268 kB
>> MemAvailable:   22370528 kB
>>> [...]
> I played with this a bit and can see the same problem. The sanity
> check of CmaFree < CmaTotal generally triggers in
> __move_zone_freepage_state in unset_migratetype_isolate.
> This also seems to be present as far back as v4.0 which was the
> first version to have the updated accounting from Joonsoo.
> Were there known limitations with the new freepage accounting,
> Joonsoo?
 I don't know. I also played with this and it looks like there is an
 accounting problem; however, for my case, the number of free pages is
 slightly less than total. I will take a look.

 Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
 look like your case.
>>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
>>> did some other test:
>> Thanks! Now, I can reproduce the erroneous situation you mentioned.
>>
>>>  - run with a single thread 10 times, everything is fine.
>>>
>>>  - I hacked cma_alloc() and free as below [1] to see if it's a lock issue;
>>>    with the same test with 100 threads, I got:
>> [1] would not be sufficient to close this race.
>>
>> Try the following things [A]. And, for a more accurate test, I changed the
>> code a bit more to prevent kernel page allocation from the CMA area [B].
>> This prevents kernel page allocation from the CMA area completely so we can
>> focus on the cma_alloc/release race.
>>
>> Although this is not the correct fix, it could help us guess
>> where the problem is.
> A more correct fix is something like the one below.
> Please test it.

Hmm, this is not working:

-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal:         204800 kB
CmaFree:        19388216 kB

-bash-4.3# cat /proc/meminfo
MemTotal:       16342016 kB
MemFree:        35146212 kB
MemAvailable:   35158008 kB
Buffers:            4236 kB
Cached:            45032 kB
SwapCached:            0 kB
Active:            19276 kB
Inactive:          36492 kB
Active(anon):       6724 kB
Inactive(anon):       52 kB
Active(file):      12552 kB
Inactive(file):    36440 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB

Re: Suspicious error for CMA stress test

2016-03-03 Thread Hanjun Guo
Hi Joonsoo,

On 2016/3/4 10:02, Joonsoo Kim wrote:
> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>> On 2016/3/3 15:42, Joonsoo Kim wrote:
>>> 2016-03-03 10:25 GMT+09:00 Laura Abbott :
 (cc -mm and Joonsoo Kim)


 On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> Hi,
>
> I came across a suspicious error for CMA stress test:
>
> Before the test, I got:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree:  195044 kB
>
>
> After running the test:
> -bash-4.3# cat /proc/meminfo | grep Cma
> CmaTotal: 204800 kB
> CmaFree: 6602584 kB
>
> So the freed CMA memory is more than total..
>
> Also the MemFree is more than MemTotal:
>
> -bash-4.3# cat /proc/meminfo
> MemTotal:   16342016 kB
> MemFree:22367268 kB
> MemAvailable:   22370528 kB
>> [...]
 I played with this a bit and can see the same problem. The sanity
 check of CmaFree < CmaTotal generally triggers in
 __move_zone_freepage_state in unset_migratetype_isolate.
 This also seems to be present as far back as v4.0 which was the
 first version to have the updated accounting from Joonsoo.
 Were there known limitations with the new freepage accounting,
 Joonsoo?
>>> I don't know. I also played with this and it looks like there is an
>>> accounting problem; however, for my case, the number of free pages is
>>> slightly less than total. I will take a look.
>>>
>>> Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
>>> look like your case.
>> I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
>> did some other test:
> Thanks! Now, I can reproduce the erroneous situation you mentioned.
>
>>  - run with a single thread 10 times, everything is fine.
>>
>>  - I hacked cma_alloc() and free as below [1] to see if it's a lock issue;
>>    with the same test with 100 threads, I got:
> [1] would not be sufficient to close this race.
>
> Try the following things [A]. And, for a more accurate test, I changed the
> code a bit more to prevent kernel page allocation from the CMA area [B].
> This prevents kernel page allocation from the CMA area completely so we can
> focus on the cma_alloc/release race.
>
> Although this is not the correct fix, it could help us guess
> where the problem is.
>
> Thanks.
>
> [A]

I tested solution [A] and it fixes the problem. As you are posting a new
patch, I will test that one and leave [B] alone :)

Thanks
Hanjun





Re: Suspicious error for CMA stress test

2016-03-03 Thread Joonsoo Kim
On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> > On 2016/3/3 15:42, Joonsoo Kim wrote:
> > > 2016-03-03 10:25 GMT+09:00 Laura Abbott :
> > >> (cc -mm and Joonsoo Kim)
> > >>
> > >>
> > >> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> > >>> Hi,
> > >>>
> > >>> I came across a suspicious error for CMA stress test:
> > >>>
> > >>> Before the test, I got:
> > >>> -bash-4.3# cat /proc/meminfo | grep Cma
> > >>> CmaTotal: 204800 kB
> > >>> CmaFree:  195044 kB
> > >>>
> > >>>
> > >>> After running the test:
> > >>> -bash-4.3# cat /proc/meminfo | grep Cma
> > >>> CmaTotal: 204800 kB
> > >>> CmaFree: 6602584 kB
> > >>>
> > >>> So the freed CMA memory is more than total..
> > >>>
> > >>> Also the MemFree is more than MemTotal:
> > >>>
> > >>> -bash-4.3# cat /proc/meminfo
> > >>> MemTotal:   16342016 kB
> > >>> MemFree:22367268 kB
> > >>> MemAvailable:   22370528 kB
> > [...]
> > >>
> > >> I played with this a bit and can see the same problem. The sanity
> > >> check of CmaFree < CmaTotal generally triggers in
> > >> __move_zone_freepage_state in unset_migratetype_isolate.
> > >> This also seems to be present as far back as v4.0 which was the
> > >> first version to have the updated accounting from Joonsoo.
> > >> Were there known limitations with the new freepage accounting,
> > >> Joonsoo?
> > > I don't know. I also played with this and it looks like there is an
> > > accounting problem; however, for my case, the number of free pages is
> > > slightly less than total. I will take a look.
> > >
> > > Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
> > > look like your case.
> > 
> > I tested with malloc_size with 2M, and it grows much bigger than 1M, also I
> > did some other test:
> 
> Thanks! Now, I can reproduce the erroneous situation you mentioned.
> 
> > 
> >  - run with a single thread 10 times, everything is fine.
> > 
> >  - I hacked cma_alloc() and free as below [1] to see if it's a lock issue;
> >    with the same test with 100 threads, I got:
> 
> [1] would not be sufficient to close this race.
> 
> Try the following things [A]. And, for a more accurate test, I changed the
> code a bit more to prevent kernel page allocation from the CMA area [B].
> This prevents kernel page allocation from the CMA area completely so we can
> focus on the cma_alloc/release race.
> 
> Although this is not the correct fix, it could help us guess
> where the problem is.

A more correct fix is something like the one below.
Please test it.

It checks for the problematic buddy merging and prevents it.
I will try to find another way that is less intrusive to freepath performance.

Thanks.

>8---
From 855cb11368487a0f02a5ad5b3d9de375dfbb061c Mon Sep 17 00:00:00 2001
From: Joonsoo Kim 
Date: Fri, 4 Mar 2016 13:28:17 +0900
Subject: [PATCH] mm/cma: fix race

Signed-off-by: Joonsoo Kim 
---
 mm/page_alloc.c | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c6c38ed..a01c3b5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -620,8 +620,8 @@ static inline void rmv_page_order(struct page *page)
  *
  * For recording page's order, we use page_private(page).
  */
-static inline int page_is_buddy(struct page *page, struct page *buddy,
-   unsigned int order)
+static inline int page_is_buddy(struct zone *zone, struct page *page,
+   struct page *buddy, unsigned int order)
 {
if (!pfn_valid_within(page_to_pfn(buddy)))
return 0;
@@ -644,6 +644,12 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
if (page_zone_id(page) != page_zone_id(buddy))
return 0;
 
+   if (IS_ENABLED(CONFIG_CMA) &&
+   has_isolate_pageblock(zone) &&
+   order >= pageblock_order &&
+   is_migrate_isolate(get_pageblock_migratetype(buddy)))
+   return 0;
+
VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);
 
return 1;
@@ -711,7 +717,7 @@ static inline void __free_one_page(struct page *page,
while (order < max_order - 1) {
buddy_idx = __find_buddy_index(page_idx, order);
buddy = page + (buddy_idx - page_idx);
-   if (!page_is_buddy(page, buddy, order))
+   if (!page_is_buddy(zone, page, buddy, order))
break;
/*
 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
@@ -745,7 +751,7 @@ static inline void __free_one_page(struct page *page,
	higher_page = page + (combined_idx - page_idx);
	buddy_idx = __find_buddy_index(combined_idx, order + 1);
	higher_buddy = higher_page + 

