[PATCH RESEND] memory hotplug: fix a double register section info bug

2012-09-13 Thread qiuxishi
There may be a bug when registering section info. For example, on my Itanium
platform, the pfn range of node0 includes those of the other nodes, so the
other nodes' section info is registered twice and the memmap's page count
becomes 3.

node0: start_pfn=0x100, spanned_pfn=0x20fb00, present_pfn=0x7f8a3 => 0x000100-0x20fc00
node1: start_pfn=0x8,   spanned_pfn=0x8,      present_pfn=0x8     => 0x08-0x10
node2: start_pfn=0x10,  spanned_pfn=0x8,      present_pfn=0x8     => 0x10-0x18
node3: start_pfn=0x18,  spanned_pfn=0x8,      present_pfn=0x8     => 0x18-0x20

free_all_bootmem_node()
register_page_bootmem_info_node()
register_page_bootmem_info_section()

When hot-removing memory, we can't free the memmap's page because
page_count() is still 2 after put_page_bootmem().

sparse_remove_one_section()
free_section_usemap()
free_map_bootmem()
put_page_bootmem()

Signed-off-by: Xishi Qiu 
Signed-off-by: Jiang Liu 
---
 mm/memory_hotplug.c |   10 --
 1 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2adbcac..cf493c7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -126,9 +126,6 @@ static void register_page_bootmem_info_section(unsigned long start_pfn)
struct mem_section *ms;
struct page *page, *memmap;

-   if (!pfn_valid(start_pfn))
-   return;
-
section_nr = pfn_to_section_nr(start_pfn);
ms = __nr_to_section(section_nr);

@@ -187,9 +184,10 @@ void register_page_bootmem_info_node(struct pglist_data *pgdat)
end_pfn = pfn + pgdat->node_spanned_pages;

/* register_section info */
-   for (; pfn < end_pfn; pfn += PAGES_PER_SECTION)
-   register_page_bootmem_info_section(pfn);
-
+   for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+   if (pfn_valid(pfn) && (pfn_to_nid(pfn) == node))
+   register_page_bootmem_info_section(pfn);
+   }
 }
 #endif /* !CONFIG_SPARSEMEM_VMEMMAP */

-- 
1.7.1
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 3/3] memory-hotplug: bug fix race between isolation and allocation

2012-09-05 Thread qiuxishi
On 2012/9/5 17:40, Mel Gorman wrote:

> On Wed, Sep 05, 2012 at 04:26:02PM +0900, Minchan Kim wrote:
>> Like below, memory-hotplug makes race between page-isolation
>> and page-allocation so it can hit BUG_ON in __offline_isolated_pages.
>>
>>  CPU A   CPU B
>>
>> start_isolate_page_range
>> set_migratetype_isolate
>> spin_lock_irqsave(zone->lock)
>>
>>  free_hot_cold_page(Page A)
>>  /* without zone->lock */
>>  migratetype = get_pageblock_migratetype(Page A);
>>  /*
>>   * Page could be moved into MIGRATE_MOVABLE
>>   * of per_cpu_pages
>>   */
>> list_add_tail(&page->lru, &pcp->lists[migratetype]);
>>
>> set_pageblock_isolate
>> move_freepages_block
>> drain_all_pages

I think this is the problem you want to fix: it is not guaranteed that the
pcp pages will be moved onto the MIGRATE_ISOLATE list. They may be moved onto
the MIGRATE_MOVABLE list instead, because page_private() may be 2; the drain
uses page_private(), not get_pageblock_migratetype().

So when we finish migrating pages, the free pages from pcp may be allocated
again, and check_pages_isolated() fails.

drain_all_pages()
drain_local_pages()
drain_pages()
free_pcppages_bulk()
__free_one_page(page, zone, 0, 
page_private(page))

I reported this problem too: http://marc.info/?l=linux-mm&m=134555113706068&w=2
How about this change:
free_pcppages_bulk()
__free_one_page(page, zone, 0, get_pageblock_migratetype(page))

Thanks
Xishi Qiu

>>
>>  /* Page A could be in MIGRATE_MOVABLE of free_list. */
>>
>> check_pages_isolated
>> __test_page_isolated_in_pageblock
>> /*
>>  * We can't catch freed page which
>>  * is free_list[MIGRATE_MOVABLE]
>>  */
>> if (PageBuddy(page A))
>>  pfn += 1 << page_order(page A);
>>
>>  /* So, Page A could be allocated */
>>
>> __offline_isolated_pages
>> /*
>>  * BUG_ON hit or offline page
>>  * which is used by someone
>>  */
>> BUG_ON(!PageBuddy(page A));
>>
>
> offline_page calling BUG_ON because someone allocated the page is
> ridiculous. I did not spot where that check is but it should be changed. The
> correct action is to retry the isolation.
>
>> Signed-off-by: Minchan Kim 
>
> At no point in the changelog do you actually say what the patch does :/
>
>> ---
>>  mm/page_isolation.c |5 -
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
>> index acf65a7..4699d1f 100644
>> --- a/mm/page_isolation.c
>> +++ b/mm/page_isolation.c
>> @@ -196,8 +196,11 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn)
>>  continue;
>>  }
>>  page = pfn_to_page(pfn);
>> -if (PageBuddy(page))
>> +if (PageBuddy(page)) {
>> +if (get_page_migratetype(page) != MIGRATE_ISOLATE)
>> +break;
>>  pfn += 1 << page_order(page);
>> +}
>
> It is possible the page is moved to the MIGRATE_ISOLATE list between when
> the page was freed to the buddy allocator and this check was made. The
> page->index information is stale and the impact is that the hotplug
> operation fails when it could have succeeded. That said, I think it is a
> very unlikely race that will never happen in practice.
>
> More importantly, the effect of this path is that EBUSY gets bubbled all
> the way up and the hotplug operation fails. This is fine but as the page
> is free at the time this problem is detected you also have the option
> of moving the PageBuddy page to the MIGRATE_ISOLATE list at this time
> if you take the zone lock. This will mean you need to change the name of
> test_pages_isolated() of course.
>
>>  else if (page_count(page) == 0 &&
>>  get_page_migratetype(page) == MIGRATE_ISOLATE)
>>  pfn += 1;
>> --
>> 1.7.9.5
>>
>





[PATCH] memory-hotplug: fix a drain pcp bug when offline pages

2012-08-29 Thread qiuxishi
On 2012-8-22 16:37, Minchan Kim wrote:
> Hi Jiang,
> 
> On Wed, Aug 22, 2012 at 04:30:09PM +0800, Jiang Liu wrote:
>> On 2012-8-22 16:14, Minchan Kim wrote:
>>> On Wed, Aug 22, 2012 at 03:57:45PM +0800, qiuxishi wrote:
>>>> On 2012-8-22 11:34, Minchan Kim wrote:
>>>>> Hello Xishi,
>>>>>
>>>>> On Tue, Aug 21, 2012 at 08:12:05PM +0800, qiuxishi wrote:
>>>>>> From: Xishi Qiu 
>>>>>>
>>>>>> When offlining a section, we move all the free pages and pcp pages onto
>>>>>> the MIGRATE_ISOLATE list first.
>>>>>> start_isolate_page_range()
>>>>>>  set_migratetype_isolate()
>>>>>>  drain_all_pages(),
>>>>>>
>>>>>> Here is a problem: it is not guaranteed that the pcp pages will be
>>>>>> moved onto the MIGRATE_ISOLATE list. They may be moved onto the
>>>>>> MIGRATE_MOVABLE list instead, because page_private() may be 2. So when
>>>>>> we finish migrating pages, the free pages from pcp may be allocated
>>>>>> again, and check_pages_isolated() fails.
>>>>>> drain_all_pages()
>>>>>>  drain_local_pages()
>>>>>>  drain_pages()
>>>>>>  free_pcppages_bulk()
>>>>>>  __free_one_page(page, zone, 0, 
>>>>>> page_private(page));
>>>>>>
>>>>>> If we add move_freepages_block() after drain_all_pages(), it still
>>>>>> cannot ensure that all the pcp pages are moved onto the MIGRATE_ISOLATE
>>>>>> list when the system is under high load. The free pages from pcp may
>>>>>> immediately be allocated again.
>>>>>>
>>>>>> I think a similar bug is described in
>>>>>> http://marc.info/?t=13425088233&r=1&w=2
>>>>>
>>>>> Yes. I reported the problem a few months ago, but it's not a real bug
>>>>> in practice; I found it by code inspection, so I wanted to confirm the
>>>>> problem.
>>>>>
>>>>> Did you find that problem in real practice, or just by code review?
>>>>>
>>>>
>>>> I use /sys/devices/system/memory/soft_offline_page to offline a lot of
>>>> pages while the system is under high load, then I find some unknown zero
>>>> refcount pages, such as
>>>> get_any_page: 0x650422: unknown zero refcount page type 19400c
>>>> get_any_page: 0x650867: unknown zero refcount page type 19400c
>>>>
>>>> soft_offline_page()
>>>>get_any_page()
>>>>set_migratetype_isolate()
>>>>drain_all_pages()
>>>>
>>>> I think after drain_all_pages(), the pcp pages are moved onto the
>>>> MIGRATE_MOVABLE list managed by the buddy allocator, but they are
>>>> allocated and become pcp pages again while the system is under high
>>>> load. This problem goes away with this patch applied.
>>>>
>>>>> Anyway, I don't like your approach, which I already considered, because
>>>>> it hurts the hotpath while the race is really unlikely;
>>>>> get_pageblock_migratetype() is never trivial. We should avoid the
>>>>> overhead in the hotpath and move it into memory-hotplug itself.
>>>>> Do you see my patch in https://patchwork.kernel.org/patch/1225081/ ?
>>>>
>>>> Yes, you are right; I will try to find another way to fix this problem.
>>>> How about doing this work in set_migratetype_isolate(): find the pcp
>>>> pages and change their private value to get_pageblock_migratetype(page)?
>>>>
>>>
>>> The allocator doesn't take any lock when it allocates a page from the
>>> pcp lists. How could you prevent the race between the allocator and the
>>> memory-hotplug routine (i.e. set_migratetype_isolate()) without hurting
>>> the hotpath?
>> Hi Minchan,
>> Hi Minchan,
>>  I have thought about using a jump label in the hot path, which won't
>> cause a big performance drop, but it seems a little dirty. What are your
>> thoughts?
> 
> I don't know the static_key_false() internals well.
> Questions.
> 
> 1. Is it implemented by all archs?
> 2. How does it work? Is it almost zero-cost on all archs?
> 3. Don't we really have any solution other than hacking the hotpath
>    (i.e. order-0 page allocation)?
> 4. Please see my solution at the above URL. Does it have any problem?
> 

Hi Minchan,

Yes, your patch does resolve this problem: it returns a failure from
__test_page_isolated_in_pageblock(), so the memory offline fails.

My patch resolves this problem too: it drains the pcp pages to the
MIGRATE_ISOLATE list, so the memory offline succeeds, but it causes a
big performance drop.

I think Gerry's method looks fine.

Thanks
Xishi Qiu

>>
>>  migrate_type = page_private(page);
>>  if (static_key_false(&memory_hotplug_inprogress))
>>  migrate_type = get_pageblock_migratetype(page);
>>  __free_one_page(page, zone, 0, migrate_type);
>>
>>  Regards!
>>  Gerry
>>
>> --
>> To unsubscribe, send a message with 'unsubscribe linux-mm' in
>> the body to majord...@kvack.org.  For more info on Linux MM,
>> see: http://www.linux-mm.org/ .
>> Don't email: em...@kvack.org
> 




[PATCH V2] memory-hotplug: add build zonelists when offline pages

2012-08-26 Thread qiuxishi
From: Xishi Qiu 

online_pages() does build_all_zonelists() and zone_pcp_update();
I think offline_pages() should do the same. When the zone has no
memory left to allocate, remove it from the other nodes' zonelists.
zone_batchsize() depends on the zone's present pages, so if the zone's
present pages change, the zone's pcp should be updated.


Signed-off-by: Xishi Qiu 
---
 mm/memory_hotplug.c |7 ++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc7e7a2..5f6997f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -973,8 +973,13 @@ repeat:

init_per_zone_wmark_min();

-   if (!populated_zone(zone))
+   if (!populated_zone(zone)) {
zone_pcp_reset(zone);
+   mutex_lock(&zonelists_mutex);
+   build_all_zonelists(NULL, NULL);
+   mutex_unlock(&zonelists_mutex);
+   } else
+   zone_pcp_update(zone);

if (!node_present_pages(node)) {
node_clear_state(node, N_HIGH_MEMORY);
-- 
1.7.6.1



Re: [PATCH] memory-hotplug: add build zonelists when offline pages

2012-08-22 Thread qiuxishi
On 2012-8-22 14:15, Wen Congyang wrote:
> At 08/21/2012 08:51 PM, qiuxishi Wrote:
>> From: Xishi Qiu 
>>
>> online_pages() does build_all_zonelists() and zone_pcp_update(),
>> I think offline_pages() should do it too. The node has no memory
>> to allocate, so remove this node's zones from other nodes' zonelists.
>>
>>
>> Signed-off-by: Xishi Qiu 
>> ---
>>  mm/memory_hotplug.c |6 +-
>>  1 files changed, 5 insertions(+), 1 deletions(-)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index bc7e7a2..5172bd4 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -979,7 +979,11 @@ repeat:
>>  if (!node_present_pages(node)) {
>>  node_clear_state(node, N_HIGH_MEMORY);
>>  kswapd_stop(node);
>> -}
>> +mutex_lock(&zonelists_mutex);
>> +build_all_zonelists(NODE_DATA(node), NULL);
> 
> The node is still onlined now, so there is no need to pass
> this node's pgdat to build_all_zonelists().
> 
> I think we should build all zonelists when the zone has no
> pages.
> 
>> +mutex_unlock(&zonelists_mutex);
>> +} else
>> +zone_pcp_update(zone);
> 
> There is more than one zone in a node. So the zone can have
> no pages when the node has some pages.
> 

Yes, you are right. Here is the new patch,

Signed-off-by: Xishi Qiu 
---
 mm/memory_hotplug.c |7 ++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc7e7a2..5f6997f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -973,8 +973,13 @@ repeat:

init_per_zone_wmark_min();

-   if (!populated_zone(zone))
+   if (!populated_zone(zone)) {
zone_pcp_reset(zone);
+   mutex_lock(&zonelists_mutex);
+   build_all_zonelists(NULL, NULL);
+   mutex_unlock(&zonelists_mutex);
+   } else
+   zone_pcp_update(zone);

if (!node_present_pages(node)) {
node_clear_state(node, N_HIGH_MEMORY);
-- 
1.7.6.1

> And we have called drain_all_pages(), I think there is no need
> to call zone_pcp_update() here.
> 
> Thanks
> Wen Congyang
> 

zone_pcp_update() recalculates zone_batchsize(), which drain_all_pages()
does not do.

Thanks
Xishi Qiu

>>
>>  vm_total_pages = nr_free_pagecache_pages();
>>  writeback_set_ratelimit();
> 
> 



Re: [PATCH] memory-hotplug: fix a drain pcp bug when offline pages

2012-08-22 Thread qiuxishi
On 2012-8-22 11:34, Minchan Kim wrote:
> Hello Xishi,
> 
> On Tue, Aug 21, 2012 at 08:12:05PM +0800, qiuxishi wrote:
>> From: Xishi Qiu 
>>
>> When offlining a section, we move all the free pages and pcp pages onto
>> the MIGRATE_ISOLATE list first.
>> start_isolate_page_range()
>>  set_migratetype_isolate()
>>  drain_all_pages()
>>
>> Here is a problem: it is not guaranteed that the pcp pages will be moved
>> onto the MIGRATE_ISOLATE list. They may be moved onto the MIGRATE_MOVABLE
>> list instead, because page_private() may be 2. So when we finish migrating
>> pages, the free pages from pcp may be allocated again, and
>> check_pages_isolated() fails.
>> drain_all_pages()
>>  drain_local_pages()
>>  drain_pages()
>>  free_pcppages_bulk()
>>  __free_one_page(page, zone, 0, 
>> page_private(page));
>>
>> If we add move_freepages_block() after drain_all_pages(), it still cannot
>> ensure that all the pcp pages are moved onto the MIGRATE_ISOLATE list when
>> the system is under high load. The free pages from pcp may immediately be
>> allocated again.
>>
>> I think a similar bug is described in
>> http://marc.info/?t=13425088233&r=1&w=2
> 
> Yes. I reported the problem a few months ago, but it's not a real bug in
> practice; I found it by code inspection, so I wanted to confirm the problem.
> 
> Did you find that problem in real practice, or just by code review?
> 

I use /sys/devices/system/memory/soft_offline_page to offline a lot of pages
while the system is under high load, then I find some unknown zero refcount
pages, such as
get_any_page: 0x650422: unknown zero refcount page type 19400c
get_any_page: 0x650867: unknown zero refcount page type 19400c

soft_offline_page()
get_any_page()
set_migratetype_isolate()
drain_all_pages()

I think after drain_all_pages(), the pcp pages are moved onto the
MIGRATE_MOVABLE list managed by the buddy allocator, but they are allocated
and become pcp pages again while the system is under high load. This problem
goes away with this patch applied.

> Anyway, I don't like your approach, which I already considered, because it
> hurts the hotpath while the race is really unlikely;
> get_pageblock_migratetype() is never trivial. We should avoid the overhead
> in the hotpath and move it into memory-hotplug itself.
> Do you see my patch in https://patchwork.kernel.org/patch/1225081/ ?

Yes, you are right; I will try to find another way to fix this problem.
How about doing this work in set_migratetype_isolate(): find the pcp pages
and change their private value to get_pageblock_migratetype(page)?

Thanks
Xishi Qiu

>>
>>
>> Signed-off-by: Xishi Qiu 
>> ---
>>  mm/page_alloc.c |3 ++-
>>  1 files changed, 2 insertions(+), 1 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index d0723b2..501f6de 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -673,7 +673,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>>  /* must delete as __free_one_page list manipulates */
>>  list_del(&page->lru);
>>  /* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
>> -__free_one_page(page, zone, 0, page_private(page));
>> +__free_one_page(page, zone, 0,
>> +get_pageblock_migratetype(page));
>>  trace_mm_page_pcpu_drain(page, 0, page_private(page));
>>  } while (--to_free && --batch_free && !list_empty(list));
>>  }
>> --
>> 1.7.6.1
> 



Re: [PATCH] memory-hotplug: fix a drain pcp bug when offline pages

2012-08-22 Thread qiuxishi
On 2012-8-22 11:34, Minchan Kim wrote:
 Hello Xishi,
 
 On Tue, Aug 21, 2012 at 08:12:05PM +0800, qiuxishi wrote:
 From: Xishi Qiu qiuxi...@huawei.com

 When offline a section, we move all the free pages and pcp into 
 MIGRATE_ISOLATE list first.
 start_isolate_page_range()
  set_migratetype_isolate()
  drain_all_pages(),

 Here is a problem, it is not sure that pcp will be moved into 
 MIGRATE_ISOLATE list. They may
 be moved into MIGRATE_MOVABLE list because page_private() maybe 2. So when 
 finish migrating
 pages, the free pages from pcp may be allocated again, and faild in 
 check_pages_isolated().
 drain_all_pages()
  drain_local_pages()
  drain_pages()
  free_pcppages_bulk()
  __free_one_page(page, zone, 0, 
 page_private(page));

 If we add move_freepages_block() after drain_all_pages(), it can not sure 
 that all the pcp
 will be moved into MIGRATE_ISOLATE list when the system works on high load. 
 The free pages
 which from pcp may immediately be allocated again.

 I think the similar bug described in 
 http://marc.info/?t=13425088233r=1w=2
 
 Yes. I reported the problem a few month ago but it's not real bug in practice
 but found by my eyes during looking the code so I wanted to confirm the 
 problem.
 
 Do you find that problem in real practice? or just code review?
 

I used /sys/devices/system/memory/soft_offline_page to offline a lot of
pages while the system was under high load, and then found some unknown
zero refcount pages, such as:
get_any_page: 0x650422: unknown zero refcount page type 19400c
get_any_page: 0x650867: unknown zero refcount page type 19400c

soft_offline_page()
 get_any_page()
 set_migratetype_isolate()
 drain_all_pages()

I think that after drain_all_pages(), the pcp pages are moved onto the
MIGRATE_MOVABLE list managed by the buddy allocator, but they are
allocated and become pcp pages again as the system runs under high
load. This problem does not occur with this patch applied.
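The race described above can be sketched as a tiny model. The struct,
function names, and migratetype values below are illustrative, not the
kernel's real structures: the point is only that a migratetype cached at
free time can go stale once the pageblock is isolated.

```c
#include <assert.h>

/* Illustrative values; the real kernel enum differs. */
enum { MIGRATE_MOVABLE = 2, MIGRATE_ISOLATE = 5 };

struct model_page {
	int private_mt;   /* migratetype cached (as in page_private()) when the page entered the pcp */
	int pageblock_mt; /* current migratetype of the containing pageblock */
};

/* Old behaviour: drain the page to the free list named by the stale cached value. */
static int drain_target_stale(const struct model_page *p)
{
	return p->private_mt;
}

/* Patched behaviour: re-read the pageblock's current migratetype at drain time. */
static int drain_target_fresh(const struct model_page *p)
{
	return p->pageblock_mt;
}
```

If set_migratetype_isolate() runs after the page entered the pcp, the
stale path puts the page back on MIGRATE_MOVABLE where it can be
allocated again, while the fresh path lands it on MIGRATE_ISOLATE.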

 Anyway, I don't like your approach, which I had already considered,
 because it hurts the hot path while the race is really unlikely;
 get_pageblock_migratetype() is never trivial. We should avoid the
 overhead in the hot path and move the handling into memory-hotplug
 itself. Did you see my patch in https://patchwork.kernel.org/patch/1225081/ ?

Yes, you are right, I will try to find another way to fix this problem.
How about doing this work in set_migratetype_isolate(): find the pcp
pages and change their private value to get_pageblock_migratetype(page)?

Thanks
Xishi Qiu



 Signed-off-by: Xishi Qiu qiuxi...@huawei.com
 ---
  mm/page_alloc.c |3 ++-
  1 files changed, 2 insertions(+), 1 deletions(-)

 diff --git a/mm/page_alloc.c b/mm/page_alloc.c
 index d0723b2..501f6de 100644
 --- a/mm/page_alloc.c
 +++ b/mm/page_alloc.c
 @@ -673,7 +673,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
  	/* must delete as __free_one_page list manipulates */
  	list_del(&page->lru);
  	/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
 -	__free_one_page(page, zone, 0, page_private(page));
 +	__free_one_page(page, zone, 0,
 +			get_pageblock_migratetype(page));
  	trace_mm_page_pcpu_drain(page, 0, page_private(page));
  	} while (--to_free && --batch_free && !list_empty(list));
  }
 -- 
 1.7.6.1

 --
 To unsubscribe, send a message with 'unsubscribe linux-mm' in
 the body to majord...@kvack.org.  For more info on Linux MM,
 see: http://www.linux-mm.org/ .
 Don't email: a href=mailto:d...@kvack.org; em...@kvack.org /a
 



Re: [PATCH] memory-hotplug: add build zonelists when offline pages

2012-08-22 Thread qiuxishi
On 2012-8-22 14:15, Wen Congyang wrote:
 At 08/21/2012 08:51 PM, qiuxishi Wrote:
 From: Xishi Qiu qiuxi...@huawei.com

 online_pages() does build_all_zonelists() and zone_pcp_update();
 I think offline_pages() should do the same. The node has no memory
 left to allocate, so remove this node's zones from other nodes' zonelists.


 Signed-off-by: Xishi Qiu qiuxi...@huawei.com
 ---
  mm/memory_hotplug.c |6 +-
  1 files changed, 5 insertions(+), 1 deletions(-)

 diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
 index bc7e7a2..5172bd4 100644
 --- a/mm/memory_hotplug.c
 +++ b/mm/memory_hotplug.c
 @@ -979,7 +979,11 @@ repeat:
  if (!node_present_pages(node)) {
  node_clear_state(node, N_HIGH_MEMORY);
  kswapd_stop(node);
 -}
 +mutex_lock(&zonelists_mutex);
 +build_all_zonelists(NODE_DATA(node), NULL);
 
 The node is still online at this point, so there is no need to pass
 this node's pgdat to build_all_zonelists().
 
 I think we should build all the zonelists when the zone has no
 pages.
 
 +mutex_unlock(&zonelists_mutex);
 +} else
 +zone_pcp_update(zone);
 
 There is more than one zone in a node, so a zone can have no pages
 even when the node still has some pages.
 

Yes, you are right. Here is the new patch,

Signed-off-by: Xishi Qiu qiuxi...@huawei.com
---
 mm/memory_hotplug.c |7 ++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc7e7a2..5f6997f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -973,8 +973,13 @@ repeat:

init_per_zone_wmark_min();

-   if (!populated_zone(zone))
+   if (!populated_zone(zone)) {
zone_pcp_reset(zone);
+   mutex_lock(&zonelists_mutex);
+   build_all_zonelists(NULL, NULL);
+   mutex_unlock(&zonelists_mutex);
+   } else
+   zone_pcp_update(zone);

if (!node_present_pages(node)) {
node_clear_state(node, N_HIGH_MEMORY);
-- 
1.7.6.1

 And we have already called drain_all_pages(), so I think there is no
 need to call zone_pcp_update() here.
 
 Thanks
 Wen Congyang
 

In zone_pcp_update(), zone_batchsize() is recalculated, which
drain_all_pages() does not do.
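The point here is that the pcp batch depends on the zone size, so it has
to be recomputed after pages are offlined. A rough sketch of that
dependence follows; the clamp values and divisor are illustrative, not
the kernel's exact zone_batchsize() formula:

```c
/* Illustrative model: the per-cpu batch grows with the zone, clamped to
 * a small fixed maximum. drain_all_pages() empties the pcp lists but
 * never recomputes this value; zone_pcp_update() does. */
static unsigned long model_batchsize(unsigned long zone_pages)
{
	unsigned long batch = zone_pages / 1024; /* grow with the zone */
	if (batch > 32)
		batch = 32; /* clamp to keep the per-cpu cache small */
	if (batch < 1)
		batch = 1;  /* always allow at least one page per batch */
	return batch;
}
```

After offlining shrinks a zone, the batch computed from the old size
would be too large, which is why the thread argues zone_pcp_update()
is still needed.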

Thanks
Xishi Qiu
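The zonelist rebuild this thread argues for can be modeled with a flat
fallback list: once a node has no present pages, it should no longer
appear in any node's allocation fallback order. All names below are
illustrative, not the kernel's real zonelist structures.

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_NODES 4

/* present pages per node; the index is the node id (illustrative) */
static unsigned long model_present[MODEL_NODES];

/* Rebuild a flat fallback zonelist: memoryless nodes are skipped,
 * mirroring what rebuilding the zonelists achieves after offline. */
static size_t model_build_zonelist(int *list, size_t cap)
{
	size_t n = 0;
	for (int nid = 0; nid < MODEL_NODES && n < cap; nid++)
		if (model_present[nid])
			list[n++] = nid;
	return n;
}
```

Without the rebuild, allocators would keep falling back to a node that
can no longer satisfy any request.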





[PATCH] memory-hotplug: add build zonelists when offline pages

2012-08-21 Thread qiuxishi
From: Xishi Qiu 

online_pages() does build_all_zonelists() and zone_pcp_update();
I think offline_pages() should do the same. The node has no memory
left to allocate, so remove this node's zones from other nodes' zonelists.


Signed-off-by: Xishi Qiu 
---
 mm/memory_hotplug.c |6 +-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc7e7a2..5172bd4 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -979,7 +979,11 @@ repeat:
if (!node_present_pages(node)) {
node_clear_state(node, N_HIGH_MEMORY);
kswapd_stop(node);
-   }
+   mutex_lock(&zonelists_mutex);
+   build_all_zonelists(NODE_DATA(node), NULL);
+   mutex_unlock(&zonelists_mutex);
+   } else
+   zone_pcp_update(zone);

vm_total_pages = nr_free_pagecache_pages();
writeback_set_ratelimit();
-- 
1.7.6.1


[PATCH] memory-hotplug: fix a drain pcp bug when offline pages

2012-08-21 Thread qiuxishi
From: Xishi Qiu 

When offlining a section, we first move all the free pages and pcp
pages onto the MIGRATE_ISOLATE list.
start_isolate_page_range()
set_migratetype_isolate()
drain_all_pages()

Here is a problem: it is not guaranteed that the pcp pages end up on the
MIGRATE_ISOLATE list. They may be moved onto the MIGRATE_MOVABLE list
instead, because page_private() may still be 2. So when page migration
finishes, the free pages from the pcp may be allocated again, and
check_pages_isolated() fails.
drain_all_pages()
drain_local_pages()
drain_pages()
free_pcppages_bulk()
__free_one_page(page, zone, 0, page_private(page));

If we add move_freepages_block() after drain_all_pages(), it still
cannot guarantee that all the pcp pages are moved onto the
MIGRATE_ISOLATE list when the system is under high load; the free pages
from the pcp may immediately be allocated again.

I think a similar bug is described in http://marc.info/?t=13425088233&r=1&w=2


Signed-off-by: Xishi Qiu 
---
 mm/page_alloc.c |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d0723b2..501f6de 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -673,7 +673,8 @@ static void free_pcppages_bulk(struct zone *zone, int count,
/* must delete as __free_one_page list manipulates */
	list_del(&page->lru);
/* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */
-   __free_one_page(page, zone, 0, page_private(page));
+   __free_one_page(page, zone, 0,
+   get_pageblock_migratetype(page));
trace_mm_page_pcpu_drain(page, 0, page_private(page));
} while (--to_free && --batch_free && !list_empty(list));
}
-- 
1.7.6.1







[PATCH] memory hotplug: avoid double registration on ia64 platform

2012-08-17 Thread qiuxishi
From: Xishi Qiu 

Hi all,
There may be a bug when registering section info. For example, on
an Itanium platform, the pfn range of node0 includes the other nodes,
so when hot removing memory we can't free the memmap's pages because
page_count() is 2 after put_page_bootmem().

sparse_remove_one_section()->free_section_usemap()->free_map_bootmem()
->put_page_bootmem()

pgdat0: start_pfn=0x100, spanned_pfn=0x20fb00, present_pfn=0x7f8a3, => 0x100-0x20fc00
pgdat1: start_pfn=0x8,   spanned_pfn=0x8,      present_pfn=0x8,     => 0x8-0x10
pgdat2: start_pfn=0x10,  spanned_pfn=0x8,      present_pfn=0x8,     => 0x10-0x18
pgdat3: start_pfn=0x18,  spanned_pfn=0x8,      present_pfn=0x8,     => 0x18-0x20
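The overlap in the pgdat layout above can be modeled with a handful of
sections, each owned by one node, while node0's span covers all of them.
The section count, ownership array, and function names below are
illustrative; the filter argument stands in for the pfn_to_nid() check
that the patch adds.

```c
#include <assert.h>

#define SECTIONS 8

/* Owning node of each model section; node0's span covers all of them,
 * as in the pgdat layout above (values are illustrative). */
static const int section_nid[SECTIONS] = { 0, 0, 1, 1, 2, 2, 3, 3 };
static int reg_count[SECTIONS];

/* Walk one node's span of sections; with `filter` set, register only
 * sections that actually belong to that node, mirroring the
 * pfn_to_nid(pfn) == node check in register_page_bootmem_info_node(). */
static void model_register_node(int node, int start, int end, int filter)
{
	for (int s = start; s < end; s++)
		if (!filter || section_nid[s] == node)
			reg_count[s]++;
}
```

Without the filter, every section inside node0's span gets registered
once by node0 and again by its owning node, which is exactly the
double registration that leaves page_count() at 2.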


Signed-off-by: Xishi Qiu 
---
 mm/memory_hotplug.c |   10 --
 1 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2adbcac..cf493c7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -126,9 +126,6 @@ static void register_page_bootmem_info_section(unsigned 
long start_pfn)
struct mem_section *ms;
struct page *page, *memmap;

-   if (!pfn_valid(start_pfn))
-   return;
-
section_nr = pfn_to_section_nr(start_pfn);
ms = __nr_to_section(section_nr);

@@ -187,9 +184,10 @@ void register_page_bootmem_info_node(struct pglist_data 
*pgdat)
end_pfn = pfn + pgdat->node_spanned_pages;

/* register_section info */
-   for (; pfn < end_pfn; pfn += PAGES_PER_SECTION)
-   register_page_bootmem_info_section(pfn);
-
+   for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+   if (pfn_valid(pfn) && (pfn_to_nid(pfn) == node))
+   register_page_bootmem_info_section(pfn);
+   }
 }
 #endif /* !CONFIG_SPARSEMEM_VMEMMAP */

-- 
1.7.1

