On 04/04/2017 11:50 AM, Jan Beulich wrote:
>>>> On 04.04.17 at 17:39, wrote:
>> On 04/04/2017 11:29 AM, Jan Beulich wrote:
>>>>>> On 04.04.17 at 17:14, wrote:
>>>> On 04/04/2017 10:46 AM, Jan Beulich wrote:
>>>>> @@ -933,6 +952,10 @@ static bool_t can_merge(struct page_info *buddy, unsigned int node,
>>>>>          (phys_to_nid(page_to_maddr(buddy)) != node) )
>>>>>          return false;
>>>>>
>>>>> +    if ( need_scrub !=
On 04/04/2017 10:46 AM, Jan Beulich wrote:
>> @@ -897,8 +916,8 @@ static int reserve_offlined_page(struct page_info *head)
>>      {
>>      merge:
>>          /* We don't consider merging outside the head_order. */
>> -        page_list_add_tail(cur_head, &heap(node
>>> On 03.04.17 at 18:50, wrote:
> @@ -856,6 +874,7 @@ static int reserve_offlined_page(struct page_info *head)
>      int zone = page_to_zone(head), i, head_order = PFN_ORDER(head), count = 0;
>      struct page_info *cur_head;
>      int cur_order;
> +    bool_t need_scrub = !!test_bit(_PGC_n
… so that it's easy to find pages that need to be scrubbed (those pages are
now marked with the _PGC_need_scrub bit).

Signed-off-by: Boris Ostrovsky
---
Changes in v2:
* Added page_list_add_scrub()
* Mark pages as needing a scrub irrespective of tainted state in free_heap_pages()

 xen/common/page_alloc.c