On 09/22/2016 11:12 AM, Dave Hansen wrote:
> On 09/22/2016 09:29 AM, Gerald Schaefer wrote:
>>  static void dissolve_free_huge_page(struct page *page)
>>  {
>> +    struct page *head = compound_head(page);
>> +    struct hstate *h = page_hstate(head);
>> +    int nid = page_to_nid(head);
>> +
>>      spin_lock(&hugetlb_lock);
>> -    if (PageHuge(page) && !page_count(page)) {
>> -            struct hstate *h = page_hstate(page);
>> -            int nid = page_to_nid(page);
>> -            list_del(&page->lru);
>> -            h->free_huge_pages--;
>> -            h->free_huge_pages_node[nid]--;
>> -            h->max_huge_pages--;
>> -            update_and_free_page(h, page);
>> -    }
>> +    list_del(&head->lru);
>> +    h->free_huge_pages--;
>> +    h->free_huge_pages_node[nid]--;
>> +    h->max_huge_pages--;
>> +    update_and_free_page(h, head);
>>      spin_unlock(&hugetlb_lock);
>>  }
> 
> Do you need to revalidate anything once you acquire the lock?  Can this,
> for instance, race with another thread doing vm.nr_hugepages=0?  Or a
> thread faulting in and allocating the large page that's being dissolved?

I originally suggested the locking change, but this is not quite right
as written.  The page count for huge pages is adjusted while holding
hugetlb_lock, so the PageHuge()/page_count() check (or a revalidation
of it) needs to be done while holding the lock.
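
Something like the following (completely untested), keeping the original
check but only doing the work after it has been revalidated under
hugetlb_lock, so it cannot race with a concurrent allocation or free that
adjusts the page count under the same lock:

static void dissolve_free_huge_page(struct page *page)
{
	spin_lock(&hugetlb_lock);
	if (PageHuge(page) && !page_count(page)) {
		struct page *head = compound_head(page);
		struct hstate *h = page_hstate(head);
		int nid = page_to_nid(head);

		/* free list and counters are protected by hugetlb_lock */
		list_del(&head->lru);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		h->max_huge_pages--;
		update_and_free_page(h, head);
	}
	spin_unlock(&hugetlb_lock);
}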

That question made me think about huge page reservations.  I don't think
the memory offline code takes reservations into account, but you would not
want the huge page count to drop below the reserved huge page count
(resv_huge_pages).  So, shouldn't that be another condition to check before
allowing a huge page to be dissolved?
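
Roughly something like this inside the locked block above (again completely
untested, and it assumes an out_unlock label added just before the
spin_unlock()):

		/*
		 * Sketch: if every remaining free huge page is backing a
		 * reservation, dissolving this one would drop the pool
		 * below resv_huge_pages, so skip it.
		 */
		if (h->free_huge_pages - h->resv_huge_pages == 0)
			goto out_unlock;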

-- 
Mike Kravetz
