On 09/18/2013 03:17 AM, Bob Liu wrote:
> On 09/17/2013 10:22 PM, Vlastimil Babka wrote:
>> --- a/mm/mlock.c
>> +++ b/mm/mlock.c
>> @@ -379,10 +379,14 @@ static unsigned long __munlock_pagevec_fill(struct pagevec *pvec,
>>
>> /*
>>  * Initialize pte walk starting at the already pinned page where
On 09/17/2013 10:22 PM, Vlastimil Babka wrote:
> The function __munlock_pagevec_fill() introduced in commit 7a8010cd3
> ("mm: munlock: manual pte walk in fast path instead of follow_page_mask()")
> uses pmd_addr_end() for restricting its operation within current page table.
> This is insufficient on architectures/configurations where pmd is folded
> and pmd_addr_end() just returns the end of the full range to be walked.
> In this case it allows pte++ to walk off the end of a page table, resulting
> in unpredictable behaviour.