On 05/23/2017 02:42 PM, Vlastimil Babka wrote:
> On 05/16/2017 10:29 PM, Andrea Arcangeli wrote:
>> On Wed, Apr 12, 2017 at 03:33:35PM +0200, Vlastimil Babka wrote:
>>>
>>> pmdp_invalidate() does:
>>>
>>> pmd_t entry = *pmdp;
>>> set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
On 05/16/2017 10:29 PM, Andrea Arcangeli wrote:
> On Wed, Apr 12, 2017 at 03:33:35PM +0200, Vlastimil Babka wrote:
>>
>> pmdp_invalidate() does:
>>
>> pmd_t entry = *pmdp;
>> set_pmd_at(vma->vm_mm, address, pmdp, pmd_mknotpresent(entry));
>>
>> so it's not atomic and if CPU sets
On Wed, Apr 12, 2017 at 03:33:35PM +0200, Vlastimil Babka wrote:
> On 03/02/2017 04:10 PM, Kirill A. Shutemov wrote:
> > In case prot_numa, we are under down_read(mmap_sem). It's critical
> > to not clear pmd intermittently to avoid race with MADV_DONTNEED
> > which is also under down_read(mmap_sem):
On 04/12/2017 03:33 PM, Vlastimil Babka wrote:
> On 03/02/2017 04:10 PM, Kirill A. Shutemov wrote:
>> In case prot_numa, we are under down_read(mmap_sem). It's critical
>> to not clear pmd intermittently to avoid race with MADV_DONTNEED
>> which is also under down_read(mmap_sem):
>>
>> CPU0:						CPU1:
On 03/02/2017 04:10 PM, Kirill A. Shutemov wrote:
> In case prot_numa, we are under down_read(mmap_sem). It's critical
> to not clear pmd intermittently to avoid race with MADV_DONTNEED
> which is also under down_read(mmap_sem):
>
> CPU0: CPU1:
>
On 03/02/2017 07:10 AM, Kirill A. Shutemov wrote:
> @@ -1744,7 +1744,39 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> if (prot_numa && pmd_protnone(*pmd))
> goto unlock;
>
> - entry = pmdp_huge_get_and_clear_notify(mm, addr, pmd);
Are there any
In case prot_numa, we are under down_read(mmap_sem). It's critical
to not clear pmd intermittently to avoid race with MADV_DONTNEED
which is also under down_read(mmap_sem):
CPU0: CPU1:
change_huge_pmd(prot_numa=1)