On 4/4/19 1:56 AM, Peter Zijlstra wrote:
> On Wed, Apr 03, 2019 at 11:34:12AM -0600, Khalid Aziz wrote:
>> From: Julian Stecklina <[email protected]>
>>
>> Only the xpfo_kunmap call that actually unmaps the page needs
>> to be serialized. We must be careful to handle the case where,
>> after the atomic decrement of the mapcount, an xpfo_kmap has
>> increased the mapcount again. In that case, we can safely skip
>> modifying the page table.
>>
>> Model-checked with up to 4 concurrent callers with Spin.
>>
>> Signed-off-by: Julian Stecklina <[email protected]>
>> Signed-off-by: Khalid Aziz <[email protected]>
>> Cc: Khalid Aziz <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Cc: Vasileios P. Kemerlis <[email protected]>
>> Cc: Juerg Haefliger <[email protected]>
>> Cc: Tycho Andersen <[email protected]>
>> Cc: Marco Benatto <[email protected]>
>> Cc: David Woodhouse <[email protected]>
>> ---
>>  include/linux/xpfo.h | 24 +++++++++++++++---------
>>  1 file changed, 15 insertions(+), 9 deletions(-)
>>
>> diff --git a/include/linux/xpfo.h b/include/linux/xpfo.h
>> index 2318c7eb5fb7..37e7f52fa6ce 100644
>> --- a/include/linux/xpfo.h
>> +++ b/include/linux/xpfo.h
>> @@ -61,6 +61,7 @@ static inline void xpfo_kmap(void *kaddr, struct page *page)
>>  static inline void xpfo_kunmap(void *kaddr, struct page *page)
>>  {
>>      unsigned long flags;
>> +    bool flush_tlb = false;
>>  
>>      if (!static_branch_unlikely(&xpfo_inited))
>>              return;
>> @@ -72,18 +73,23 @@ static inline void xpfo_kunmap(void *kaddr, struct page *page)
>>       * The page is to be allocated back to user space, so unmap it from
>>       * the kernel, flush the TLB and tag it as a user page.
>>       */
>> -    spin_lock_irqsave(&page->xpfo_lock, flags);
>> -
>>      if (atomic_dec_return(&page->xpfo_mapcount) == 0) {
>> -#ifdef CONFIG_XPFO_DEBUG
>> -            WARN_ON(PageXpfoUnmapped(page));
>> -#endif
>> -            SetPageXpfoUnmapped(page);
>> -            set_kpte(kaddr, page, __pgprot(0));
>> -            xpfo_flush_kernel_tlb(page, 0);
>> +            spin_lock_irqsave(&page->xpfo_lock, flags);
>> +
>> +            /*
>> +             * If we raced with a kmap after the
>> +             * atomic_dec_return, we must not nuke the mapping.
>> +             */
>> +            if (atomic_read(&page->xpfo_mapcount) == 0) {
>> +                    SetPageXpfoUnmapped(page);
>> +                    set_kpte(kaddr, page, __pgprot(0));
>> +                    flush_tlb = true;
>> +            }
>> +            spin_unlock_irqrestore(&page->xpfo_lock, flags);
>>      }
>>  
>> -    spin_unlock_irqrestore(&page->xpfo_lock, flags);
>> +    if (flush_tlb)
>> +            xpfo_flush_kernel_tlb(page, 0);
>>  }
> 
> This doesn't help with the TLB invalidation issue, AFAICT this is still
> completely buggered. kunmap_atomic() can be called from IRQ context.
> 

OK. xpfo_kmap/xpfo_kunmap need redesign.

--
Khalid

_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu
