Re: A problem of Intel IOMMU hardware ?

2021-03-26 Thread Lu Baolu

Hi Nadav,

On 3/27/21 12:36 PM, Nadav Amit wrote:




On Mar 26, 2021, at 7:31 PM, Lu Baolu  wrote:

Hi Nadav,

On 3/19/21 12:46 AM, Nadav Amit wrote:

So here is my guess:
Intel probably used as a basis for the IOTLB an implementation of
some other (regular) TLB design.
Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
"Software wishing to prevent this uncertainty should not write to
a paging-structure entry in a way that would change, for any linear
address, both the page size and either the page frame, access rights,
or other attributes.”
Now the aforementioned uncertainty is a bit different (multiple
*valid*  translations of a single address). Yet, perhaps this is
yet another thing that might happen.
 From a brief look on the handling of MMU (not IOMMU) hugepages
in Linux, indeed the PMD is first cleared and flushed before a
new valid PMD is set. This is possible for MMUs since they
allow the software to handle spurious page-faults gracefully.
This is not the case for the IOMMU though (without PRI).
Not sure this explains everything though. If that is the problem,
then during a mapping that changes page-sizes, a TLB flush is
needed, similarly to the one Longpeng did manually.


I have been working with Longpeng on this issue these days. It turned
out that your guess is right. The PMD is first cleared but not flushed
before a new valid one is set. The previous entry might still be cached in the
paging-structure caches, which leads to disaster.

In __domain_mapping():

2352         /*
2353          * Ensure that old small page tables are
2354          * removed to make room for superpage(s).
2355          * We're adding new large pages, so make sure
2356          * we don't remove their parent tables.
2357          */
2358         dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
2359                                largepage_lvl + 1);

I guess adding a cache flush operation after PMD switching should solve
the problem.
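
To make the suggested ordering concrete, here is a minimal, self-contained C
sketch. It is not the intel-iommu code itself: free_small_page_tables(),
flush_iotlb_range() and install_superpage() are hypothetical stand-ins, and the
0x83 value just sets the R/W and superpage bits, in the spirit of the PDE value
0x...883 seen in the DMAR dumps elsewhere in this thread. The only point is the
ordering: invalidate the stale paging-structure-cache entries after the old 4K
tables are torn down and before the new superpage leaf entry is published.

#include <stdint.h>

struct dma_pte { uint64_t val; };   /* simplified second-level entry */

/* Stand-in: tear down the old 4K tables covering [iov_pfn, end_pfn]. */
static void free_small_page_tables(uint64_t iov_pfn, uint64_t end_pfn)
{
        (void)iov_pfn; (void)end_pfn;
}

/* Stand-in: invalidate the IOTLB *and* paging-structure caches for the range. */
static void flush_iotlb_range(uint64_t iov_pfn, uint64_t end_pfn)
{
        (void)iov_pfn; (void)end_pfn;
}

/* Replace a range of 4K mappings with one 2M superpage leaf entry. */
void install_superpage(struct dma_pte *pde, uint64_t phys_2m,
                       uint64_t iov_pfn, uint64_t end_pfn)
{
        /* 1. Remove the old small page tables (as __domain_mapping() does). */
        free_small_page_tables(iov_pfn, end_pfn);

        /*
         * 2. The missing step discussed above: flush before the new leaf
         *    entry becomes visible, so the hardware cannot keep walking a
         *    stale cached PDE that still points at the freed 4K table.
         */
        flush_iotlb_range(iov_pfn, end_pfn);

        /* 3. Publish the superpage entry with a single 64-bit store. */
        __atomic_store_n(&pde->val, phys_2m | 0x83, __ATOMIC_RELEASE);
}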

I am still not clear about this comment:

"
This is possible for MMUs since they allow the software to handle
spurious page-faults gracefully. This is not the case for the IOMMU
though (without PRI).
"

Can you please shed more light on this?


I was looking at the code in more detail, and apparently my concern
is incorrect.

I was under the assumption that the IOMMU map/unmap can merge/split
(specifically split) huge-pages. For instance, if you map 2MB and
then unmap 4KB out of the 2MB, then you would split the hugepage
and keep the rest of the mappings alive. This is the way MMU is
usually managed. To my defense, I also saw such partial unmappings
in Longpeng’s first scenario.

If this was possible, then you would have a case in which out of 2MB
(for instance), 4KB were unmapped, and you need to split the 2MB
hugepage into 4KB pages. If you try to clear the PMD, flush, and then
set the PMD to point to a table with live 4KB PTEs, you can have
an interim state in which the PMD is not present. DMAs that arrive
at this stage might fault, and without PRI (and device support)
you do not have a way of restarting the DMA after the hugepage split
is completed.
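
For readers less familiar with the MMU-side convention being described, here is
a rough "break-before-make" split sketch with hypothetical helpers (this is not
kernel code). The comment marks the window that is tolerable for a CPU MMU,
where a spurious fault can simply be retried, but not for an IOMMU without PRI,
where a DMA that misses during the window faults and cannot be restarted.

#include <stdint.h>

struct pmd_entry { uint64_t val; };     /* 2M leaf, or pointer to a PTE table */

static void tlb_flush_range(uint64_t va, uint64_t len) { (void)va; (void)len; }

static uint64_t alloc_and_fill_pte_table(uint64_t phys_2m)
{
        /* Build 512 x 4K entries covering the same 2M of physical memory. */
        (void)phys_2m;
        return 0;   /* physical address of the new table (stub) */
}

/* Split one 2M mapping into 4K mappings, the way the MMU code does it. */
void split_hugepage(struct pmd_entry *pmd, uint64_t va, uint64_t phys_2m)
{
        uint64_t table = alloc_and_fill_pte_table(phys_2m);

        /* 1. Break: clear the huge-page entry ... */
        __atomic_store_n(&pmd->val, 0, __ATOMIC_RELAXED);

        /* 2. ... and flush, so no cached translation for the range remains. */
        tlb_flush_range(va, 2u << 20);

        /*
         * Window: the range is now non-present.  A CPU access here takes a
         * spurious page fault that the kernel can resolve and retry.  A DMA
         * access here faults, and without PRI (plus device support) there is
         * no way to restart the transaction after the split completes.
         */

        /* 3. Make: install the pointer to the new 4K table. */
        __atomic_store_n(&pmd->val, table | 0x3, __ATOMIC_RELEASE);
}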


Got it, and thanks a lot for sharing.

For current IOMMU usage, I can't see any case that splits a huge page into
4KB pages, but in the near future we do have a need to split huge pages.
For example, when we want to use the A/D bits to track DMA-dirty pages
during VM migration, it would be an optimization if we could split a huge
page into 4K ones. So far, the solution I have considered is (a rough
sketch follows below):

1) Prepare the split subtables in advance;
   [identical to the existing mapping, only using 4K pages instead of a
huge page.]
2) Switch the super (huge) page's leaf entry;
   [at this point, hardware could use either subtable. I am not sure
whether the hardware allows a dynamic switch of a page table entry
from one valid entry to another valid one.]
3) Flush the cache.
   [hardware will then use the new subtable]

As long as we can make sure that the old subtable won't be used by
hardware, we can safely release the old table.
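
A minimal sketch of the three steps, written under the open assumption from
step 2) that the hardware tolerates an atomic switch from one valid leaf entry
to another. Every name below (build_4k_subtable(), flush_iotlb_and_pwc(), the
0x3 R/W encoding) is an illustrative stand-in, not the eventual driver
implementation.

#include <stdint.h>

struct dma_pte { uint64_t val; };

/* 1) Prepare, in advance, a 4K subtable mapping exactly the same physical
 *    range as the existing 2M superpage (R/W bits only, as in the dumps). */
static struct dma_pte *build_4k_subtable(uint64_t phys_2m)
{
        static struct dma_pte subtable[512];
        for (int i = 0; i < 512; i++)
                subtable[i].val = (phys_2m + (uint64_t)i * 4096) | 0x3;
        return subtable;
}

/* 3) Invalidate the IOTLB and paging-structure caches for the 2M range. */
static void flush_iotlb_and_pwc(uint64_t iova, uint64_t size)
{
        (void)iova; (void)size;
}

void split_superpage_for_dirty_tracking(struct dma_pte *leaf,
                                        uint64_t iova, uint64_t phys_2m)
{
        struct dma_pte *sub = build_4k_subtable(phys_2m);

        /* 2) Switch the leaf entry from one valid translation (2M superpage)
         *    to another (pointer to the 4K subtable) with a single 64-bit
         *    store, so there is never a transient non-present state. */
        __atomic_store_n(&leaf->val, (uint64_t)(uintptr_t)sub | 0x3,
                         __ATOMIC_RELEASE);

        flush_iotlb_and_pwc(iova, 2u << 20);

        /* Once the flush completes, the hardware can no longer use the old
         * superpage entry, so the old translation can be safely released and
         * A/D bits tracked per 4K page in the new subtable. */
}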



Anyhow, this concern is apparently not relevant. I guess I was too
naive to assume the IOMMU management is similar to the MMU. I now
see that there is a comment in intel_iommu_unmap() saying:

 /* Cope with horrid API which requires us to unmap more than the
size argument if it happens to be a large-page mapping. */

Regards,
Nadav



Best regards,
baolu


Re: A problem of Intel IOMMU hardware ?

2021-03-26 Thread Nadav Amit


> On Mar 26, 2021, at 7:31 PM, Lu Baolu  wrote:
> 
> Hi Nadav,
> 
> On 3/19/21 12:46 AM, Nadav Amit wrote:
>> So here is my guess:
>> Intel probably used as a basis for the IOTLB an implementation of
>> some other (regular) TLB design.
>> Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
>> "Software wishing to prevent this uncertainty should not write to
>> a paging-structure entry in a way that would change, for any linear
>> address, both the page size and either the page frame, access rights,
>> or other attributes.”
>> Now the aforementioned uncertainty is a bit different (multiple
>> *valid*  translations of a single address). Yet, perhaps this is
>> yet another thing that might happen.
>> From a brief look on the handling of MMU (not IOMMU) hugepages
>> in Linux, indeed the PMD is first cleared and flushed before a
>> new valid PMD is set. This is possible for MMUs since they
>> allow the software to handle spurious page-faults gracefully.
>> This is not the case for the IOMMU though (without PRI).
>> Not sure this explains everything though. If that is the problem,
>> then during a mapping that changes page-sizes, a TLB flush is
>> needed, similarly to the one Longpeng did manually.
> 
> I have been working with Longpeng on this issue these days. It turned
> out that your guess is right. The PMD is first cleared but not flushed
> before a new valid one is set. The previous entry might be cached in the
> paging structure caches hence leads to disaster.
> 
> In __domain_mapping():
> 
> 2352         /*
> 2353          * Ensure that old small page tables are
> 2354          * removed to make room for superpage(s).
> 2355          * We're adding new large pages, so make sure
> 2356          * we don't remove their parent tables.
> 2357          */
> 2358         dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
> 2359                                largepage_lvl + 1);
> 
> I guess adding a cache flush operation after PMD switching should solve
> the problem.
> 
> I am still not clear about this comment:
> 
> "
> This is possible for MMUs since they allow the software to handle
> spurious page-faults gracefully. This is not the case for the IOMMU
> though (without PRI).
> "
> 
> Can you please shed more light on this?

I was looking at the code in more detail, and apparently my concern
is incorrect.

I was under the assumption that the IOMMU map/unmap can merge/split
(specifically split) huge-pages. For instance, if you map 2MB and
then unmap 4KB out of the 2MB, then you would split the hugepage
and keep the rest of the mappings alive. This is the way MMU is
usually managed. To my defense, I also saw such partial unmappings
in Longpeng’s first scenario.

If this was possible, then you would have a case in which out of 2MB
(for instance), 4KB were unmapped, and you need to split the 2MB
hugepage into 4KB pages. If you try to clear the PMD, flush, and then
set the PMD to point to a table with live 4KB PTEs, you can have
an interim state in which the PMD is not present. DMAs that arrive
at this stage might fault, and without PRI (and device support)
you do not have a way of restarting the DMA after the hugepage split
is completed.

Anyhow, this concern is apparently not relevant. I guess I was too
naive to assume the IOMMU management is similar to the MMU. I now
see that there is a comment in intel_iommu_unmap() saying:

/* Cope with horrid API which requires us to unmap more than the
   size argument if it happens to be a large-page mapping. */

Regards,
Nadav




Re: A problem of Intel IOMMU hardware ?

2021-03-26 Thread Lu Baolu

Hi Nadav,

On 3/19/21 12:46 AM, Nadav Amit wrote:

So here is my guess:

Intel probably used as a basis for the IOTLB an implementation of
some other (regular) TLB design.

Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):

"Software wishing to prevent this uncertainty should not write to
a paging-structure entry in a way that would change, for any linear
address, both the page size and either the page frame, access rights,
or other attributes.”


Now the aforementioned uncertainty is a bit different (multiple
*valid*  translations of a single address). Yet, perhaps this is
yet another thing that might happen.

 From a brief look on the handling of MMU (not IOMMU) hugepages
in Linux, indeed the PMD is first cleared and flushed before a
new valid PMD is set. This is possible for MMUs since they
allow the software to handle spurious page-faults gracefully.
This is not the case for the IOMMU though (without PRI).

Not sure this explains everything though. If that is the problem,
then during a mapping that changes page-sizes, a TLB flush is
needed, similarly to the one Longpeng did manually.


I have been working with Longpeng on this issue these days. It turned
out that your guess is right. The PMD is first cleared but not flushed
before a new valid one is set. The previous entry might still be cached in the
paging-structure caches, which leads to disaster.

In __domain_mapping():

2352         /*
2353          * Ensure that old small page tables are
2354          * removed to make room for superpage(s).
2355          * We're adding new large pages, so make sure
2356          * we don't remove their parent tables.
2357          */
2358         dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
2359                                largepage_lvl + 1);


I guess adding a cache flush operation after PMD switching should solve
the problem.

I am still not clear about this comment:

"
 This is possible for MMUs since they allow the software to handle
 spurious page-faults gracefully. This is not the case for the IOMMU
 though (without PRI).
"

Can you please shed more light on this?

Best regards,
baolu


RE: A problem of Intel IOMMU hardware ?

2021-03-21 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)


> -Original Message-
> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> Sent: Monday, March 22, 2021 7:51 AM
> To: 'Nadav Amit' 
> Cc: Tian, Kevin ; chenjiashang
> ; David Woodhouse ;
> io...@lists.linux-foundation.org; LKML ;
> alex.william...@redhat.com; Gonglei (Arei) ;
> w...@kernel.org; 'Lu Baolu' ; 'Joerg Roedel'
> 
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> Hi Nadav,
> 
> > -Original Message-
> > From: Nadav Amit [mailto:nadav.a...@gmail.com]
> > Sent: Friday, March 19, 2021 12:46 AM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > 
> > Cc: Tian, Kevin ; chenjiashang
> > ; David Woodhouse ;
> > io...@lists.linux-foundation.org; LKML ;
> > alex.william...@redhat.com; Gonglei (Arei) ;
> > w...@kernel.org
> > Subject: Re: A problem of Intel IOMMU hardware ?
> >
> >
> >
> > > On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure
> > > Service
> > Product Dept.)  wrote:
> > >
> > >
> > >
> > >> -Original Message-
> > >> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > >> Sent: Thursday, March 18, 2021 4:56 PM
> > >> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >> ; Nadav Amit 
> > >> Cc: chenjiashang ; David Woodhouse
> > >> ; io...@lists.linux-foundation.org; LKML
> > >> ; alex.william...@redhat.com; Gonglei
> > >> (Arei) ; w...@kernel.org
> > >> Subject: RE: A problem of Intel IOMMU hardware ?
> > >>
> > >>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>> 
> > >>>
> > >>>> -Original Message-
> > >>>> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > >>>> Sent: Thursday, March 18, 2021 4:27 PM
> > >>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>>> ; Nadav Amit 
> > >>>> Cc: chenjiashang ; David Woodhouse
> > >>>> ; io...@lists.linux-foundation.org; LKML
> > >>>> ; alex.william...@redhat.com;
> > >>>> Gonglei
> > >>> (Arei)
> > >>>> ; w...@kernel.org
> > >>>> Subject: RE: A problem of Intel IOMMU hardware ?
> > >>>>
> > >>>>> From: iommu  On Behalf
> > >>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>>>>
> > >>>>>> 2. Consider ensuring that the problem is not somehow related to
> > >>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead
> > >>>>>> of
> > >>>> qi_flush_iotlb().
> > >>>>>>
> > >>>>>
> > >>>>> I tried to force to use __iommu_flush_iotlb(), but maybe
> > >>>>> something wrong, the system crashed, so I prefer to lower the
> > >>>>> priority of this
> > >>> operation.
> > >>>>>
> > >>>>
> > >>>> The VT-d spec clearly says that register-based invalidation can
> > >>>> be used only
> > >>> when
> > >>>> queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> > >>>> provide
> > >>> an
> > >>>> option to disable queued-invalidation though, when the hardware
> > >>>> is
> > >>> capable. If you
> > >>>> really want to try, tweak the code in intel_iommu_init_qi.
> > >>>>
> > >>>
> > >>> Hi Kevin,
> > >>>
> > >>> Thanks to point out this. Do you have any ideas about this problem ?
> > >>> I tried to descript the problem much clear in my reply to Alex,
> > >>> hope you could have a look if you're interested.
> > >>>
> > >>
> > >> btw I saw you used 4.18 kernel in this test. What about latest kernel?
> > >>
> > >
> > > Not test yet. It's hard to upgrade kernel in our environment.
> > >
> > >> Also one way to separate sw/hw bug is to trace the low level
> > >> interface (e.g.,
> > >> qi_flush_iotlb) which actually sends invalidation descriptors to
> > >> the IOMMU hardware. Check the window between b) and c) and see
> > >> whether the software does the right thing as expected there.

RE: A problem of Intel IOMMU hardware ?

2021-03-21 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Hi Nadav,

> -Original Message-
> From: Nadav Amit [mailto:nadav.a...@gmail.com]
> Sent: Friday, March 19, 2021 12:46 AM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> 
> Cc: Tian, Kevin ; chenjiashang
> ; David Woodhouse ;
> io...@lists.linux-foundation.org; LKML ;
> alex.william...@redhat.com; Gonglei (Arei) ;
> w...@kernel.org
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> 
> 
> > On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.)  wrote:
> >
> >
> >
> >> -Original Message-
> >> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> >> Sent: Thursday, March 18, 2021 4:56 PM
> >> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >> ; Nadav Amit 
> >> Cc: chenjiashang ; David Woodhouse
> >> ; io...@lists.linux-foundation.org; LKML
> >> ; alex.william...@redhat.com; Gonglei
> >> (Arei) ; w...@kernel.org
> >> Subject: RE: A problem of Intel IOMMU hardware ?
> >>
> >>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>> 
> >>>
> >>>> -Original Message-
> >>>> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> >>>> Sent: Thursday, March 18, 2021 4:27 PM
> >>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>>> ; Nadav Amit 
> >>>> Cc: chenjiashang ; David Woodhouse
> >>>> ; io...@lists.linux-foundation.org; LKML
> >>>> ; alex.william...@redhat.com; Gonglei
> >>> (Arei)
> >>>> ; w...@kernel.org
> >>>> Subject: RE: A problem of Intel IOMMU hardware ?
> >>>>
> >>>>> From: iommu  On Behalf
> >>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>>>>
> >>>>>> 2. Consider ensuring that the problem is not somehow related to
> >>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead of
> >>>> qi_flush_iotlb().
> >>>>>>
> >>>>>
> >>>>> I tried to force to use __iommu_flush_iotlb(), but maybe something
> >>>>> wrong, the system crashed, so I prefer to lower the priority of
> >>>>> this
> >>> operation.
> >>>>>
> >>>>
> >>>> The VT-d spec clearly says that register-based invalidation can be
> >>>> used only
> >>> when
> >>>> queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> >>>> provide
> >>> an
> >>>> option to disable queued-invalidation though, when the hardware is
> >>> capable. If you
> >>>> really want to try, tweak the code in intel_iommu_init_qi.
> >>>>
> >>>
> >>> Hi Kevin,
> >>>
> >>> Thanks to point out this. Do you have any ideas about this problem ?
> >>> I tried to descript the problem much clear in my reply to Alex, hope
> >>> you could have a look if you're interested.
> >>>
> >>
> >> btw I saw you used 4.18 kernel in this test. What about latest kernel?
> >>
> >
> > Not test yet. It's hard to upgrade kernel in our environment.
> >
> >> Also one way to separate sw/hw bug is to trace the low level
> >> interface (e.g.,
> >> qi_flush_iotlb) which actually sends invalidation descriptors to the
> >> IOMMU hardware. Check the window between b) and c) and see whether
> >> the software does the right thing as expected there.
> >>
> >
> > We add some log in iommu driver these days, the software seems fine.
> > But we didn't look inside the qi_submit_sync yet, I'll try it tonight.
> 
> So here is my guess:
> 
> Intel probably used as a basis for the IOTLB an implementation of some other
> (regular) TLB design.
> 
> Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
> 
> "Software wishing to prevent this uncertainty should not write to a
> paging-structure entry in a way that would change, for any linear address, 
> both the
> page size and either the page frame, access rights, or other attributes.”
> 
> 
> Now the aforementioned uncertainty is a bit different (multiple
> *valid* translations of a single address). Yet, perhaps this is yet another 
> thing that
> might happen.
> 
> From a brief look on the handling of MMU (not IOMMU) hugepages in Linux, indeed
> the PMD is first cleared and flushed before a new valid PMD is set. This is
> possible for MMUs since they allow the software to handle spurious page-faults
> gracefully. This is not the case for the IOMMU though (without PRI).

Re: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Lu Baolu

On 3/18/21 4:56 PM, Tian, Kevin wrote:

From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)



-Original Message-
From: Tian, Kevin [mailto:kevin.t...@intel.com]
Sent: Thursday, March 18, 2021 4:27 PM
To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
; Nadav Amit 
Cc: chenjiashang ; David Woodhouse
; io...@lists.linux-foundation.org; LKML
; alex.william...@redhat.com; Gonglei (Arei); w...@kernel.org
Subject: RE: A problem of Intel IOMMU hardware ?


From: iommu  On Behalf Of
Longpeng (Mike, Cloud Infrastructure Service Product Dept.)


2. Consider ensuring that the problem is not somehow related to
queued invalidations. Try to use __iommu_flush_iotlb() instead of

qi_flush_iotlb().




I tried to force to use __iommu_flush_iotlb(), but maybe something
wrong, the system crashed, so I prefer to lower the priority of this

operation.




The VT-d spec clearly says that register-based invalidation can be used only

when

queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide

an

option to disable queued-invalidation though, when the hardware is

capable. If you

really want to try, tweak the code in intel_iommu_init_qi.



Hi Kevin,

Thanks to point out this. Do you have any ideas about this problem ? I tried
to descript the problem much clear in my reply to Alex, hope you could have
a look if you're interested.



btw I saw you used 4.18 kernel in this test. What about latest kernel?

Also one way to separate sw/hw bug is to trace the low level interface (e.g.,
qi_flush_iotlb) which actually sends invalidation descriptors to the IOMMU
hardware. Check the window between b) and c) and see whether the
software does the right thing as expected there.


Yes. It's better if we can reproduce this with the latest kernel which
has debugfs files to expose page tables and the invalidation queues etc.

Best regards,
baolu


Re: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Nadav Amit


> On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure Service 
> Product Dept.)  wrote:
> 
> 
> 
>> -Original Message-
>> From: Tian, Kevin [mailto:kevin.t...@intel.com]
>> Sent: Thursday, March 18, 2021 4:56 PM
>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>> ; Nadav Amit 
>> Cc: chenjiashang ; David Woodhouse
>> ; io...@lists.linux-foundation.org; LKML
>> ; alex.william...@redhat.com; Gonglei (Arei)
>> ; w...@kernel.org
>> Subject: RE: A problem of Intel IOMMU hardware ?
>> 
>>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>> 
>>> 
>>>> -Original Message-
>>>> From: Tian, Kevin [mailto:kevin.t...@intel.com]
>>>> Sent: Thursday, March 18, 2021 4:27 PM
>>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>>> ; Nadav Amit 
>>>> Cc: chenjiashang ; David Woodhouse
>>>> ; io...@lists.linux-foundation.org; LKML
>>>> ; alex.william...@redhat.com; Gonglei
>>> (Arei)
>>>> ; w...@kernel.org
>>>> Subject: RE: A problem of Intel IOMMU hardware ?
>>>> 
>>>>> From: iommu  On Behalf
>>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>>>> 
>>>>>> 2. Consider ensuring that the problem is not somehow related to
>>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead
>>>>>> of
>>>> qi_flush_iotlb().
>>>>>> 
>>>>> 
>>>>> I tried to force to use __iommu_flush_iotlb(), but maybe something
>>>>> wrong, the system crashed, so I prefer to lower the priority of
>>>>> this
>>> operation.
>>>>> 
>>>> 
>>>> The VT-d spec clearly says that register-based invalidation can be
>>>> used only
>>> when
>>>> queued-invalidations are not enabled. Intel-IOMMU driver doesn't
>>>> provide
>>> an
>>>> option to disable queued-invalidation though, when the hardware is
>>> capable. If you
>>>> really want to try, tweak the code in intel_iommu_init_qi.
>>>> 
>>> 
>>> Hi Kevin,
>>> 
>>> Thanks to point out this. Do you have any ideas about this problem ? I
>>> tried to descript the problem much clear in my reply to Alex, hope you
>>> could have a look if you're interested.
>>> 
>> 
>> btw I saw you used 4.18 kernel in this test. What about latest kernel?
>> 
> 
> Not test yet. It's hard to upgrade kernel in our environment.
> 
>> Also one way to separate sw/hw bug is to trace the low level interface (e.g.,
>> qi_flush_iotlb) which actually sends invalidation descriptors to the IOMMU
>> hardware. Check the window between b) and c) and see whether the software 
>> does
>> the right thing as expected there.
>> 
> 
> We add some log in iommu driver these days, the software seems fine. But we
> didn't look inside the qi_submit_sync yet, I'll try it tonight.

So here is my guess:

Intel probably used as a basis for the IOTLB an implementation of
some other (regular) TLB design.

Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):

"Software wishing to prevent this uncertainty should not write to
a paging-structure entry in a way that would change, for any linear
address, both the page size and either the page frame, access rights,
or other attributes.”


Now the aforementioned uncertainty is a bit different (multiple
*valid* translations of a single address). Yet, perhaps this is
yet another thing that might happen.

From a brief look on the handling of MMU (not IOMMU) hugepages
in Linux, indeed the PMD is first cleared and flushed before a
new valid PMD is set. This is possible for MMUs since they
allow the software to handle spurious page-faults gracefully.
This is not the case for the IOMMU though (without PRI).

Not sure this explains everything though. If that is the problem,
then during a mapping that changes page-sizes, a TLB flush is
needed, similarly to the one Longpeng did manually.






RE: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)


> -Original Message-
> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> Sent: Thursday, March 18, 2021 4:56 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> ; Nadav Amit 
> Cc: chenjiashang ; David Woodhouse
> ; io...@lists.linux-foundation.org; LKML
> ; alex.william...@redhat.com; Gonglei (Arei)
> ; w...@kernel.org
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> > From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > 
> >
> > > -Original Message-
> > > From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > > Sent: Thursday, March 18, 2021 4:27 PM
> > > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > ; Nadav Amit 
> > > Cc: chenjiashang ; David Woodhouse
> > > ; io...@lists.linux-foundation.org; LKML
> > > ; alex.william...@redhat.com; Gonglei
> > (Arei)
> > > ; w...@kernel.org
> > > Subject: RE: A problem of Intel IOMMU hardware ?
> > >
> > > > From: iommu  On Behalf
> > > > Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > >
> > > > > 2. Consider ensuring that the problem is not somehow related to
> > > > > queued invalidations. Try to use __iommu_flush_iotlb() instead
> > > > > of
> > > qi_flush_iotlb().
> > > > >
> > > >
> > > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > > wrong, the system crashed, so I prefer to lower the priority of
> > > > this
> > operation.
> > > >
> > >
> > > The VT-d spec clearly says that register-based invalidation can be
> > > used only
> > when
> > > queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> > > provide
> > an
> > > option to disable queued-invalidation though, when the hardware is
> > capable. If you
> > > really want to try, tweak the code in intel_iommu_init_qi.
> > >
> >
> > Hi Kevin,
> >
> > Thanks to point out this. Do you have any ideas about this problem ? I
> > tried to descript the problem much clear in my reply to Alex, hope you
> > could have a look if you're interested.
> >
> 
> btw I saw you used 4.18 kernel in this test. What about latest kernel?
> 

Not tested yet. It's hard to upgrade the kernel in our environment.

> Also one way to separate sw/hw bug is to trace the low level interface (e.g.,
> qi_flush_iotlb) which actually sends invalidation descriptors to the IOMMU
> hardware. Check the window between b) and c) and see whether the software does
> the right thing as expected there.
> 

We added some logs in the iommu driver these days; the software seems fine. But we
haven't looked inside qi_submit_sync yet, I'll try it tonight.

> Thanks
> Kevin


RE: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Tian, Kevin
> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> 
> 
> > -Original Message-
> > From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > Sent: Thursday, March 18, 2021 4:27 PM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > ; Nadav Amit 
> > Cc: chenjiashang ; David Woodhouse
> > ; io...@lists.linux-foundation.org; LKML
> > ; alex.william...@redhat.com; Gonglei
> (Arei)
> > ; w...@kernel.org
> > Subject: RE: A problem of Intel IOMMU hardware ?
> >
> > > From: iommu  On Behalf Of
> > > Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >
> > > > 2. Consider ensuring that the problem is not somehow related to
> > > > queued invalidations. Try to use __iommu_flush_iotlb() instead of
> > qi_flush_iotlb().
> > > >
> > >
> > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > wrong, the system crashed, so I prefer to lower the priority of this
> operation.
> > >
> >
> > The VT-d spec clearly says that register-based invalidation can be used only
> when
> > queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide
> an
> > option to disable queued-invalidation though, when the hardware is
> capable. If you
> > really want to try, tweak the code in intel_iommu_init_qi.
> >
> 
> Hi Kevin,
> 
> Thanks to point out this. Do you have any ideas about this problem ? I tried
> to descript the problem much clear in my reply to Alex, hope you could have
> a look if you're interested.
> 

btw I saw you used 4.18 kernel in this test. What about latest kernel?

Also one way to separate sw/hw bug is to trace the low level interface (e.g.,
qi_flush_iotlb) which actually sends invalidation descriptors to the IOMMU
hardware. Check the window between b) and c) and see whether the
software does the right thing as expected there. 

Thanks
Kevin


RE: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)


> -Original Message-
> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> Sent: Thursday, March 18, 2021 4:43 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> ; Nadav Amit 
> Cc: chenjiashang ; David Woodhouse
> ; io...@lists.linux-foundation.org; LKML
> ; alex.william...@redhat.com; Gonglei (Arei)
> ; w...@kernel.org
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> > From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > 
> >
> >
> > > -Original Message-
> > > From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > > Sent: Thursday, March 18, 2021 4:27 PM
> > > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > ; Nadav Amit 
> > > Cc: chenjiashang ; David Woodhouse
> > > ; io...@lists.linux-foundation.org; LKML
> > > ; alex.william...@redhat.com; Gonglei
> > (Arei)
> > > ; w...@kernel.org
> > > Subject: RE: A problem of Intel IOMMU hardware ?
> > >
> > > > From: iommu  On Behalf
> > > > Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > >
> > > > > 2. Consider ensuring that the problem is not somehow related to
> > > > > queued invalidations. Try to use __iommu_flush_iotlb() instead
> > > > > of
> > > qi_flush_iotlb().
> > > > >
> > > >
> > > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > > wrong, the system crashed, so I prefer to lower the priority of
> > > > this
> > operation.
> > > >
> > >
> > > The VT-d spec clearly says that register-based invalidation can be
> > > used only
> > when
> > > queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> > > provide
> > an
> > > option to disable queued-invalidation though, when the hardware is
> > capable. If you
> > > really want to try, tweak the code in intel_iommu_init_qi.
> > >
> >
> > Hi Kevin,
> >
> > Thanks to point out this. Do you have any ideas about this problem ? I
> > tried to descript the problem much clear in my reply to Alex, hope you
> > could have a look if you're interested.
> >
> 
> I agree with Nadav. Looks this implies some stale paging structure cache 
> entry (e.g.
> PMD) is not invalidated properly. It's better if Baolu can reproduce this 
> problem in
> his local environment and then do more debug to identify whether it's a 
> software or
> hardware defect.
> 
> btw what is the device under test? Does it support ATS?
> 

The device is our offload card; it does not support the ATS capability.

> Thanks
> Kevin


RE: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Tian, Kevin
> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> 
> 
> 
> > -Original Message-
> > From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > Sent: Thursday, March 18, 2021 4:27 PM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > ; Nadav Amit 
> > Cc: chenjiashang ; David Woodhouse
> > ; io...@lists.linux-foundation.org; LKML
> > ; alex.william...@redhat.com; Gonglei
> (Arei)
> > ; w...@kernel.org
> > Subject: RE: A problem of Intel IOMMU hardware ?
> >
> > > From: iommu  On Behalf Of
> > > Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >
> > > > 2. Consider ensuring that the problem is not somehow related to
> > > > queued invalidations. Try to use __iommu_flush_iotlb() instead of
> > qi_flush_iotlb().
> > > >
> > >
> > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > wrong, the system crashed, so I prefer to lower the priority of this
> operation.
> > >
> >
> > The VT-d spec clearly says that register-based invalidation can be used only
> when
> > queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide
> an
> > option to disable queued-invalidation though, when the hardware is
> capable. If you
> > really want to try, tweak the code in intel_iommu_init_qi.
> >
> 
> Hi Kevin,
> 
> Thanks to point out this. Do you have any ideas about this problem ? I tried
> to descript the problem much clear in my reply to Alex, hope you could have
> a look if you're interested.
> 

I agree with Nadav. It looks like this implies some stale paging-structure cache entry
(e.g. a PMD) that is not invalidated properly. It would be better if Baolu can reproduce
this problem in his local environment and then do more debugging to identify whether
it's a software or hardware defect.

btw what is the device under test? Does it support ATS?

Thanks
Kevin


RE: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)


> -Original Message-
> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> Sent: Thursday, March 18, 2021 4:27 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> ; Nadav Amit 
> Cc: chenjiashang ; David Woodhouse
> ; io...@lists.linux-foundation.org; LKML
> ; alex.william...@redhat.com; Gonglei (Arei)
> ; w...@kernel.org
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> > From: iommu  On Behalf Of
> > Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >
> > > 2. Consider ensuring that the problem is not somehow related to
> > > queued invalidations. Try to use __iommu_flush_iotlb() instead of
> qi_flush_iotlb().
> > >
> >
> > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > wrong, the system crashed, so I prefer to lower the priority of this 
> > operation.
> >
> 
> The VT-d spec clearly says that register-based invalidation can be used only 
> when
> queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide an
> option to disable queued-invalidation though, when the hardware is capable. 
> If you
> really want to try, tweak the code in intel_iommu_init_qi.
> 

Hi Kevin,

Thanks for pointing this out. Do you have any ideas about this problem? I tried
to describe the problem more clearly in my reply to Alex; hope you could have
a look if you're interested.

> Thanks
> Kevin


RE: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Tian, Kevin
> From: iommu  On Behalf Of
> Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> 
> > 2. Consider ensuring that the problem is not somehow related to queued
> > invalidations. Try to use __iommu_flush_iotlb() instead of qi_flush_iotlb().
> >
> 
> I tried to force to use __iommu_flush_iotlb(), but maybe something wrong,
> the system crashed, so I prefer to lower the priority of this operation.
> 

The VT-d spec clearly says that register-based invalidation can be used
only when queued-invalidations are not enabled. Intel-IOMMU driver
doesn't provide an option to disable queued-invalidation though, when
the hardware is capable. If you really want to try, tweak the code in
intel_iommu_init_qi.

Thanks
Kevin


RE: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Hi Nadav,

> -Original Message-
> From: Nadav Amit [mailto:nadav.a...@gmail.com]
> Sent: Thursday, March 18, 2021 2:13 AM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> 
> Cc: David Woodhouse ; Lu Baolu
> ; Joerg Roedel ; w...@kernel.org;
> alex.william...@redhat.com; chenjiashang ;
> io...@lists.linux-foundation.org; Gonglei (Arei) ;
> LKML 
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> 
> 
> > On Mar 17, 2021, at 2:35 AM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.)  wrote:
> >
> > Hi Nadav,
> >
> >> -Original Message-
> >> From: Nadav Amit [mailto:nadav.a...@gmail.com]
> >>>  reproduce the problem with high probability (~50%).
> >>
> >> I saw Lu replied, and he is much more knowledgable than I am (I was
> >> just intrigued by your email).
> >>
> >> However, if I were you I would try also to remove some
> >> “optimizations” to look for the root-cause (e.g., use domain specific
> invalidations instead of page-specific).
> >>
> >
> > Good suggestion! But we did it these days, we tried to use global 
> > invalidations as
> follow:
> > iommu->flush.flush_iotlb(iommu, did, 0, 0,
> > DMA_TLB_DSI_FLUSH);
> > But can not resolve the problem.
> >
> >> The first thing that comes to my mind is the invalidation hint (ih)
> >> in iommu_flush_iotlb_psi(). I would remove it to see whether you get
> >> the failure without it.
> >
> > We also notice the IH, but the IH is always ZERO in our case, as the spec 
> > says:
> > '''
> > Paging-structure-cache entries caching second-level mappings
> > associated with the specified domain-id and the
> > second-level-input-address range are invalidated, if the Invalidation
> > Hint
> > (IH) field is Clear.
> > '''
> >
> > It seems the software is everything fine, so we've no choice but to suspect 
> > the
> hardware.
> 
> Ok, I am pretty much out of ideas. I have two more suggestions, but they are 
> much
> less likely to help. Yet, they can further help to rule out software bugs:
> 
> 1. dma_clear_pte() seems to be wrong IMHO. It should have used WRITE_ONCE()
> to prevent split-write, which might potentially cause “invalid” (partially
> cleared) PTE to be stored in the TLB. Having said that, the subsequent IOTLB 
> flush
> should have prevented the problem.
> 

Yes, using WRITE_ONCE is much safer; however, I was just testing the following
code, and it didn't resolve my problem.

static inline void dma_clear_pte(struct dma_pte *pte)
{
WRITE_ONCE(pte->val, 0ULL);
}
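
For reference, a brief, self-contained illustration of what the "split-write"
(store tearing) concern means here. This is a userspace sketch, not the driver
code, and WRITE_ONCE() in the kernel is roughly the volatile store shown below.

#include <stdint.h>

struct dma_pte { uint64_t val; };

/* A plain store: the compiler is in principle allowed to split it into several
 * smaller stores (store tearing), so a concurrently-walking IOMMU could
 * observe a partially cleared 64-bit entry. */
static inline void clear_pte_plain(struct dma_pte *pte)
{
        pte->val = 0ULL;
}

/* A volatile store, which is roughly what WRITE_ONCE() expands to: the
 * compiler has to emit the access as written, as a single full-width store on
 * a 64-bit architecture, so the entry is never observed half-cleared. */
static inline void clear_pte_once(struct dma_pte *pte)
{
        *(volatile uint64_t *)&pte->val = 0ULL;
}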

> 2. Consider ensuring that the problem is not somehow related to queued
> invalidations. Try to use __iommu_flush_iotlb() instead of qi_flush_iotlb().
> 

I tried to force the use of __iommu_flush_iotlb(), but maybe something went wrong:
the system crashed, so I'd prefer to lower the priority of this operation.

> Regards,
> Nadav


Re: A problem of Intel IOMMU hardware ?

2021-03-18 Thread Nadav Amit

> On Mar 17, 2021, at 9:46 PM, Longpeng (Mike, Cloud Infrastructure Service 
> Product Dept.)  wrote:
> 

[Snip]

> 
> NOTE, the magical thing happen...(*Operation-4*) we write the PTE
> of Operation-1 from 0 to 0x3 which means can Read/Write, and then
> we trigger DMA read again, it success and return the data of HPA 0 !!
> 
> Why we modify the older page table would make sense ? As we
> have discussed previously, the cache flush part of the driver is correct,
> it call flush_iotlb after (b) and no need to flush after (c). But the result
> of the experiment shows the older page table or older caches is effective
> actually.
> 
> Any ideas ?

Interesting. Sounds as if there is some page-walk cache that was not
invalidated properly.





RE: A problem of Intel IOMMU hardware ?

2021-03-17 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Hi guys,

I provide more information, please see below

> -Original Message-
> From: Lu Baolu [mailto:baolu...@linux.intel.com]
> Sent: Thursday, March 18, 2021 10:59 AM
> To: Alex Williamson 
> Cc: baolu...@linux.intel.com; Longpeng (Mike, Cloud Infrastructure Service 
> Product
> Dept.) ; dw...@infradead.org; j...@8bytes.org;
> w...@kernel.org; io...@lists.linux-foundation.org; LKML
> ; Gonglei (Arei) ;
> chenjiashang 
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> Hi Alex,
> 
> On 3/17/21 11:18 PM, Alex Williamson wrote:
> >>>   {MAP,   0x0, 0xc000}, - (b)
> >>>   use GDB to pause at here, and then DMA read
> >>> IOVA=0,
> >> IOVA 0 seems to be a special one. Have you verified with other
> >> addresses than IOVA 0?
> > It is???  That would be a problem.
> >
> 
> No problem from hardware point of view as far as I can see. Just thought about
> software might handle it specially.
> 

We simplified the reproducer; the following map/unmap sequence can also
reproduce the problem (a C sketch of this loop using the VFIO type1 API follows
the loop below).

1. use 2M hugetlbfs to mmap 4G memory

2. run the while loop:
While (1) {
DMA MAP (0, 0xa) - - - - - - - - - - - - - -(a)
DMA UNMAP (0, 0xa) - - - - - - - - - - - (b)
  Operation-1 : dump DMAR table
DMA MAP (0, 0xc000) - - - - - - - - - - -(c)
  Operation-2 :
 use GDB to pause at here, then DMA read IOVA=0,
 sometimes DMA success (as expected),
 but sometimes DMA error (report not-present).
  Operation-3 : dump DMAR table
  Operation-4 (when DMA error) : please see below
DMA UNMAP (0, 0xc000) - - - - - - - - -(d)
}
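
A trimmed C sketch of the loop above using the VFIO type1 API. The container and
group setup, the DMA-read step, and error handling are omitted; dma_map(),
dma_unmap() and reproduce_loop() are local helper names, and RANGE_A_SIZE /
RANGE_B_SIZE are arbitrary placeholders because the archive truncates the real
sizes.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Placeholders: the archive truncates the real range sizes used above. */
#define RANGE_A_SIZE  (1ULL << 30)
#define RANGE_B_SIZE  (3ULL << 30)

static void dma_map(int container, uint64_t iova, uint64_t size, uint64_t vaddr)
{
        struct vfio_iommu_type1_dma_map map = {
                .argsz = sizeof(map),
                .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                .vaddr = vaddr,
                .iova  = iova,
                .size  = size,
        };
        ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
}

static void dma_unmap(int container, uint64_t iova, uint64_t size)
{
        struct vfio_iommu_type1_dma_unmap unmap = {
                .argsz = sizeof(unmap),
                .iova  = iova,
                .size  = size,
        };
        ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap);
}

/* vaddr points at the 4G of 2M-hugetlb memory mmap'ed in step 1. */
void reproduce_loop(int container, uint64_t vaddr)
{
        for (;;) {
                dma_map(container, 0, RANGE_A_SIZE, vaddr);   /* (a) */
                dma_unmap(container, 0, RANGE_A_SIZE);        /* (b) */
                dma_map(container, 0, RANGE_B_SIZE, vaddr);   /* (c) DMA-read IOVA 0 here */
                dma_unmap(container, 0, RANGE_B_SIZE);        /* (d) */
        }
}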

The DMAR table of Operation-1 is (only showing the entries for IOVA 0):

PML4: 0x1a34fbb003
  PDPE: 0x1a34fbb003
   PDE: 0x1a34fbf003
    PTE: 0x0

And the table of Operation-3 is:

PML4: 0x1a34fbb003
  PDPE: 0x1a34fbb003
   PDE: 0x15ec00883  <-- 2M superpage

So we can see that IOVA 0 is mapped, but the DMA read reports an error:

dmar_fault: 131757 callbacks suppressed
DRHD: handling fault status reg 402
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
DRHD: handling fault status reg 600
DRHD: handling fault status reg 602
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set

NOTE, the magical thing happens... (*Operation-4*) we write the PTE
of Operation-1 from 0 to 0x3, which means it allows Read/Write, and then
we trigger the DMA read again; it succeeds and returns the data of HPA 0 !!

Why would modifying the older page table make any difference? As we
have discussed previously, the cache flush part of the driver is correct:
it calls flush_iotlb after (b) and there is no need to flush after (c). But the
result of the experiment shows that the older page table or older caches are
still in effect.

Any ideas ?

> Best regards,
> baolu


Re: A problem of Intel IOMMU hardware ?

2021-03-17 Thread Lu Baolu

Hi Nadav,

On 3/18/21 2:12 AM, Nadav Amit wrote:




On Mar 17, 2021, at 2:35 AM, Longpeng (Mike, Cloud Infrastructure Service Product 
Dept.)  wrote:

Hi Nadav,


-Original Message-
From: Nadav Amit [mailto:nadav.a...@gmail.com]

  reproduce the problem with high probability (~50%).


I saw Lu replied, and he is much more knowledgable than I am (I was just 
intrigued
by your email).

However, if I were you I would try also to remove some “optimizations” to look 
for
the root-cause (e.g., use domain specific invalidations instead of 
page-specific).



Good suggestion! But we did it these days, we tried to use global invalidations 
as follow:
iommu->flush.flush_iotlb(iommu, did, 0, 0,
DMA_TLB_DSI_FLUSH);
But can not resolve the problem.


The first thing that comes to my mind is the invalidation hint (ih) in
iommu_flush_iotlb_psi(). I would remove it to see whether you get the failure
without it.


We also notice the IH, but the IH is always ZERO in our case, as the spec says:
'''
Paging-structure-cache entries caching second-level mappings associated with 
the specified
domain-id and the second-level-input-address range are invalidated, if the 
Invalidation Hint
(IH) field is Clear.
'''

It seems the software is everything fine, so we've no choice but to suspect the 
hardware.


Ok, I am pretty much out of ideas. I have two more suggestions, but
they are much less likely to help. Yet, they can further help to rule
out software bugs:

1. dma_clear_pte() seems to be wrong IMHO. It should have used WRITE_ONCE()
to prevent split-write, which might potentially cause “invalid” (partially
cleared) PTE to be stored in the TLB. Having said that, the subsequent
IOTLB flush should have prevented the problem.


Agreed. The pte read/write should use READ/WRITE_ONCE() instead.



2. Consider ensuring that the problem is not somehow related to queued
invalidations. Try to use __iommu_flush_iotlb() instead of
qi_flush_iotlb().

Regards,
Nadav



Best regards,
baolu


Re: A problem of Intel IOMMU hardware ?

2021-03-17 Thread Lu Baolu

Hi Alex,

On 3/17/21 11:18 PM, Alex Williamson wrote:

  {MAP,   0x0, 0xc000}, - (b)
  use GDB to pause at here, and then DMA read IOVA=0,

IOVA 0 seems to be a special one. Have you verified with other addresses
than IOVA 0?

It is???  That would be a problem.



No problem from the hardware point of view as far as I can see. I just
thought the software might handle it specially.

Best regards,
baolu


Re: A problem of Intel IOMMU hardware ?

2021-03-17 Thread Nadav Amit


> On Mar 17, 2021, at 2:35 AM, Longpeng (Mike, Cloud Infrastructure Service 
> Product Dept.)  wrote:
> 
> Hi Nadav,
> 
>> -Original Message-
>> From: Nadav Amit [mailto:nadav.a...@gmail.com]
>>>  reproduce the problem with high probability (~50%).
>> 
>> I saw Lu replied, and he is much more knowledgable than I am (I was just 
>> intrigued
>> by your email).
>> 
>> However, if I were you I would try also to remove some “optimizations” to 
>> look for
>> the root-cause (e.g., use domain specific invalidations instead of 
>> page-specific).
>> 
> 
> Good suggestion! But we did it these days, we tried to use global 
> invalidations as follow:
>   iommu->flush.flush_iotlb(iommu, did, 0, 0,
>   DMA_TLB_DSI_FLUSH);
> But can not resolve the problem.
> 
>> The first thing that comes to my mind is the invalidation hint (ih) in
>> iommu_flush_iotlb_psi(). I would remove it to see whether you get the failure
>> without it.
> 
> We also notice the IH, but the IH is always ZERO in our case, as the spec 
> says:
> '''
> Paging-structure-cache entries caching second-level mappings associated with 
> the specified
> domain-id and the second-level-input-address range are invalidated, if the 
> Invalidation Hint
> (IH) field is Clear.
> '''
> 
> It seems the software is everything fine, so we've no choice but to suspect 
> the hardware.

Ok, I am pretty much out of ideas. I have two more suggestions, but
they are much less likely to help. Yet, they can further help to rule
out software bugs:

1. dma_clear_pte() seems to be wrong IMHO. It should have used WRITE_ONCE()
to prevent split-write, which might potentially cause “invalid” (partially
cleared) PTE to be stored in the TLB. Having said that, the subsequent
IOTLB flush should have prevented the problem.

2. Consider ensuring that the problem is not somehow related to queued
invalidations. Try to use __iommu_flush_iotlb() instead of
qi_flush_iotlb().

Regards,
Nadav




Re: A problem of Intel IOMMU hardware ?

2021-03-17 Thread Alex Williamson
On Wed, 17 Mar 2021 13:16:58 +0800
Lu Baolu  wrote:

> Hi Longpeng,
> 
> On 3/17/21 11:16 AM, Longpeng (Mike, Cloud Infrastructure Service 
> Product Dept.) wrote:
> > Hi guys,
> > 
> > We find the Intel iommu cache (i.e. iotlb) maybe works wrong in a special
> > situation, it would cause DMA fails or get wrong data.
> > 
> > The reproducer (based on Alex's vfio testsuite[1]) is in attachment, it can
> > reproduce the problem with high probability (~50%).
> > 
> > The machine we used is:
> > processor   : 47
> > vendor_id   : GenuineIntel
> > cpu family  : 6
> > model   : 85
> > model name  : Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
> > stepping: 4
> > microcode   : 0x269
> > 
> > And the iommu capability reported is:
> > ver 1:0 cap 8d2078c106f0466 ecap f020df
> > (caching mode = 0 , page-selective invalidation = 1)
> > 
> > (The problem is also on 'Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz' and
> > 'Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz')
> > 
> > We run the reproducer on Linux 4.18 and it works as follow:
> > 
> > Step 1. alloc 4G *2M-hugetlb* memory (N.B. no problem with 4K-page mapping) 
> >  
> 
> I don't understand 2M-hugetlb here means exactly. The IOMMU hardware
> supports both 2M and 1G super page. The mapping physical memory is 4G.
> Why couldn't it use 1G super page?
> 
> > Step 2. DMA Map 4G memory
> > Step 3.
> >  while (1) {
> >  {UNMAP, 0x0, 0xa},  (a)
> >  {UNMAP, 0xc, 0xbff4},  
> 
> Have these two ranges been mapped before? Does the IOMMU driver
> complains when you trying to unmap a range which has never been
> mapped? The IOMMU driver implicitly assumes that mapping and
> unmapping are paired.
>
> >  {MAP,   0x0, 0xc000}, - (b)
> >  use GDB to pause at here, and then DMA read IOVA=0,  
> 
> IOVA 0 seems to be a special one. Have you verified with other addresses
> than IOVA 0?

It is???  That would be a problem.

> >  sometimes DMA success (as expected),
> >  but sometimes DMA error (report not-present).
> >  {UNMAP, 0x0, 0xc000}, - (c)
> >  {MAP,   0x0, 0xa},
> >  {MAP,   0xc, 0xbff4},
> >  }

The interesting thing about this test sequence seems to be how it will
implicitly switch between super pages and regular pages.  Also note
that the test is using the original vfio type1 API rather than the v2
API that's more commonly used today.  This older API allows unmaps to
split mappings, but we don't really know how much the IOMMU is
unmapping without reading the unmap.size field returned by the ioctl.
What I expect to happen is that the IOMMU will make use of superpages
when mapping the full range.  When we unmap {0-b}, that's likely
going to be covered by a 2M (or more) superpage, therefore the unmap
will actually unmap {0-1f}.  The subsequent unmap starting at
0xc might already have {a-1f} unmapped.  However, when we
then map {0 - b} the IOMMU will (should) switch back to 4K pages.
The mapping at 0xc should use 4K pages up through 0x1f, then
might switch to 2M or 1G pages depending on physical memory layout.  So
the {0-2MB} IOVA range could be switching back and forth between a
superpage mapping and 4K mapping, and I can certainly imagine that
could lead to page table, if not cache management bugs.  Thanks,

Alex
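
To make the unmap.size point above concrete: with the v1 type1 API the kernel
writes back how much was actually unmapped, which is the only way to observe
that an unmap request landing inside a superpage took out the whole superpage.
A minimal sketch, assuming a container fd is already set up and with error
handling omitted; unmap_and_report() is just an illustrative helper name.

#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Request an unmap and report how much the IOMMU layer actually unmapped. */
static void unmap_and_report(int container, uint64_t iova, uint64_t size)
{
        struct vfio_iommu_type1_dma_unmap unmap = {
                .argsz = sizeof(unmap),
                .iova  = iova,
                .size  = size,
        };

        if (ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap) == 0)
                /* For a superpage mapping, unmap.size can come back larger
                 * than the size that was asked for. */
                printf("asked to unmap 0x%llx, actually unmapped 0x%llx\n",
                       (unsigned long long)size,
                       (unsigned long long)unmap.size);
}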


> > 
> > The DMA read operations sholud success between (b) and (c), it should NOT 
> > report
> > not-present at least!
> > 
> > After analysis the problem, we think maybe it's caused by the Intel iommu 
> > iotlb.
> > It seems the DMA Remapping hardware still uses the IOTLB or other caches of 
> > (a).
> > 
> > When do DMA unmap at (a), the iotlb will be flush:
> >  intel_iommu_unmap
> >  domain_unmap
> >  iommu_flush_iotlb_psi
> > 
> > When do DMA map at (b), no need to flush the iotlb according to the 
> > capability
> > of this iommu:
> >  intel_iommu_map
> >  domain_pfn_mapping
> >  domain_mapping
> >  __mapping_notify_one
> >  if (cap_caching_mode(iommu->cap)) // FALSE
> >  iommu_flush_iotlb_psi  
> 
> That's true. The iotlb flushing is not needed in case of PTE been
> changed from non-present to present unless caching mode.
> 
> > But the problem will disappear if we FORCE flush here. So we suspect the 
> > iommu
> > hardware.
> > 
> > Do you have any suggestion ?  
> 
> Best regards,
> baolu
> 



RE: A problem of Intel IOMMU hardware ?

2021-03-17 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Hi Baolu,

> -Original Message-
> From: Lu Baolu [mailto:baolu...@linux.intel.com]
> Sent: Wednesday, March 17, 2021 1:17 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> ; dw...@infradead.org; j...@8bytes.org;
> w...@kernel.org; alex.william...@redhat.com
> Cc: baolu...@linux.intel.com; io...@lists.linux-foundation.org; LKML
> ; Gonglei (Arei) ;
> chenjiashang 
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> Hi Longpeng,
> 
> On 3/17/21 11:16 AM, Longpeng (Mike, Cloud Infrastructure Service Product 
> Dept.)
> wrote:
> > Hi guys,
> >
> > We find the Intel iommu cache (i.e. iotlb) maybe works wrong in a
> > special situation, it would cause DMA fails or get wrong data.
> >
> > The reproducer (based on Alex's vfio testsuite[1]) is in attachment,
> > it can reproduce the problem with high probability (~50%).
> >
> > The machine we used is:
> > processor   : 47
> > vendor_id   : GenuineIntel
> > cpu family  : 6
> > model   : 85
> > model name  : Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
> > stepping: 4
> > microcode   : 0x269
> >
> > And the iommu capability reported is:
> > ver 1:0 cap 8d2078c106f0466 ecap f020df (caching mode = 0 ,
> > page-selective invalidation = 1)
> >
> > (The problem is also on 'Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz'
> > and
> > 'Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz')
> >
> > We run the reproducer on Linux 4.18 and it works as follow:
> >
> > Step 1. alloc 4G *2M-hugetlb* memory (N.B. no problem with 4K-page
> > mapping)
> 
> I don't understand 2M-hugetlb here means exactly. The IOMMU hardware
> supports both 2M and 1G super page. The mapping physical memory is 4G.
> Why couldn't it use 1G super page?
> 

We use hugetlbfs (it supports 1G and 2M, but we choose 2M in our case) to request
the memory in userspace:
    vaddr = (unsigned long)mmap(0, MAP_SIZE, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, 0, 0);

Yep, the IOMMU supports both 2M and 1G superpages; we just haven't tested the 1G case
yet, because our production systems use 2M hugetlbfs pages.

> > Step 2. DMA Map 4G memory
> > Step 3.
> >  while (1) {
> >  {UNMAP, 0x0, 0xa},  (a)
> >  {UNMAP, 0xc, 0xbff4},
> 
> Have these two ranges been mapped before? Does the IOMMU driver complains
> when you trying to unmap a range which has never been mapped? The IOMMU
> driver implicitly assumes that mapping and unmapping are paired.
> 

Of course, yes, see Step 2: we DMA-mapped all the memory (4G) before the while
loop.
The driver never complained during MAP and UNMAP operations.

> >  {MAP,   0x0, 0xc000}, - (b)
> >  use GDB to pause at here, and then DMA read IOVA=0,
> 
> IOVA 0 seems to be a special one. Have you verified with other addresses than
> IOVA 0?
> 

Yes, we also tested IOVA=0x1000; it has the problem too.

But one of the differences between (0x0, 0xa) and (0x0, 0xc000) is that the former
can only use 4K mappings in the DMA page table while the latter uses 2M mappings. Is it
possible that the hardware cache management does something wrong in this case?

> >  sometimes DMA success (as expected),
> >  but sometimes DMA error (report not-present).
> >  {UNMAP, 0x0, 0xc000}, - (c)
> >  {MAP,   0x0, 0xa},
> >  {MAP,   0xc, 0xbff4},
> >  }
> >
> > The DMA read operations sholud success between (b) and (c), it should
> > NOT report not-present at least!
> >
> > After analysis the problem, we think maybe it's caused by the Intel iommu 
> > iotlb.
> > It seems the DMA Remapping hardware still uses the IOTLB or other caches of
> (a).
> >
> > When do DMA unmap at (a), the iotlb will be flush:
> >  intel_iommu_unmap
> >  domain_unmap
> >  iommu_flush_iotlb_psi
> >
> > When do DMA map at (b), no need to flush the iotlb according to the
> > capability of this iommu:
> >  intel_iommu_map
> >  domain_pfn_mapping
> >  domain_mapping
> >  __mapping_notify_one
> >  if (cap_caching_mode(iommu->cap)) // FALSE
> >  iommu_flush_iotlb_psi
> 
> That's true. The iotlb flushing is not needed in case of PTE been changed from
> non-present to present unless caching mode.
> 

Yes, I also think the driver code is correct. But it's confusing that the problem
disappears if we force a flush here.

> > But the problem will disappear if we FORCE flush here. So we suspect
> > the iommu hardware.
> >
> > Do you have any suggestion ?
> 
> Best regards,
> baolu


RE: A problem of Intel IOMMU hardware ?

2021-03-17 Thread Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Hi Nadav,

> -Original Message-
> From: Nadav Amit [mailto:nadav.a...@gmail.com]
> Sent: Wednesday, March 17, 2021 1:46 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> 
> Cc: David Woodhouse ; Lu Baolu
> ; Joerg Roedel ; w...@kernel.org;
> alex.william...@redhat.com; chenjiashang ;
> io...@lists.linux-foundation.org; Gonglei (Arei) ;
> LKML 
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> 
> 
> > On Mar 16, 2021, at 8:16 PM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.)  wrote:
> >
> > Hi guys,
> >
> > We find the Intel iommu cache (i.e. iotlb) maybe works wrong in a
> > special situation, it would cause DMA fails or get wrong data.
> >
> > The reproducer (based on Alex's vfio testsuite[1]) is in attachment,
> > it can reproduce the problem with high probability (~50%).
> 
> I saw Lu replied, and he is much more knowledgable than I am (I was just 
> intrigued
> by your email).
> 
> However, if I were you I would try also to remove some “optimizations” to 
> look for
> the root-cause (e.g., use domain specific invalidations instead of 
> page-specific).
> 

Good suggestion! We already tried that these days; we used global invalidations
as follows:
    iommu->flush.flush_iotlb(iommu, did, 0, 0,
                             DMA_TLB_DSI_FLUSH);
but it could not resolve the problem.

> The first thing that comes to my mind is the invalidation hint (ih) in
> iommu_flush_iotlb_psi(). I would remove it to see whether you get the failure
> without it.

We also noticed the IH, but the IH is always ZERO in our case, as the spec says:
'''
Paging-structure-cache entries caching second-level mappings associated with 
the specified
domain-id and the second-level-input-address range are invalidated, if the 
Invalidation Hint
(IH) field is Clear.
'''

It seems everything is fine on the software side, so we've no choice but to suspect the
hardware.


Re: A problem of Intel IOMMU hardware ?

2021-03-16 Thread Nadav Amit


> On Mar 16, 2021, at 8:16 PM, Longpeng (Mike, Cloud Infrastructure Service 
> Product Dept.)  wrote:
> 
> Hi guys,
> 
> We find the Intel iommu cache (i.e. iotlb) maybe works wrong in a special
> situation, it would cause DMA fails or get wrong data.
> 
> The reproducer (based on Alex's vfio testsuite[1]) is in attachment, it can
> reproduce the problem with high probability (~50%).

I saw Lu replied, and he is much more knowledgable than I am (I was just
intrigued by your email).

However, if I were you I would try also to remove some “optimizations” to
look for the root-cause (e.g., use domain specific invalidations instead
of page-specific).

The first thing that comes to my mind is the invalidation hint (ih) in
iommu_flush_iotlb_psi(). I would remove it to see whether you get the
failure without it.





Re: A problem of Intel IOMMU hardware ?

2021-03-16 Thread Lu Baolu

Hi Longpeng,

On 3/17/21 11:16 AM, Longpeng (Mike, Cloud Infrastructure Service 
Product Dept.) wrote:

Hi guys,

We find that the Intel iommu cache (i.e. the iotlb) may work incorrectly in a special
situation; it can cause DMA failures or wrong data.

The reproducer (based on Alex's vfio testsuite[1]) is in attachment, it can
reproduce the problem with high probability (~50%).

The machine we used is:
processor   : 47
vendor_id   : GenuineIntel
cpu family  : 6
model   : 85
model name  : Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
stepping: 4
microcode   : 0x269

And the iommu capability reported is:
ver 1:0 cap 8d2078c106f0466 ecap f020df
(caching mode = 0 , page-selective invalidation = 1)

(The problem is also on 'Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz' and
'Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz')

We run the reproducer on Linux 4.18 and it works as follow:

Step 1. alloc 4G *2M-hugetlb* memory (N.B. no problem with 4K-page mapping)


I don't understand what 2M-hugetlb means here exactly. The IOMMU hardware
supports both 2M and 1G super pages. The mapped physical memory is 4G.
Why couldn't it use 1G super pages?


Step 2. DMA Map 4G memory
Step 3.
 while (1) {
 {UNMAP, 0x0, 0xa},  (a)
 {UNMAP, 0xc, 0xbff4},


Have these two ranges been mapped before? Does the IOMMU driver
complain when you try to unmap a range which has never been
mapped? The IOMMU driver implicitly assumes that mapping and
unmapping are paired.


 {MAP,   0x0, 0xc000}, - (b)
 use GDB to pause at here, and then DMA read IOVA=0,


IOVA 0 seems to be a special one. Have you verified with other addresses
than IOVA 0?


 sometimes DMA success (as expected),
 but sometimes DMA error (report not-present).
 {UNMAP, 0x0, 0xc000}, - (c)
 {MAP,   0x0, 0xa},
 {MAP,   0xc, 0xbff4},
 }

The DMA read operations should succeed between (b) and (c); it should NOT report
not-present at least!

After analyzing the problem, we think maybe it's caused by the Intel iommu iotlb.
It seems the DMA Remapping hardware still uses the IOTLB or other caches from (a).

When do DMA unmap at (a), the iotlb will be flush:
 intel_iommu_unmap
 domain_unmap
 iommu_flush_iotlb_psi

When do DMA map at (b), no need to flush the iotlb according to the capability
of this iommu:
 intel_iommu_map
 domain_pfn_mapping
 domain_mapping
 __mapping_notify_one
 if (cap_caching_mode(iommu->cap)) // FALSE
 iommu_flush_iotlb_psi


That's true. The iotlb flushing is not needed when a PTE is
changed from non-present to present, unless in caching mode.


But the problem will disappear if we FORCE flush here. So we suspect the iommu
hardware.

Do you have any suggestion ?


Best regards,
baolu