Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-12 Thread Julien Grall

Hello,

On 12/06/2016 07:55, Tian, Kevin wrote:

From: Julien Grall [mailto:julien.gr...@arm.com]
Sent: Thursday, June 09, 2016 8:40 PM

Hi Jan,

On 09/06/16 13:32, Jan Beulich wrote:

On 09.06.16 at 14:24,  wrote:

So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst
the behavior of iommu_{,un}map_page is defined by the common code.


I'm certainly up for moving the logic up to the generic IOMMU layer,
if that's feasible.


That would be my preference.



I suppose the above comment is for "[Patch v11 0/3] VT-d Device-TLB flush
issue", where the crash logic had better be moved up. Do you have any
further objection to this patch then?


None. I trust you to ensure that the patch to move the crash logic is 
upstreamed soon:


Acked-by: Julien Grall 

Regards,

--
Julien Grall
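
For illustration, a minimal sketch of what "moving the crash logic up to the
generic IOMMU layer" could look like, assuming the ->iotlb_flush hook has
already been converted to return an error code (as later patches in the
series do). The printk wording and the exact placement are assumptions, not
what the VT-d series actually does:

int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
                      unsigned int page_count)
{
    const struct domain_iommu *hd = dom_iommu(d);
    int rc;

    if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
        return 0;

    rc = hd->platform_ops->iotlb_flush(d, gfn, page_count);
    if ( unlikely(rc) )
    {
        if ( !d->is_shutting_down && printk_ratelimit() )
            printk(XENLOG_ERR
                   "d%d: IOMMU IOTLB flush failed: %d\n", d->domain_id, rc);

        /* Running with stale IOTLB entries is unsafe for untrusted domains. */
        if ( !is_hardware_domain(d) )
            domain_crash(d);
    }

    return rc;
}

With the crash centralised in the wrapper, individual drivers would only need
to report the failure, mirroring how iommu_{,un}map_page already behave.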



Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-12 Thread Xu, Quan
On June 12, 2016 2:56 PM, Tian, Kevin  wrote:
> > From: Julien Grall [mailto:julien.gr...@arm.com]
> > Sent: Thursday, June 09, 2016 8:40 PM
> >
> > Hi Jan,
> >
> > On 09/06/16 13:32, Jan Beulich wrote:
> >  On 09.06.16 at 14:24,  wrote:
> > >> So the behavior of iommu_iotlb_flush is up to the IOMMU driver,
> > >> whilst the behavior of iommu_{,un}map_page is defined by the
> > >> common code.
> > >
> > > I'm certainly up for moving the logic up to the generic IOMMU layer,
> > > if that's feasible.
> >
> > That would be my preference.
> >
> 
> I suppose the above comment is for "[Patch v11 0/3] VT-d Device-TLB flush
> issue", where the crash logic had better be moved up.

I think so too. However, I am still reading the related AMD/ARM code.

> Do you have any further objection
> to this patch then?

Quan


Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-12 Thread Tian, Kevin
> From: Julien Grall [mailto:julien.gr...@arm.com]
> Sent: Thursday, June 09, 2016 8:40 PM
> 
> Hi Jan,
> 
> On 09/06/16 13:32, Jan Beulich wrote:
>  On 09.06.16 at 14:24,  wrote:
> >> So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst
> >> the behavior of iommu_{,un}map_page is defined by the common code.
> >
> > I'm certainly up for moving the logic up to the generic IOMMU layer,
> > if that's feasible.
> 
> That would be my preference.
> 

I suppose the above comment is for "[Patch v11 0/3] VT-d Device-TLB flush
issue", where the crash logic had better be moved up. Do you have any
further objection to this patch then?

Thanks
Kevin


Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-12 Thread Tian, Kevin
> From: Xu, Quan
> Sent: Wednesday, June 08, 2016 4:59 PM
> 
> From: Quan Xu 
> 
> Signed-off-by: Quan Xu 
> Reviewed-by: Jan Beulich 
> 

Reviewed-by: Kevin Tian 



Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Julien Grall

Hi Jan,

On 09/06/16 13:32, Jan Beulich wrote:

On 09.06.16 at 14:24,  wrote:

So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst
the behavior of iommu_{,un}map_page is defined by the common code.


I'm certainly up for moving the logic up to the generic IOMMU layer,
if that's feasible.


That would be my preference.

Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Jan Beulich
>>> On 09.06.16 at 14:24,  wrote:
> On 09/06/16 13:15, Jan Beulich wrote:
> On 09.06.16 at 14:08,  wrote:
>>
>>>
>>> On 09/06/16 13:03, Julien Grall wrote:
 On 09/06/16 12:45, Jan Beulich wrote:
 On 09.06.16 at 13:12,  wrote:
>> On 08/06/16 09:58, Xu, Quan wrote:
>>> From: Quan Xu 
>>>
>>> Signed-off-by: Quan Xu 
>>> Reviewed-by: Jan Beulich 
>>>
>>> CC: Stefano Stabellini 
>>> CC: Julien Grall 
>>> CC: Jan Beulich 
>>> CC: Kevin Tian 
>>> ---
>>> xen/arch/arm/p2m.c  |  4 +++-
>>> xen/common/memory.c | 12 ++--
>>> xen/drivers/passthrough/iommu.c | 13 +
>>> xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>> xen/include/xen/iommu.h |  5 +++--
>>> 5 files changed, 28 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index 6a19c57..65d8f1a 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -1178,7 +1178,9 @@ out:
>>> if ( flush )
>>> {
>>> flush_tlb_domain(d);
>>> -iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>> +ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>
>> Sorry for coming late to the discussion. What kind of error do you
>> expect iommu_iotlb_flush to return?
>>
>> Today the ARM SMMU driver always returns 0, even if the TLB flush times
>> out (see arm_smmu_tlb_inv_context).
>>
>> We may want to return an error on timeout in the future; however, only
>> returning an error is not safe at all. The TLB may still contain stale
>> entries (because the mapping was removed earlier) and a device/domain
>> could take advantage of that.
>>
>> So I am not sure whether we should let the guest keep running when a
>> flush has timed out. Any thoughts?
>
> Well, did you look at the rest of this series, and the other dependent
> one? Guests (other than Dom0) get crashed when a flush times out. I
>>>
>>> I missed the bit "other dependent one". Which series are you talking
>>> about? The cover letter does not give any dependent series...
>>
>> "[Patch v11 0/3] VT-d Device-TLB flush issue"
> 
> To be honest, I do not usually go through x86 patches. In this case,
> the only call to domain_crash I can spot is in patch #2, which is Intel
> VT-d specific.

Which is why I said you probably want this on ARM too.

> So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst
> the behavior of iommu_{,un}map_page is defined by the common code.

I'm certainly up for moving the logic up to the generic IOMMU layer,
if that's feasible.

Jan




Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Julien Grall

On 09/06/16 13:15, Jan Beulich wrote:

On 09.06.16 at 14:08,  wrote:




On 09/06/16 13:03, Julien Grall wrote:

On 09/06/16 12:45, Jan Beulich wrote:

On 09.06.16 at 13:12,  wrote:

On 08/06/16 09:58, Xu, Quan wrote:

From: Quan Xu 

Signed-off-by: Quan Xu 
Reviewed-by: Jan Beulich 

CC: Stefano Stabellini 
CC: Julien Grall 
CC: Jan Beulich 
CC: Kevin Tian 
---
xen/arch/arm/p2m.c  |  4 +++-
xen/common/memory.c | 12 ++--
xen/drivers/passthrough/iommu.c | 13 +
xen/drivers/passthrough/x86/iommu.c |  5 +++--
xen/include/xen/iommu.h |  5 +++--
5 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6a19c57..65d8f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,9 @@ out:
if ( flush )
{
flush_tlb_domain(d);
-iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);


Sorry for coming late to the discussion. What kind of error do you
expect iommu_iotlb_flush to return?

Today the ARM SMMU driver always returns 0, even if the TLB flush times
out (see arm_smmu_tlb_inv_context).

We may want to return an error on timeout in the future; however, only
returning an error is not safe at all. The TLB may still contain stale
entries (because the mapping was removed earlier) and a device/domain
could take advantage of that.

So I am not sure whether we should let the guest keep running when a
flush has timed out. Any thoughts?


Well, did you look at the rest of this series, and the other dependent
one? Guests (other than Dom0) get crashed when a flush times out. I


I missed the bit "other dependent one". Which series are you talking
about? The cover letter does not give any dependent series...


"[Patch v11 0/3] VT-d Device-TLB flush issue"


To be honest, I do not usually go through x86 patches. In this case,
the only call to domain_crash I can spot is in patch #2, which is Intel
VT-d specific.


So the behavior of iommu_iotlb_flush is up to the IOMMU driver, whilst
the behavior of iommu_{,un}map_page is defined by the common code.


This looks odd to me. If this is expected, then this needs to be 
documented somewhere.
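
For context, the common-code wrapper for the map path takes roughly the
following shape in the series under review, which is what "defined by the
common code" refers to. This is a paraphrased sketch rather than the exact
patch text:

int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                   unsigned int flags)
{
    const struct domain_iommu *hd = dom_iommu(d);
    int rc;

    if ( !iommu_enabled || !hd->platform_ops )
        return 0;

    rc = hd->platform_ops->map_page(d, gfn, mfn, flags);
    if ( unlikely(rc) )
    {
        if ( !d->is_shutting_down && printk_ratelimit() )
            printk(XENLOG_ERR
                   "d%d: IOMMU mapping gfn %#lx to mfn %#lx failed: %d\n",
                   d->domain_id, gfn, mfn, rc);

        /* The common code, not the driver, decides to crash the domain. */
        if ( !is_hardware_domain(d) )
            domain_crash(d);
    }

    return rc;
}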


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Jan Beulich
>>> On 09.06.16 at 14:08,  wrote:

> 
> On 09/06/16 13:03, Julien Grall wrote:
>> On 09/06/16 12:45, Jan Beulich wrote:
>> On 09.06.16 at 13:12,  wrote:
 On 08/06/16 09:58, Xu, Quan wrote:
> From: Quan Xu 
>
> Signed-off-by: Quan Xu 
> Reviewed-by: Jan Beulich 
>
> CC: Stefano Stabellini 
> CC: Julien Grall 
> CC: Jan Beulich 
> CC: Kevin Tian 
> ---
>xen/arch/arm/p2m.c  |  4 +++-
>xen/common/memory.c | 12 ++--
>xen/drivers/passthrough/iommu.c | 13 +
>xen/drivers/passthrough/x86/iommu.c |  5 +++--
>xen/include/xen/iommu.h |  5 +++--
>5 files changed, 28 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index 6a19c57..65d8f1a 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1178,7 +1178,9 @@ out:
>if ( flush )
>{
>flush_tlb_domain(d);
> -iommu_iotlb_flush(d, sgfn, egfn - sgfn);
> +ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);

 Sorry for coming late to the discussion. What kind of error do you
 expect iommu_iotlb_flush to return?

 Today the ARM SMMU driver always returns 0, even if the TLB flush times
 out (see arm_smmu_tlb_inv_context).

 We may want to return an error on timeout in the future; however, only
 returning an error is not safe at all. The TLB may still contain stale
 entries (because the mapping was removed earlier) and a device/domain
 could take advantage of that.

 So I am not sure whether we should let the guest keep running when a
 flush has timed out. Any thoughts?
>>>
>>> Well, did you look at the rest of this series, and the other dependent
>>> one? Guests (other than Dom0) get crashed when a flush times out. I
> 
> I missed the bit "other dependent one". Which series are you talking 
> about? The cover letter does not give any dependent series...

"[Patch v11 0/3] VT-d Device-TLB flush issue"

Jan




Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Jan Beulich
>>> On 09.06.16 at 14:03,  wrote:

> 
> On 09/06/16 12:45, Jan Beulich wrote:
> On 09.06.16 at 13:12,  wrote:
>>> On 08/06/16 09:58, Xu, Quan wrote:
 From: Quan Xu 

 Signed-off-by: Quan Xu 
 Reviewed-by: Jan Beulich 

 CC: Stefano Stabellini 
 CC: Julien Grall 
 CC: Jan Beulich 
 CC: Kevin Tian 
 ---
xen/arch/arm/p2m.c  |  4 +++-
xen/common/memory.c | 12 ++--
xen/drivers/passthrough/iommu.c | 13 +
xen/drivers/passthrough/x86/iommu.c |  5 +++--
xen/include/xen/iommu.h |  5 +++--
5 files changed, 28 insertions(+), 11 deletions(-)

 diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
 index 6a19c57..65d8f1a 100644
 --- a/xen/arch/arm/p2m.c
 +++ b/xen/arch/arm/p2m.c
 @@ -1178,7 +1178,9 @@ out:
if ( flush )
{
flush_tlb_domain(d);
 -iommu_iotlb_flush(d, sgfn, egfn - sgfn);
 +ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>>>
>>> Sorry for coming late to the discussion. What kind of error do you
>>> expect iommu_iotlb_flush to return?
>>>
>>> Today the ARM SMMU driver always returns 0, even if the TLB flush times
>>> out (see arm_smmu_tlb_inv_context).
>>>
>>> We may want to return an error on timeout in the future; however, only
>>> returning an error is not safe at all. The TLB may still contain stale
>>> entries (because the mapping was removed earlier) and a device/domain
>>> could take advantage of that.
>>>
>>> So I am not sure whether we should let the guest keep running when a
>>> flush has timed out. Any thoughts?
>>
>> Well, did you look at the rest of this series, and the other dependent
>> one? Guests (other than Dom0) get crashed when a flush times out. I
>> guess that's what you will want on ARM then too.
> 
> I have found a call to domain_crash in iommu_map_page and
> iommu_unmap_page. However, I have not found any for iommu_iotlb_flush.

Perhaps in that other, 3-patch series then?

Jan




Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Julien Grall



On 09/06/16 13:03, Julien Grall wrote:

On 09/06/16 12:45, Jan Beulich wrote:

On 09.06.16 at 13:12,  wrote:

On 08/06/16 09:58, Xu, Quan wrote:

From: Quan Xu 

Signed-off-by: Quan Xu 
Reviewed-by: Jan Beulich 

CC: Stefano Stabellini 
CC: Julien Grall 
CC: Jan Beulich 
CC: Kevin Tian 
---
   xen/arch/arm/p2m.c  |  4 +++-
   xen/common/memory.c | 12 ++--
   xen/drivers/passthrough/iommu.c | 13 +
   xen/drivers/passthrough/x86/iommu.c |  5 +++--
   xen/include/xen/iommu.h |  5 +++--
   5 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6a19c57..65d8f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,9 @@ out:
   if ( flush )
   {
   flush_tlb_domain(d);
-iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);


Sorry for coming late to the discussion. What kind of error do you
expect iommu_iotlb_flush to return?

Today the ARM SMMU driver always returns 0, even if the TLB flush times
out (see arm_smmu_tlb_inv_context).

We may want to return an error on timeout in the future; however, only
returning an error is not safe at all. The TLB may still contain stale
entries (because the mapping was removed earlier) and a device/domain
could take advantage of that.

So I am not sure whether we should let the guest keep running when a
flush has timed out. Any thoughts?


Well, did you look at the rest of this series, and the other dependent
one? Guests (other than Dom0) get crashed when a flush times out. I


I missed the bit "other dependent one". Which series are you talking 
about? The cover letter does not give any dependent series...


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Julien Grall



On 09/06/16 12:45, Jan Beulich wrote:

On 09.06.16 at 13:12,  wrote:

On 08/06/16 09:58, Xu, Quan wrote:

From: Quan Xu 

Signed-off-by: Quan Xu 
Reviewed-by: Jan Beulich 

CC: Stefano Stabellini 
CC: Julien Grall 
CC: Jan Beulich 
CC: Kevin Tian 
---
   xen/arch/arm/p2m.c  |  4 +++-
   xen/common/memory.c | 12 ++--
   xen/drivers/passthrough/iommu.c | 13 +
   xen/drivers/passthrough/x86/iommu.c |  5 +++--
   xen/include/xen/iommu.h |  5 +++--
   5 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6a19c57..65d8f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,9 @@ out:
   if ( flush )
   {
   flush_tlb_domain(d);
-iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);


Sorry for coming late to the discussion. What kind of error do you
expect iommu_iotlb_flush to return?

Today the ARM SMMU driver always returns 0, even if the TLB flush times
out (see arm_smmu_tlb_inv_context).

We may want to return an error on timeout in the future; however, only
returning an error is not safe at all. The TLB may still contain stale
entries (because the mapping was removed earlier) and a device/domain
could take advantage of that.

So I am not sure whether we should let the guest keep running when a
flush has timed out. Any thoughts?


Well, did you look at the rest of this series, and the other dependent
one? Guests (other than Dom0) get crashed when a flush times out. I
guess that's what you will want on ARM then too.


I have found a call to domain_crash in iommu_map_page and
iommu_unmap_page. However, I have not found any for iommu_iotlb_flush.


iommu_unmap_page and iommu_map_page are not called in the p2m code
because the page tables are shared with the SMMU. So the guest will not
be crashed if the flush has timed out.
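
Until the crash logic moves into the common code, a minimal sketch of how
the p2m side could get the equivalent behaviour directly (the placement in
apply_p2m_changes() and the exact condition are assumptions, not an agreed
plan):

    if ( flush )
    {
        flush_tlb_domain(d);
        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
        if ( unlikely(ret) )
        {
            /* Devices must not keep using stale IOTLB entries. */
            if ( !is_hardware_domain(d) )
                domain_crash(d);
            if ( !rc )
                rc = ret;
        }
    }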


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Jan Beulich
>>> On 09.06.16 at 13:12,  wrote:
> On 08/06/16 09:58, Xu, Quan wrote:
>> From: Quan Xu 
>>
>> Signed-off-by: Quan Xu 
>> Reviewed-by: Jan Beulich 
>>
>> CC: Stefano Stabellini 
>> CC: Julien Grall 
>> CC: Jan Beulich 
>> CC: Kevin Tian 
>> ---
>>   xen/arch/arm/p2m.c  |  4 +++-
>>   xen/common/memory.c | 12 ++--
>>   xen/drivers/passthrough/iommu.c | 13 +
>>   xen/drivers/passthrough/x86/iommu.c |  5 +++--
>>   xen/include/xen/iommu.h |  5 +++--
>>   5 files changed, 28 insertions(+), 11 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index 6a19c57..65d8f1a 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -1178,7 +1178,9 @@ out:
>>   if ( flush )
>>   {
>>   flush_tlb_domain(d);
>> -iommu_iotlb_flush(d, sgfn, egfn - sgfn);
>> +ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
> 
> Sorry for coming late to the discussion. What kind of error do you
> expect iommu_iotlb_flush to return?
> 
> Today the ARM SMMU driver always returns 0, even if the TLB flush times
> out (see arm_smmu_tlb_inv_context).
> 
> We may want to return an error on timeout in the future; however, only
> returning an error is not safe at all. The TLB may still contain stale
> entries (because the mapping was removed earlier) and a device/domain
> could take advantage of that.
> 
> So I am not sure whether we should let the guest keep running when a
> flush has timed out. Any thoughts?

Well, did you look at the rest of this series, and the other dependent
one? Guests (other than Dom0) get crashed when a flush times out. I
guess that's what you will want on ARM then too.

Jan




Re: [Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-09 Thread Julien Grall

Hello,

On 08/06/16 09:58, Xu, Quan wrote:

From: Quan Xu 

Signed-off-by: Quan Xu 
Reviewed-by: Jan Beulich 

CC: Stefano Stabellini 
CC: Julien Grall 
CC: Jan Beulich 
CC: Kevin Tian 
---
  xen/arch/arm/p2m.c  |  4 +++-
  xen/common/memory.c | 12 ++--
  xen/drivers/passthrough/iommu.c | 13 +
  xen/drivers/passthrough/x86/iommu.c |  5 +++--
  xen/include/xen/iommu.h |  5 +++--
  5 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6a19c57..65d8f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,9 @@ out:
  if ( flush )
  {
  flush_tlb_domain(d);
-iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);


Sorry for coming late to the discussion. What kind of error do you
expect iommu_iotlb_flush to return?

Today the ARM SMMU driver always returns 0, even if the TLB flush times
out (see arm_smmu_tlb_inv_context).

We may want to return an error on timeout in the future; however, only
returning an error is not safe at all. The TLB may still contain stale
entries (because the mapping was removed earlier) and a device/domain
could take advantage of that.

So I am not sure whether we should let the guest keep running when a
flush has timed out. Any thoughts?
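
For reference, the change hinted at above would sit in the TLB sync polling
loop. A hedged sketch follows: the identifiers track the existing driver
only loosely, and the int return is a hypothetical change (today the
function returns void and merely logs the timeout):

static int __arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
{
    int count = 0;
    void __iomem *gr0_base = ARM_SMMU_GR0(smmu);

    writel_relaxed(0, gr0_base + ARM_SMMU_GR0_sTLBGSYNC);
    while ( readl_relaxed(gr0_base + ARM_SMMU_GR0_sTLBGSTATUS)
            & sTLBGSTATUS_GSACTIVE )
    {
        if ( ++count == TLB_LOOP_TIMEOUT )
        {
            dev_err_ratelimited(smmu->dev,
                                "TLB sync timed out -- SMMU may be deadlocked\n");
            /* Hypothetical: report the timeout instead of giving up silently. */
            return -ETIMEDOUT;
        }
        udelay(1);
    }

    return 0;
}

Callers such as arm_smmu_tlb_inv_context would then have to propagate that
value up through the iotlb_flush hook for the common code to see it.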


Regards,

--
Julien Grall



[Xen-devel] [PATCH v7 05/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones)

2016-06-08 Thread Xu, Quan
From: Quan Xu 

Signed-off-by: Quan Xu 
Reviewed-by: Jan Beulich 

CC: Stefano Stabellini 
CC: Julien Grall 
CC: Jan Beulich 
CC: Kevin Tian 
---
 xen/arch/arm/p2m.c                  |  4 +++-
 xen/common/memory.c                 | 12 ++++++++++--
 xen/drivers/passthrough/iommu.c     | 13 +++++++++----
 xen/drivers/passthrough/x86/iommu.c |  5 +++--
 xen/include/xen/iommu.h             |  5 +++--
 5 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 6a19c57..65d8f1a 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1178,7 +1178,9 @@ out:
     if ( flush )
     {
         flush_tlb_domain(d);
-        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        ret = iommu_iotlb_flush(d, sgfn, egfn - sgfn);
+        if ( !rc )
+            rc = ret;
     }
 
     while ( (pg = page_list_remove_head(&free_pages)) )
diff --git a/xen/common/memory.c b/xen/common/memory.c
index ccc6436..46b1d9f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -683,9 +683,17 @@ static int xenmem_add_to_physmap(struct domain *d,
 #ifdef CONFIG_HAS_PASSTHROUGH
     if ( need_iommu(d) )
     {
+        int ret;
+
         this_cpu(iommu_dont_flush_iotlb) = 0;
-        iommu_iotlb_flush(d, xatp->idx - done, done);
-        iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
+
+        ret = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+        if ( unlikely(ret) && rc >= 0 )
+            rc = ret;
     }
 #endif
 
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ec85352..3a73fab 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -311,24 +311,29 @@ static void iommu_free_pagetables(unsigned long unused)
                             cpumask_cycle(smp_processor_id(), &cpu_online_map));
 }
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count)
+int iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                      unsigned int page_count)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush(d, gfn, page_count);
+
+    return 0;
 }
 
-void iommu_iotlb_flush_all(struct domain *d)
+int iommu_iotlb_flush_all(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops || !hd->platform_ops->iotlb_flush_all )
-        return;
+        return 0;
 
     hd->platform_ops->iotlb_flush_all(d);
+
+    return 0;
 }
 
 int __init iommu_setup(void)
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index b64b98f..a18a608 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -104,8 +104,9 @@ int arch_iommu_populate_page_table(struct domain *d)
     this_cpu(iommu_dont_flush_iotlb) = 0;
 
     if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
+        rc = iommu_iotlb_flush_all(d);
+
+    if ( rc && rc != -ERESTART )
         iommu_teardown(d);
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 2b86710..57c9fbc 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -200,8 +200,9 @@ int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
-void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
-void iommu_iotlb_flush_all(struct domain *d);
+int __must_check iommu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                   unsigned int page_count);
+int __must_check iommu_iotlb_flush_all(struct domain *d);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
-- 
1.9.1
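
One note on the error-accumulation pattern above: the flush error replaces
rc unless rc already holds a negative error, so the first failure is what
gets reported. The `rc >= 0` form is used in xenmem_add_to_physmap(), where
rc may also carry a positive continuation value; in the ARM p2m hunk rc is
only 0 or a negative error at that point, so the simpler `if ( !rc )`
suffices. A standalone illustration of the idiom (plain C, not Xen code;
the helper names and return values are made up):

/* Standalone illustration of the "first error wins" accumulation idiom. */
#include <stdio.h>

static int flush_a(void) { return 0; }    /* pretend this flush succeeds */
static int flush_b(void) { return -14; }  /* pretend this one fails (-EFAULT) */
static int flush_c(void) { return -22; }  /* a later failure (-EINVAL) */

int main(void)
{
    int rc = 0, ret;

    ret = flush_a();
    if ( ret && rc >= 0 )  /* only record an error if none is recorded yet */
        rc = ret;

    ret = flush_b();
    if ( ret && rc >= 0 )
        rc = ret;

    ret = flush_c();
    if ( ret && rc >= 0 )  /* -22 is ignored: -14 was seen first */
        rc = ret;

    printf("rc = %d\n", rc);  /* prints -14 */
    return 0;
}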

