Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher order map/unmap operations

2018-12-19 Thread Julien Grall

Hi,

On 17/12/2018 09:08, Paul Durrant wrote:

@@ -345,7 +352,26 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
   return rc;
   }

-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
+int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
+ unsigned int page_order, unsigned int flags)
+{
+unsigned int flush_flags = 0;


newline here.


Ah yes.


Actually, hang on... no. Why would I need a newline between two stack variable 
initializations?


Yes. Sorry I misread the code.

Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher order map/unmap operations

2018-12-17 Thread Paul Durrant
> -Original Message-
> From: Xen-devel [mailto:xen-devel-boun...@lists.xenproject.org] On Behalf
> Of Paul Durrant
> Sent: 17 December 2018 08:57
> To: 'Julien Grall' ; xen-devel@lists.xenproject.org
> Cc: Kevin Tian ; Stefano Stabellini
> ; Wei Liu ; Suravee
> Suthikulpanit ; Konrad Rzeszutek Wilk
> ; Andrew Cooper ; Tim
> (Xen.org) ; George Dunlap ; Jan
> Beulich ; Ian Jackson ; Brian
> Woods ; Roger Pau Monne 
> Subject: Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher
> order map/unmap operations
> 
> > -Original Message-
> > From: Julien Grall [mailto:julien.gr...@arm.com]
> > Sent: 14 December 2018 15:18
> > To: Paul Durrant ; xen-
> de...@lists.xenproject.org
> > Cc: Stefano Stabellini ; Andrew Cooper
> > ; George Dunlap ;
> Ian
> > Jackson ; Jan Beulich ;
> Konrad
> > Rzeszutek Wilk ; Tim (Xen.org) ;
> Wei
> > Liu ; Suravee Suthikulpanit
> > ; Brian Woods ;
> Kevin
> > Tian ; Roger Pau Monne 
> > Subject: Re: [PATCH v4 3/4] iommu: elide flushing for higher order
> > map/unmap operations
> >
> > Hi Paul,
> >
> > On 12/6/18 3:34 PM, Paul Durrant wrote:
> > > This patch removes any implicit flushing that occurs in the
> > implementation
> > > of map and unmap operations and adds new iommu_map/unmap() wrapper
> > > functions. To maintain sematics of the iommu_legacy_map/unmap()
> wrapper
> >
> > NIT: s/sematics/semantics/
> 
> Good spot.
> 
> >
> > > functions, these are modified to call the new wrapper functions and
> then
> > > perform an explicit flush operation.
> > >
> > > Because VT-d currently performs two different types of flush dependent
> > upon
> > > whether a PTE is being modified versus merely added (i.e. replacing a
> > non-
> > > present PTE) 'iommu flush flags' are defined by this patch and the
> > > iommu_ops map_page() and unmap_page() methods are modified to OR the
> > type
> > > of flush necessary for the PTE that has been populated or depopulated
> > into
> > > an accumulated flags value. The accumulated value can then be passed
> > into
> > > the explicit flush operation.
> > >
> > > The ARM SMMU implementations of map_page() and unmap_page() currently
> > > perform no implicit flushing and therefore the modified methods do not
> > > adjust the flush flags.
> >
> > I am a bit confused with the explanation here. map_page()/unmap_page()
> > will require the IOMMU TLBs to be flushed. So what do you mean by implicit?
> >
> 
> What I mean is that, without this patch, the x86 implementations of the
> map/unmap_page() functions (i.e. both the AMD and Intel ones) flush the TLB
> as necessary at the end of the operation, whereas the ARM implementation (i.e.
> SMMU) does not. The only flushing is done explicitly by the P2M code.
> 
> > [...]
> >
> > > diff --git a/xen/drivers/passthrough/arm/smmu.c
> > b/xen/drivers/passthrough/arm/smmu.c
> > > index 9612c0fddc..5d12639e97 100644
> > > --- a/xen/drivers/passthrough/arm/smmu.c
> > > +++ b/xen/drivers/passthrough/arm/smmu.c
> > > @@ -2534,9 +2534,12 @@ static int __must_check
> > arm_smmu_iotlb_flush_all(struct domain *d)
> > >   return 0;
> > >   }
> > >
> > > -static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t
> > dfn,
> > > - unsigned int page_count)
> > > +static int __must_check arm_smmu_iotlb_flush(
> > > + struct domain *d, dfn_t dfn, unsigned int page_count,
> > > + unsigned int flush_flags)
> >
> > Can we keep the parameters aligned to (?
> >
> 
> Possibly; now that this is an unsigned int rather than an enum (as it was in
> earlier versions), it won't blow the 80 char limit any more. I'll check.
> 
> > >   {
> > > + ASSERT(flush_flags);
> > > +
> > >   /* ARM SMMU v1 doesn't have flush by VMA and VMID */
> > >   return arm_smmu_iotlb_flush_all(d);
> > >   }
> > > @@ -2731,8 +2734,9 @@ static void
> arm_smmu_iommu_domain_teardown(struct
> > domain *d)
> > >   xfree(xen_domain);
> > >   }
> > >
> > > -static int __must_check arm_smmu_map_page(struct domain *d, dfn_t
> dfn,
> > > -   mfn_t mfn, unsigned int flags)
> > > +static int __must_check arm_smmu_map_page(
> > > + struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
> > > + unsigned int *flush_

Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher order map/unmap operations

2018-12-17 Thread Paul Durrant
> -Original Message-
> From: Julien Grall [mailto:julien.gr...@arm.com]
> Sent: 14 December 2018 15:18
> To: Paul Durrant ; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini ; Andrew Cooper
> ; George Dunlap ; Ian
> Jackson ; Jan Beulich ; Konrad
> Rzeszutek Wilk ; Tim (Xen.org) ; Wei
> Liu ; Suravee Suthikulpanit
> ; Brian Woods ; Kevin
> Tian ; Roger Pau Monne 
> Subject: Re: [PATCH v4 3/4] iommu: elide flushing for higher order
> map/unmap operations
> 
> Hi Paul,
> 
> On 12/6/18 3:34 PM, Paul Durrant wrote:
> > This patch removes any implicit flushing that occurs in the
> implementation
> > of map and unmap operations and adds new iommu_map/unmap() wrapper
> > functions. To maintain sematics of the iommu_legacy_map/unmap() wrapper
> 
> NIT: s/sematics/semantics/

Good spot.

> 
> > functions, these are modified to call the new wrapper functions and then
> > perform an explicit flush operation.
> >
> > Because VT-d currently performs two different types of flush dependent
> upon
> > whether a PTE is being modified versus merely added (i.e. replacing a
> non-
> > present PTE) 'iommu flush flags' are defined by this patch and the
> > iommu_ops map_page() and unmap_page() methods are modified to OR the
> type
> > of flush necessary for the PTE that has been populated or depopulated
> into
> > an accumulated flags value. The accumulated value can then be passed
> into
> > the explicit flush operation.
> >
> > The ARM SMMU implementations of map_page() and unmap_page() currently
> > perform no implicit flushing and therefore the modified methods do not
> > adjust the flush flags.
> 
> I am a bit confused with the explanation here. map_page()/unmap_page()
> will require the IOMMU TLBs to be flushed. So what do you mean by implicit?
> 

What I mean is that, without this patch, the x86 implementations of the 
map/unmap_page() functions (i.e. both the AMD and Intel ones) flush the TLB 
as necessary at the end of the operation, whereas the ARM implementation (i.e. 
SMMU) does not. The only flushing is done explicitly by the P2M code.
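
To make that concrete, below is a small standalone C sketch of the accumulate-then-flush pattern the series moves to. It is only an illustration: the IOMMU_FLUSHF_* names and the helper shapes are assumptions standing in for the real Xen code, with plain unsigned ints in place of real page tables.

#include <stdio.h>

/* Assumed flag names, mirroring the 'iommu flush flags' idea. */
#define IOMMU_FLUSHF_added    (1u << 0) /* previously non-present PTE populated */
#define IOMMU_FLUSHF_modified (1u << 1) /* existing PTE changed or cleared */

/*
 * map_page()-style helper: update one PTE and OR the type of flush it
 * needs into the caller's accumulator instead of flushing immediately.
 */
static int map_one(unsigned int *pte, unsigned int new_pte,
                   unsigned int *flush_flags)
{
    *flush_flags |= *pte ? IOMMU_FLUSHF_modified : IOMMU_FLUSHF_added;
    *pte = new_pte;

    return 0;
}

/*
 * Legacy-wrapper-style caller: map a whole range, then issue a single
 * explicit flush covering everything that was touched.
 */
static int legacy_map(unsigned int *ptes, unsigned int count)
{
    unsigned int flush_flags = 0;
    unsigned int i;

    for ( i = 0; i < count; i++ )
        map_one(&ptes[i], i + 1, &flush_flags);

    if ( flush_flags ) /* one flush for the whole order, not one per PTE */
        printf("explicit flush, flags=%#x\n", flush_flags);

    return 0;
}

int main(void)
{
    unsigned int ptes[4] = { 0, 7, 0, 0 }; /* second entry already present */

    return legacy_map(ptes, 4);
}

Built and run, the sketch reports a single flush with flags 0x3, because one entry was already present (modified) and the others were newly added, rather than flushing implicitly inside each per-page map operation as the pre-patch x86 code does.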

> [...]
> 
> > diff --git a/xen/drivers/passthrough/arm/smmu.c
> b/xen/drivers/passthrough/arm/smmu.c
> > index 9612c0fddc..5d12639e97 100644
> > --- a/xen/drivers/passthrough/arm/smmu.c
> > +++ b/xen/drivers/passthrough/arm/smmu.c
> > @@ -2534,9 +2534,12 @@ static int __must_check
> arm_smmu_iotlb_flush_all(struct domain *d)
> > return 0;
> >   }
> >
> > -static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t
> dfn,
> > - unsigned int page_count)
> > +static int __must_check arm_smmu_iotlb_flush(
> > +   struct domain *d, dfn_t dfn, unsigned int page_count,
> > +   unsigned int flush_flags)
> 
> Can we keep the parameters aligned to (?
> 

Possibly; now that this is an unsigned int rather than an enum (as it was in 
earlier versions), it won't blow the 80 char limit any more. I'll check.
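
For reference, the aligned form being asked for would presumably read as below; the stand-in definitions are only there so the fragment is self-contained and are not Xen's real ones.

/* Stand-ins so the fragment compiles on its own; the real definitions are Xen's. */
#define __must_check __attribute__((__warn_unused_result__))
struct domain;
typedef struct { unsigned long dfn; } dfn_t;

static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
                                             unsigned int page_count,
                                             unsigned int flush_flags);

With the unsigned int parameter, both continuation lines sit under the opening parenthesis and still fit within 80 columns.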

> >   {
> > +   ASSERT(flush_flags);
> > +
> > /* ARM SMMU v1 doesn't have flush by VMA and VMID */
> > return arm_smmu_iotlb_flush_all(d);
> >   }
> > @@ -2731,8 +2734,9 @@ static void arm_smmu_iommu_domain_teardown(struct
> domain *d)
> > xfree(xen_domain);
> >   }
> >
> > -static int __must_check arm_smmu_map_page(struct domain *d, dfn_t dfn,
> > - mfn_t mfn, unsigned int flags)
> > +static int __must_check arm_smmu_map_page(
> > +   struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
> > +   unsigned int *flush_flags)
> 
> Same here.
> 
> >   {
> > p2m_type_t t;
> >
> 
> [...]
> 
> > @@ -345,7 +352,26 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn,
> mfn_t mfn,
> >   return rc;
> >   }
> >
> > -int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int
> page_order)
> > +int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
> > + unsigned int page_order, unsigned int flags)
> > +{
> > +unsigned int flush_flags = 0;
> 
> newline here.

Ah yes.

  Paul

> 
> > > +int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
> 
> Cheers,
> 
> --
> Julien Grall

Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher order map/unmap operations

2018-12-14 Thread Julien Grall

Hi Paul,

On 12/6/18 3:34 PM, Paul Durrant wrote:

This patch removes any implicit flushing that occurs in the implementation
of map and unmap operations and adds new iommu_map/unmap() wrapper
functions. To maintain sematics of the iommu_legacy_map/unmap() wrapper


NIT: s/sematics/semantics/


functions, these are modified to call the new wrapper functions and then
perform an explicit flush operation.

Because VT-d currently performs two different types of flush dependent upon
whether a PTE is being modified versus merely added (i.e. replacing a non-
present PTE) 'iommu flush flags' are defined by this patch and the
iommu_ops map_page() and unmap_page() methods are modified to OR the type
of flush necessary for the PTE that has been populated or depopulated into
an accumulated flags value. The accumulated value can then be passed into
the explicit flush operation.

The ARM SMMU implementations of map_page() and unmap_page() currently
perform no implicit flushing and therefore the modified methods do not
adjust the flush flags.


I am a bit confused with the explanation here. map_page()/unmap_page() 
will require the IOMMU TLBs to be flushed. So what do you mean by implicit?


[...]


diff --git a/xen/drivers/passthrough/arm/smmu.c 
b/xen/drivers/passthrough/arm/smmu.c
index 9612c0fddc..5d12639e97 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2534,9 +2534,12 @@ static int __must_check arm_smmu_iotlb_flush_all(struct 
domain *d)
return 0;
  }
  
-static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
- unsigned int page_count)
+static int __must_check arm_smmu_iotlb_flush(
+   struct domain *d, dfn_t dfn, unsigned int page_count,
+   unsigned int flush_flags)


Can we keep the parameters aligned to (?


  {
+   ASSERT(flush_flags);
+
/* ARM SMMU v1 doesn't have flush by VMA and VMID */
return arm_smmu_iotlb_flush_all(d);
  }
@@ -2731,8 +2734,9 @@ static void arm_smmu_iommu_domain_teardown(struct domain 
*d)
xfree(xen_domain);
  }
  
-static int __must_check arm_smmu_map_page(struct domain *d, dfn_t dfn,
- mfn_t mfn, unsigned int flags)
+static int __must_check arm_smmu_map_page(
+   struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
+   unsigned int *flush_flags)


Same here.


  {
p2m_type_t t;
  


[...]


@@ -345,7 +352,26 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t 
mfn,
  return rc;
  }
  
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
+int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
+ unsigned int page_order, unsigned int flags)
+{
+unsigned int flush_flags = 0;


newline here.


+int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);


Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher order map/unmap operations

2018-12-12 Thread Tian, Kevin
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Thursday, December 6, 2018 11:34 PM
> 
> This patch removes any implicit flushing that occurs in the implementation
> of map and unmap operations and adds new iommu_map/unmap()
> wrapper
> functions. To maintain sematics of the iommu_legacy_map/unmap()
> wrapper
> functions, these are modified to call the new wrapper functions and then
> perform an explicit flush operation.
> 
> Because VT-d currently performs two different types of flush dependent
> upon
> whether a PTE is being modified versus merely added (i.e. replacing a non-
> present PTE) 'iommu flush flags' are defined by this patch and the
> iommu_ops map_page() and unmap_page() methods are modified to OR
> the type
> of flush necessary for the PTE that has been populated or depopulated into
> an accumulated flags value. The accumulated value can then be passed into
> the explicit flush operation.
> 
> The ARM SMMU implementations of map_page() and unmap_page()
> currently
> perform no implicit flushing and therefore the modified methods do not
> adjust the flush flags.
> 
> NOTE: The per-cpu 'iommu_dont_flush_iotlb' is respected by the
>   iommu_legacy_map/unmap() wrapper functions and therefore this now
>   applies to all IOMMU implementations rather than just VT-d.
> 
> Signed-off-by: Paul Durrant 

Reviewed-by: Kevin Tian 
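
As an illustration of the NOTE above, here is a minimal standalone sketch of a wrapper honouring a per-cpu 'don't flush' hint. The names and the exact check are assumptions modelled on the description, with local stand-ins replacing Xen's per-cpu machinery; it is not the patch code.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the per-cpu 'iommu_dont_flush_iotlb' hint. */
static bool iommu_dont_flush_iotlb;

static int iommu_iotlb_flush(unsigned int flush_flags)
{
    printf("explicit IOTLB flush, flags=%#x\n", flush_flags);
    return 0;
}

/* Pretend map operation: reports via flush_flags that a PTE was added. */
static int iommu_map_stub(unsigned int *flush_flags)
{
    *flush_flags |= 1u;
    return 0;
}

/*
 * Legacy-wrapper shape: because the wrapper now owns the explicit flush,
 * checking the hint here makes it apply to every IOMMU implementation,
 * not just VT-d.
 */
static int iommu_legacy_map_stub(void)
{
    unsigned int flush_flags = 0;
    int rc = iommu_map_stub(&flush_flags);

    if ( !rc && !iommu_dont_flush_iotlb && flush_flags )
        rc = iommu_iotlb_flush(flush_flags);

    return rc;
}

int main(void)
{
    iommu_dont_flush_iotlb = true;  /* caller will batch its own flush */

    return iommu_legacy_map_stub(); /* no flush issued here */
}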

Re: [Xen-devel] [PATCH v4 3/4] iommu: elide flushing for higher order map/unmap operations

2018-12-07 Thread Jan Beulich
>>> On 06.12.18 at 16:34,  wrote:
> This patch removes any implicit flushing that occurs in the implementation
> of map and unmap operations and adds new iommu_map/unmap() wrapper
> functions. To maintain sematics of the iommu_legacy_map/unmap() wrapper
> functions, these are modified to call the new wrapper functions and then
> perform an explicit flush operation.
> 
> Because VT-d currently performs two different types of flush dependent upon
> whether a PTE is being modified versus merely added (i.e. replacing a non-
> present PTE) 'iommu flush flags' are defined by this patch and the
> iommu_ops map_page() and unmap_page() methods are modified to OR the type
> of flush necessary for the PTE that has been populated or depopulated into
> an accumulated flags value. The accumulated value can then be passed into
> the explicit flush operation.
> 
> The ARM SMMU implementations of map_page() and unmap_page() currently
> perform no implicit flushing and therefore the modified methods do not
> adjust the flush flags.
> 
> NOTE: The per-cpu 'iommu_dont_flush_iotlb' is respected by the
>   iommu_legacy_map/unmap() wrapper functions and therefore this now
>   applies to all IOMMU implementations rather than just VT-d.
> 
> Signed-off-by: Paul Durrant 

Reviewed-by: Jan Beulich 


