Re: [PATCH v3 4/7] iommu/vt-d: Handle non-page aligned address

2020-07-06 Thread Jacob Pan
On Thu, 2 Jul 2020 09:50:19 +0200
Auger Eric  wrote:

> Hi Jacob,
> 
> On 7/1/20 5:33 PM, Jacob Pan wrote:
> > From: Liu Yi L 
> > 
> > Address information for device TLB invalidation comes from userspace
> > when the device is directly assigned to a guest with vIOMMU support.
> > VT-d requires a page aligned address. This patch checks and enforces
> > that the address is page aligned; otherwise reserved bits can be set
> > in the invalidation descriptor and an unrecoverable fault will be
> > reported due to the non-zero value in the reserved bits.
> On the other hand, if userspace sends unaligned invalidations,
> shouldn't that be reported in some way?
> > 
> > Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
> > Acked-by: Lu Baolu 
> > Signed-off-by: Liu Yi L 
> > Signed-off-by: Jacob Pan 
> > 
> > Signed-off-by: Jacob Pan 
> > ---
> >  drivers/iommu/intel/dmar.c | 20 ++--
> >  1 file changed, 18 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> > index d9f973fa1190..3899f3161071 100644
> > --- a/drivers/iommu/intel/dmar.c
> > +++ b/drivers/iommu/intel/dmar.c
> > @@ -1455,9 +1455,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> >  	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
> >  	 * ECAP.
> >  	 */
> > -	desc.qw1 |= addr & ~mask;
> > -	if (size_order)
> > +	if (addr & ~VTD_PAGE_MASK)
> > +		pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
> > +
> > +	/* Take page address */
> > +	desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
> > +
> > +	if (size_order) {
> > +		/*
> > +		 * Existing 0s in address below size_order may be the least
> > +		 * significant bit, we must set them to 1s to avoid having
> > +		 * smaller size than desired.
> Shouldn't you test the input addr against the size_order? Aren't they
> supposed to be consistent? Otherwise one should emit a warning at
> least.
Will check that size_order and addr are consistent and emit a warning,
combining it with the VTD_PAGE_MASK check above.
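Something along the lines of the below (untested sketch; the warning
text is illustrative), which covers both the page alignment and the
size_order consistency in one check, since an address aligned to the
invalidation size is also page aligned:

	if (!IS_ALIGNED(addr, VTD_PAGE_SIZE << size_order))
		pr_warn_ratelimited("Invalidate non-aligned address %llx, order %d\n",
				    addr, size_order);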

> > +		 */
> > +		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
> > +					VTD_PAGE_SHIFT);
> nit: instead of working directly on .qw1, couldn't you perform all
> those manipulations directly on addr and only assign qw1 at the end?
That would work too; I just felt it is more readable this way, showing
which fields of qw1 are being manipulated.
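For comparison, my understanding of the suggestion (untested sketch;
"eaddr" is just an illustrative local name), keeping the same bit
manipulation as the patch:

	u64 eaddr = addr;

	if (size_order) {
		/* same GENMASK/mask manipulation as above, on a local copy */
		eaddr |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT, VTD_PAGE_SHIFT);
		eaddr &= ~mask;
		/* S bit: flushing more than 1 page */
		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
	}
	desc.qw1 |= QI_DEV_EIOTLB_ADDR(eaddr);

Functionally it should come out the same either way, so mostly a matter
of taste.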

> > +		/* Clear size_order bit to indicate size */
> > +		desc.qw1 &= ~mask;
> > +		/* Set the S bit to indicate flushing more than 1 page */
> >  		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> > +	}
> >  
> >  	qi_submit_sync(iommu, &desc, 1, 0);
> >  }
> > 
> Thanks
> 
> Eric
> 

[Jacob Pan]


Re: [PATCH v3 4/7] iommu/vt-d: Handle non-page aligned address

2020-07-02 Thread Auger Eric
Hi Jacob,

On 7/1/20 5:33 PM, Jacob Pan wrote:
> From: Liu Yi L 
> 
> Address information for device TLB invalidation comes from userspace
> when the device is directly assigned to a guest with vIOMMU support.
> VT-d requires a page aligned address. This patch checks and enforces
> that the address is page aligned; otherwise reserved bits can be set
> in the invalidation descriptor and an unrecoverable fault will be
> reported due to the non-zero value in the reserved bits.
On the other hand, if userspace sends unaligned invalidations, shouldn't
that be reported in some way?
> 
> Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
> Acked-by: Lu Baolu 
> Signed-off-by: Liu Yi L 
> Signed-off-by: Jacob Pan 
> 
> Signed-off-by: Jacob Pan 
> ---
>  drivers/iommu/intel/dmar.c | 20 ++--
>  1 file changed, 18 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> index d9f973fa1190..3899f3161071 100644
> --- a/drivers/iommu/intel/dmar.c
> +++ b/drivers/iommu/intel/dmar.c
> @@ -1455,9 +1455,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
>  	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
>  	 * ECAP.
>  	 */
> -	desc.qw1 |= addr & ~mask;
> -	if (size_order)
> +	if (addr & ~VTD_PAGE_MASK)
> +		pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
> +
> +	/* Take page address */
> +	desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
> +
> +	if (size_order) {
> +		/*
> +		 * Existing 0s in address below size_order may be the least
> +		 * significant bit, we must set them to 1s to avoid having
> +		 * smaller size than desired.
Shouldn't you test the input addr against the size_order? Aren't they
supposed to be consistent? Otherwise one should emit a warning at least.
> +		 */
> +		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
> +					VTD_PAGE_SHIFT);
nit: instead of working directly on .qw1, couldn't you perform all those
manipulations directly on addr and only assign qw1 at the end?
> +		/* Clear size_order bit to indicate size */
> +		desc.qw1 &= ~mask;
> +		/* Set the S bit to indicate flushing more than 1 page */
>  		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> +	}
>  
>  	qi_submit_sync(iommu, &desc, 1, 0);
>  }
> 
Thanks

Eric



[PATCH v3 4/7] iommu/vt-d: Handle non-page aligned address

2020-07-01 Thread Jacob Pan
From: Liu Yi L 

Address information for device TLB invalidation comes from userspace
when the device is directly assigned to a guest with vIOMMU support.
VT-d requires a page aligned address. This patch checks and enforces
that the address is page aligned; otherwise reserved bits can be set
in the invalidation descriptor and an unrecoverable fault will be
reported due to the non-zero value in the reserved bits.

Fixes: 61a06a16e36d8 ("iommu/vt-d: Support flushing more translation cache types")
Acked-by: Lu Baolu 
Signed-off-by: Liu Yi L 
Signed-off-by: Jacob Pan 

Signed-off-by: Jacob Pan 
---
 drivers/iommu/intel/dmar.c | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index d9f973fa1190..3899f3161071 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1455,9 +1455,25 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
 	 * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
 	 * ECAP.
 	 */
-	desc.qw1 |= addr & ~mask;
-	if (size_order)
+	if (addr & ~VTD_PAGE_MASK)
+		pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
+
+	/* Take page address */
+	desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);
+
+	if (size_order) {
+		/*
+		 * Existing 0s in address below size_order may be the least
+		 * significant bit, we must set them to 1s to avoid having
+		 * smaller size than desired.
+		 */
+		desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
+					VTD_PAGE_SHIFT);
+		/* Clear size_order bit to indicate size */
+		desc.qw1 &= ~mask;
+		/* Set the S bit to indicate flushing more than 1 page */
 		desc.qw1 |= QI_DEV_EIOTLB_SIZE;
+	}
 
 	qi_submit_sync(iommu, &desc, 1, 0);
 }
-- 
2.7.4
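As a worked example of the encoding the in-code comment above refers
to: with the S bit set, the position of the least significant 0 bit in
the address field marks the invalidation range, and every bit below
that position must be 1. The stand-alone sketch below (user-space C
rather than the kernel code; PAGE_SHIFT, encode_eiotlb_addr and the
sample values are illustrative names only) shows the bit manipulation,
assuming addr is already aligned to the invalidation size:

	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SHIFT 12ULL

	/* addr is assumed aligned to 1 << (PAGE_SHIFT + size_order) */
	static uint64_t encode_eiotlb_addr(uint64_t addr, unsigned int size_order)
	{
		uint64_t val = addr & ~((1ULL << PAGE_SHIFT) - 1);	/* page address */

		if (size_order) {
			/* set bits [PAGE_SHIFT .. PAGE_SHIFT + size_order - 2] to 1 */
			val |= ((1ULL << (size_order - 1)) - 1) << PAGE_SHIFT;
			/* clear bit PAGE_SHIFT + size_order - 1: the least
			 * significant 0 marks the range */
			val &= ~(1ULL << (PAGE_SHIFT + size_order - 1));
		}
		return val;
	}

	int main(void)
	{
		/* 4 pages (16KB) at 0x10000: bit 12 set, bit 13 clear -> 0x11000 */
		printf("%llx\n", (unsigned long long)encode_eiotlb_addr(0x10000, 2));
		return 0;
	}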
