On Thu, 25 Jun 2020 18:05:52 +0800
Lu Baolu <[email protected]> wrote:

> Hi,
> 
> On 2020/6/23 23:43, Jacob Pan wrote:
> > From: Liu Yi L <[email protected]>
> > 
> > Address information for device TLB invalidation comes from userspace
> > when a device is directly assigned to a guest with vIOMMU support.
> > VT-d requires a page-aligned address. This patch checks and enforces
> > that the address is page aligned; otherwise reserved bits can be set
> > in the invalidation descriptor, and an unrecoverable fault will be
> > reported due to the non-zero value in the reserved bits.
> > 
> > Signed-off-by: Liu Yi L <[email protected]>
> > Signed-off-by: Jacob Pan <[email protected]>
> > ---
> >   drivers/iommu/intel/dmar.c | 19 +++++++++++++++++--
> >   1 file changed, 17 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
> > index d9f973fa1190..53f4e5003620 100644
> > --- a/drivers/iommu/intel/dmar.c
> > +++ b/drivers/iommu/intel/dmar.c
> > @@ -1455,9 +1455,24 @@ void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> >      * Max Invs Pending (MIP) is set to 0 for now until we have DIT in
> >      * ECAP.
> >      */
> > -   desc.qw1 |= addr & ~mask;
> > -   if (size_order)
> > +   if (addr & ~VTD_PAGE_MASK)
> > +           pr_warn_ratelimited("Invalidate non-page aligned address %llx\n", addr);
> > +
> > +   if (size_order) {
> > +           /* Take page address */
> > +           desc.qw1 |= QI_DEV_EIOTLB_ADDR(addr);  
> 
> 
> If size_order == 0 (that means only a single page is about to be
> invalidated), do you still need to set ADDR field of the descriptor?
> 
Good catch! We should always set addr. I will move the addr assignment
out of the if condition.
> Best regards,
> baolu
> 
> > +           /*
> > +            * Existing 0s in address below size_order may be the least
> > +            * significant bit, we must set them to 1s to avoid having
> > +            * smaller size than desired.
> > +            */
> > +           desc.qw1 |= GENMASK_ULL(size_order + VTD_PAGE_SHIFT,
> > +                                   VTD_PAGE_SHIFT);
> > +           /* Clear size_order bit to indicate size */
> > +           desc.qw1 &= ~mask;
> > +           /* Set the S bit to indicate flushing more than 1 page */
> > +           desc.qw1 |= QI_DEV_EIOTLB_SIZE;
> > +   }
> >   
> >     qi_submit_sync(iommu, &desc, 1, 0);
> >   }
> >   

[Jacob Pan]
_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu