The Intel VT-d implementation supports device TLB management. Select
PCI_ATS explicitly so that the pci_ats helpers are always available.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/iommu/intel/Kconfig b/drivers/iommu/intel/
The Intel IOMMU driver reports the DMA fault reason in a decimal number
while the VT-d specification uses a hexadecimal one. It's inconvenient
that users need to convert them every time before consulting the spec.
Let's use hexadecimal number for a DMA fault reason.
The fault message uses 0x
When first-level page tables are used for IOVA translation, we use user
privilege by setting U/S bit in the page table entry. This is to make it
consistent with the second level translation, where the U/S enforcement
is not available. Clear the SRE (Supervisor Request Enable) field in the
pasid tab
On Tue, May 11, 2021 at 04:47:26PM -0300, Jason Gunthorpe wrote:
> > Let me try to break down your concerns:
> > 1. portability - a driver using DMA APIs can function w/ and w/o IOMMU. Is
> > that your concern? But PASID is intrinsically tied with IOMMU and if
> > the drivers are using a generic sva-l
On 5/11/21 3:40 PM, Keqian Zhu wrote:
For upper layers, before starting page tracking, they check the
dirty_page_trackable attribute of the domain and start it only if it's
capable. Once page tracking is switched on, the vendor iommu driver
(or iommu core) should block further device attach/det
> From: Jason Gunthorpe
> Sent: Wednesday, May 12, 2021 8:25 AM
>
> On Wed, May 12, 2021 at 12:21:24AM +, Tian, Kevin wrote:
>
> > > Basically each RID knows based on its kernel drivers if it is a local
> > > or global RID and the ioasid knob can further fine tune this for any
> > > other sp
On Wed, May 12, 2021 at 12:21:24AM +, Tian, Kevin wrote:
> > Basically each RID knows based on its kernel drivers if it is a local
> > or global RID and the ioasid knob can further fine tune this for any
> > other specialty cases.
>
> It's fine if you insist on this way. Then we leave it to u
> From: Jason Gunthorpe
> Sent: Wednesday, May 12, 2021 7:40 AM
>
> On Tue, May 11, 2021 at 10:51:40PM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Tuesday, May 11, 2021 10:39 PM
> > >
> > > On Tue, May 11, 2021 at 09:10:03AM +, Tian, Kevin wrote:
> > >
> > > > 3) SRIOV,
On Tue, May 11, 2021 at 10:51:40PM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Tuesday, May 11, 2021 10:39 PM
> >
> > On Tue, May 11, 2021 at 09:10:03AM +, Tian, Kevin wrote:
> >
> > > 3) SRIOV, ENQCMD (Intel):
> > > - "PASID global" with host-allocated PASIDs;
> > > -
> From: Liu Yi L
> Sent: Tuesday, May 11, 2021 9:25 PM
>
> On Tue, 11 May 2021 09:10:03 +, Tian, Kevin wrote:
>
> > > From: Jason Gunthorpe
> > > Sent: Monday, May 10, 2021 8:37 PM
> > >
> > [...]
> > > > gPASID!=hPASID has a problem when assigning a physical device which
> > > > supports bo
> From: Jason Gunthorpe
> Sent: Tuesday, May 11, 2021 10:39 PM
>
> On Tue, May 11, 2021 at 09:10:03AM +, Tian, Kevin wrote:
>
> > 3) SRIOV, ENQCMD (Intel):
> > - "PASID global" with host-allocated PASIDs;
> > - PASID table managed by host (in HPA space);
> > - all RIDs bound to t
On Tue, May 11, 2021 at 11:05:50AM -0700, Jacob Pan wrote:
> Hi Jason,
>
> On Tue, 11 May 2021 13:35:21 -0300, Jason Gunthorpe wrote:
>
> > On Tue, May 11, 2021 at 09:14:52AM -0700, Jacob Pan wrote:
> >
> > > > Honestly, I'm not convinced we should have "kernel SVA" at all.. Why
> > > > does ID
On Tue, Mar 2, 2021 at 7:54 AM Jordan Crouse wrote:
>
> On Tue, Mar 02, 2021 at 12:17:24PM +, Robin Murphy wrote:
> > On 2021-02-25 17:51, Jordan Crouse wrote:
> > > Call report_iommu_fault() to allow upper-level drivers to register their
> > > own fault handlers.
> > >
> > > Signed-off-by: Jo
Hi Jason,
On Tue, 11 May 2021 13:35:21 -0300, Jason Gunthorpe wrote:
> On Tue, May 11, 2021 at 09:14:52AM -0700, Jacob Pan wrote:
>
> > > Honestly, I'm not convinced we should have "kernel SVA" at all.. Why
> > > does IDXD use normal DMA on the RID for kernel controlled accesses?
> >
> > Usi
On Tue, May 11, 2021 at 02:56:05PM +0800, Lu Baolu wrote:
> > After my next series the mdev drivers will have direct access to
> > the vfio_device. So an alternative to using the struct device, or
> > adding 'if mdev' is to add an API to the vfio_device world to
> > inject what iom
On Tue, May 11, 2021 at 09:51:20AM -0700, Stefano Stabellini wrote:
> On Tue, 11 May 2021, Christoph Hellwig wrote:
> > On Tue, May 11, 2021 at 09:47:33AM -0700, Stefano Stabellini wrote:
> > > That's a much better plan. It is also not super urgent, so maybe for now
> > > we could add an explicit c
On Tue, 11 May 2021, Christoph Hellwig wrote:
> On Tue, May 11, 2021 at 09:47:33AM -0700, Stefano Stabellini wrote:
> > That's a much better plan. It is also not super urgent, so maybe for now
> > we could add an explicit check for io_tlb_default_mem != NULL at the
> > beginning of xen_swiotlb_init
On Mon, May 10, 2021 at 11:03 PM Christoph Hellwig wrote:
>
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +#include
> > +#include
> > +#include
> > +#include
> > +#include
> > +#endif
>
> I don't think any of this belongs into swiotlb.c. Marking
> swiotlb_init_io_tlb_mem non-static and having a
On Mon, May 10, 2021 at 11:03 PM Christoph Hellwig wrote:
>
> > +static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
> > +{
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > + if (dev && dev->dma_io_tlb_mem)
> > + return dev->dma_io_tlb_mem;
> > +#endif /* CONFIG_DMA_RESTR
On Mon, May 10, 2021 at 11:05 PM Christoph Hellwig wrote:
>
> > +static inline bool is_dev_swiotlb_force(struct device *dev)
> > +{
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > + if (dev->dma_io_tlb_mem)
> > + return true;
> > +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> > + return
On Tue, May 11, 2021 at 09:47:33AM -0700, Stefano Stabellini wrote:
> That's a much better plan. It is also not super urgent, so maybe for now
> we could add an explicit check for io_tlb_default_mem != NULL at the
> beginning of xen_swiotlb_init? So that at least we can fail explicitly
> or ignore
On Tue, 11 May 2021, Christoph Hellwig wrote:
> On Mon, May 10, 2021 at 06:46:34PM -0700, Stefano Stabellini wrote:
> > On Mon, 10 May 2021, Christoph Hellwig wrote:
> > > On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> > > > The pointer dereferenced seems to suggest that the swiotl
On 2021-05-11 10:06 a.m., Don Dutile wrote:
> On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
>> When a PCI P2PDMA page is seen, set the IOVA length of the segment
>> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
>> apply the appropriate bus address to the segment. The IOVA is
On Tue, May 11, 2021 at 09:14:52AM -0700, Jacob Pan wrote:
> > Honestly, I'm not convinced we should have "kernel SVA" at all.. Why
> > does IDXD use normal DMA on the RID for kernel controlled accesses?
>
> Using SVA simplifies the work submission, there is no need to do map/unmap.
> Just bind P
On 5/11/21 12:12 PM, Logan Gunthorpe wrote:
On 2021-05-11 10:05 a.m., Don Dutile wrote:
On 5/1/21 11:58 PM, John Hubbard wrote:
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
In order to call upstream_bridge_distance_warn() from a dma_map function,
it must not sleep. The only reason it does sleep
On 2021-05-11 10:06 a.m., Don Dutile wrote:
> On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
>> Add a flags member to the dma_map_ops structure with one flag to
>> indicate support for PCI P2PDMA.
>>
>> Also, add a helper to check if a device supports PCI P2PDMA.
>>
>> Signed-off-by: Logan Gunthorpe
On 2021-05-11 10:06 a.m., Don Dutile wrote:
> On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
>> pci_p2pdma_map_type() will be needed by the dma-iommu map_sg
>> implementation because it will need to determine the mapping type
>> ahead of actually doing the mapping to create the actual iommu mapping.
On 2021-05-11 10:05 a.m., Don Dutile wrote:
> ... add a flag (set for p2pdma use) to the function to print out what the
> root->devfn is, and what
> the device is so the needed quirk &/or modification can be added to handle when
> this assumption fails;
> or make it a pr_debug that can be flipped
On 2021-05-11 10:05 a.m., Don Dutile wrote:
> On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
>> In order to use upstream_bridge_distance_warn() from a dma_map function,
>> it must not sleep. However, pci_get_slot() takes the pci_bus_sem so it
>> might sleep.
>>
>> In order to avoid this, try to get t
On 2021-05-11 10:05 a.m., Don Dutile wrote:
> On 5/1/21 11:58 PM, John Hubbard wrote:
>> On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
>>> In order to call upstream_bridge_distance_warn() from a dma_map function,
>>> it must not sleep. The only reason it does sleep is to allocate the seqbuf
>>> to p
Hi Jason,
On Tue, 11 May 2021 08:48:48 -0300, Jason Gunthorpe wrote:
> On Mon, May 10, 2021 at 08:31:45PM -0700, Jacob Pan wrote:
> > Hi Jason,
> >
> > On Mon, 10 May 2021 20:37:49 -0300, Jason Gunthorpe
> > wrote:
> > > On Mon, May 10, 2021 at 06:25:07AM -0700, Jacob Pan wrote:
> > >
> >
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
pci_p2pdma_map_type() will be needed by the dma-iommu map_sg
implementation because it will need to determine the mapping type
ahead of actually doing the mapping to create the actual iommu mapping.
Signed-off-by: Logan Gunthorpe
---
drivers/pci/p2pdm
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
When a PCI P2PDMA page is seen, set the IOVA length of the segment
to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
Add a flags member to the dma_map_ops structure with one flag to
indicate support for PCI P2PDMA.
Also, add a helper to check if a device supports PCI P2PDMA.
Signed-off-by: Logan Gunthorpe
---
include/linux/dma-map-ops.h | 3 +++
include/linux/dma
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
dma_map_sg() either returns a positive number indicating the number
of entries mapped or zero indicating that resources were not available
to create the mapping. When zero is returned, it is always safe to retry
the mapping later once resources have been
On 5/2/21 3:58 PM, John Hubbard wrote:
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
Attempt to find the mapping type for P2PDMA pages on the first
DMA map attempt if it has not been done ahead of time.
Previously, the mapping type was expected to be calculated ahead of
time, but if pages are to c
On 5/2/21 1:35 AM, John Hubbard wrote:
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
In order to use upstream_bridge_distance_warn() from a dma_map function,
it must not sleep. However, pci_get_slot() takes the pci_bus_sem so it
might sleep.
In order to avoid this, try to get the host bridge's dev
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
In order to use upstream_bridge_distance_warn() from a dma_map function,
it must not sleep. However, pci_get_slot() takes the pci_bus_sem so it
might sleep.
In order to avoid this, try to get the host bridge's device from
bus->self, and if that is not se
On 5/1/21 11:58 PM, John Hubbard wrote:
On 4/8/21 10:01 AM, Logan Gunthorpe wrote:
In order to call upstream_bridge_distance_warn() from a dma_map function,
it must not sleep. The only reason it does sleep is to allocate the seqbuf
to print which devices are within the ACS path.
Switch the kmal
On 4/8/21 1:01 PM, Logan Gunthorpe wrote:
Hi,
This patchset continues my work to add P2PDMA support to the common
dma map operations. This allows for creating SGLs that have both P2PDMA
and regular pages which is a necessary step to allowing P2PDMA pages in
userspace.
The earlier RFC[1] gene
On Tue, May 11, 2021 at 09:10:03AM +, Tian, Kevin wrote:
> 3) SRIOV, ENQCMD (Intel):
> - "PASID global" with host-allocated PASIDs;
> - PASID table managed by host (in HPA space);
> - all RIDs bound to this ioasid_fd use the global pool;
> - however, exposing global PAS
On Tue, 11 May 2021 09:10:03 +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Monday, May 10, 2021 8:37 PM
> >
> [...]
> > > gPASID!=hPASID has a problem when assigning a physical device which
> > > supports both shared work queue (ENQCMD with PASID in MSR)
> > > and dedicated wor
On Mon, May 10, 2021 at 08:31:45PM -0700, Jacob Pan wrote:
> Hi Jason,
>
> On Mon, 10 May 2021 20:37:49 -0300, Jason Gunthorpe wrote:
>
> > On Mon, May 10, 2021 at 06:25:07AM -0700, Jacob Pan wrote:
> >
> > > +/*
> > > + * The IOMMU_SVA_BIND_SUPERVISOR flag requests a PASID which can be
> > > u
Hi Alex,
Hoping for some suggestions or comments from you, since there seem to be many
uncertain points in this series. :-)
Thanks,
Shenming
On 2021/4/26 9:41, Shenming Lu wrote:
> On 2021/4/9 11:44, Shenming Lu wrote:
>> Hi,
>>
>> Requesting for your comments and suggestions. :-)
>
> Kind ping...
On 2021-05-10 12:53, chenxiang wrote:
From: Xiang Chen
It is not necessary to call free_iova_mem() while holding the
iova_rbtree_lock spinlock, which only leads to more contention on the lock.
The change gives a small performance improvement. It also
renames private_free_iova() as r
> From: Jason Gunthorpe
> Sent: Monday, May 10, 2021 8:37 PM
>
[...]
> > gPASID!=hPASID has a problem when assigning a physical device which
> > supports both shared work queue (ENQCMD with PASID in MSR)
> > and dedicated work queue (PASID in device register) to a guest
> > process which is assoc
On 11.05.21 10:50, Christoph Hellwig wrote:
On Tue, May 11, 2021 at 09:35:20AM +0200, Christian König wrote:
We are certainly going to need drm_need_swiotlb() for userptr support
(unless we add some approach for drivers to opt out of swiotlb).
swiotlb use is driven by three things:
1) ad
On Tue, May 11, 2021 at 09:35:20AM +0200, Christian König wrote:
> We are certainly going to need drm_need_swiotlb() for userptr support
> (unless we add some approach for drivers to opt out of swiotlb).
swiotlb use is driven by three things:
1) addressing limitations of the device
2) addressi
Hi Baolu,
On 2021/5/11 11:12, Lu Baolu wrote:
> Hi Keqian,
>
> On 5/10/21 7:07 PM, Keqian Zhu wrote:
> I suppose this interface is to ask the vendor IOMMU driver to check
> whether each device/iommu in the domain supports dirty bit tracking.
> But what will happen if new devices with
On 11.05.21 08:05, Christoph Hellwig wrote:
Use the dma_alloc_pages allocator for the TTM pool allocator.
This allocator is a front end to the page allocator which takes the
DMA mask of the device into account, thus offering the best of both
worlds of the two existing allocator versions. Thi