-by: Thomas Gleixner
> ---
> V3: New patch
> ---
> drivers/pci/msi/msi.c | 23 +--
> 1 file changed, 17 insertions(+), 6 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Mon, Dec 13, 2021 at 08:50:05AM +0800, Lu Baolu wrote:
> > Does this work for you? Can I work towards this in the next version?
>
> A gentle ping ... Is this heading in the right direction? I need your
> advice to move ahead. :-)
I prefer this to all the duplicated code in the v3 series.. Given
On Sun, Dec 12, 2021 at 01:12:05AM +0100, Thomas Gleixner wrote:
> PCI/MSI and PCI/MSI-X are just implementations of IMS
>
> Not more, not less. The fact that they have very strict rules about the
> storage space and the fact that they are mutually exclusive does not
> change that at all.
A
On Sun, Dec 12, 2021 at 09:55:32PM +0100, Thomas Gleixner wrote:
> Kevin,
>
> On Sun, Dec 12 2021 at 01:56, Kevin Tian wrote:
> >> From: Thomas Gleixner
> >> All I can find is drivers/iommu/virtio-iommu.c but I can't find anything
> >> vIR related there.
> >
> > Well, virtio-iommu is a para-virtu
On Sat, Dec 11, 2021 at 08:39:12AM +, Tian, Kevin wrote:
> Uniqueness is not the main argument for using global PASIDs for
> SWQ, since it can be defined either in per-RID or in global PASID
> space. No SVA architecture can allow two processes to use the
> same PASID to submit work unless they
On Sun, Dec 12, 2021 at 08:44:46AM +0200, Mika Penttilä wrote:
> > /*
> > * The MSIX mappable capability informs that MSIX data of a BAR can be
> > mmapped
> > * which allows direct access to non-MSIX registers which happened to be
> > within
> > * the same system page.
> > *
> > * Eve
On Fri, Dec 10, 2021 at 10:18:20AM -0800, Jacob Pan wrote:
> > If one device has 10 PASID's pointing to this domain you must flush
> > them all if that is what the HW requires.
> >
> Yes. My point is that, other than PASID 0 which is a given, we must track the 10
> PASIDs to avoid wasted flushes. It also
On Fri, Dec 10, 2021 at 09:50:25AM -0800, Jacob Pan wrote:
> > Tying pasid to an iommu_domain is not a good idea. An iommu_domain
> > represents an I/O address translation table. It could be attached to a
> > device or a PASID on the device.
>
> I don't think we can avoid storing the PASID at the domain
On Fri, Dec 10, 2021 at 07:29:01AM +, Tian, Kevin wrote:
> > 5) It's not possible for the kernel to reliably detect whether it is
> > running on bare metal or not. Yes we talked about heuristics, but
> > that's something I really want to avoid.
>
> How would the hypercall mechanism
On Fri, Dec 10, 2021 at 09:06:24AM +, Jean-Philippe Brucker wrote:
> On Thu, Dec 09, 2021 at 10:14:04AM -0800, Jacob Pan wrote:
> > > This looks like we're just one step away from device drivers needing
> > > multiple PASIDs for kernel DMA so I'm trying to figure out how to evolve
> > > the API
On Fri, Dec 10, 2021 at 07:36:12AM +, Tian, Kevin wrote:
> /*
> * The MSIX mappable capability informs that MSIX data of a BAR can be mmapped
> * which allows direct access to non-MSIX registers which happened to be
> within
> * the same system page.
> *
> * Even though the userspace get
On Thu, Dec 09, 2021 at 03:21:13PM -0800, Jacob Pan wrote:
> For DMA PASID storage, can we store it in the iommu_domain instead of
> iommu_group?
It doesn't make sense to put in the domain, the domain should be only
the page table and not have any relation to how things are matched to
it
Jason
On Thu, Dec 09, 2021 at 09:32:42PM +0100, Thomas Gleixner wrote:
> On Thu, Dec 09 2021 at 12:21, Jason Gunthorpe wrote:
> > On Thu, Dec 09, 2021 at 09:37:06AM +0100, Thomas Gleixner wrote:
> > If we keep the MSI emulation in the hypervisor then MSI != IMS. The
> > MSI code
On Thu, Dec 09, 2021 at 09:37:06AM +0100, Thomas Gleixner wrote:
> On Thu, Dec 09 2021 at 05:23, Kevin Tian wrote:
> >> From: Thomas Gleixner
> >> I don't see anything wrong with that. A subdevice is its own entity and
> >> VFIO can choose the most convenient representation of it to the guest
> >>
On Thu, Dec 09, 2021 at 03:59:57AM +, Tian, Kevin wrote:
> > From: Tian, Kevin
> > Sent: Thursday, December 9, 2021 10:58 AM
> >
> > For ARM it's SMMU's PASID table format. There is no step-2 since PASID
> > is already within the address space covered by the user PASID table.
> >
>
> One cor
On Thu, Dec 09, 2021 at 08:50:04AM +0100, Eric Auger wrote:
> > The kernel API should accept the S1ContextPtr IPA and all the parts of
> > the STE that relate to defining the layout of what the S1Context
> > points to, and that's it.
> Yes that's exactly what is done currently. At config time th
On Wed, Dec 08, 2021 at 01:59:45PM -0800, Jacob Pan wrote:
> Hi Jason,
>
> On Wed, 8 Dec 2021 16:30:22 -0400, Jason Gunthorpe wrote:
>
> > On Wed, Dec 08, 2021 at 11:55:16AM -0800, Jacob Pan wrote:
> > > Hi Jason,
> > >
> > > On Wed, 8 Dec 2021
On Wed, Dec 08, 2021 at 11:55:16AM -0800, Jacob Pan wrote:
> Hi Jason,
>
> On Wed, 8 Dec 2021 09:13:58 -0400, Jason Gunthorpe wrote:
>
> > > This patch utilizes iommu_enable_pasid_dma() to enable DSA to perform
> > > DMA requests with PASID under the same
On Wed, Dec 08, 2021 at 05:20:39PM +, Jean-Philippe Brucker wrote:
> On Wed, Dec 08, 2021 at 08:56:16AM -0400, Jason Gunthorpe wrote:
> > From a progress perspective I would like to start with simple 'page
> > tables in userspace', ie no PASID in this step.
> >
On Wed, Dec 08, 2021 at 08:35:49AM -0700, Dave Jiang wrote:
>
> On 12/8/2021 6:13 AM, Jason Gunthorpe wrote:
> > On Tue, Dec 07, 2021 at 05:47:14AM -0800, Jacob Pan wrote:
> > > In-kernel DMA should be managed by DMA mapping API. The existing kernel
> > > PAS
On Mon, Dec 06, 2021 at 11:39:26PM +0100, Thomas Gleixner wrote:
> Store the properties which are interesting for various places so the MSI
> descriptor fiddling can be removed.
>
> Signed-off-by: Thomas Gleixner
> ---
> V2: Use the setter function
> ---
> drivers/pci/msi/msi.c |8
>
On Mon, Dec 06, 2021 at 11:39:33PM +0100, Thomas Gleixner wrote:
> @@ -209,10 +209,10 @@ static int setup_msi_msg_address(struct
> return -ENODEV;
> }
>
> - entry = first_pci_msi_entry(dev);
> + is_64bit = msi_device_has_property(&dev->dev, MSI_PROP_64BIT);
How about
On Mon, Dec 06, 2021 at 11:39:28PM +0100, Thomas Gleixner wrote:
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> arch/x86/pci/xen.c |6 ++
> 1 file changed, 2 inser
On Mon, Dec 06, 2021 at 11:39:34PM +0100, Thomas Gleixner wrote:
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> arch/powerpc/platforms/pseries/msi.c |4 ++--
> 1 file
On Mon, Dec 06, 2021 at 11:39:29PM +0100, Thomas Gleixner wrote:
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> ---
> arch/x86/kernel/apic/msi.c |5 +
> 1 file ch
On Tue, Dec 07, 2021 at 05:47:13AM -0800, Jacob Pan wrote:
> Between DMA requests with and without PASID (legacy), DMA mapping APIs
> are used indiscriminately on a device. Therefore, we should always match
> the addressing mode of the legacy DMA when enabling kernel PASID.
>
> This patch adds sup
On Tue, Dec 07, 2021 at 05:47:14AM -0800, Jacob Pan wrote:
> In-kernel DMA should be managed by DMA mapping API. The existing kernel
> PASID support is based on the SVA machinery in SVA lib that is intended
> for user process SVA. The binding between a kernel PASID and kernel
> mapping has many fla
On Tue, Dec 07, 2021 at 05:47:10AM -0800, Jacob Pan wrote:
> Modern accelerators such as Intel's Data Streaming Accelerator (DSA) can
> perform DMA requests with PASID, which is a finer granularity than the
> device's requester ID (RID). In fact, work submissions on DSA shared work
> queues require
On Wed, Dec 08, 2021 at 08:33:33AM +0100, Eric Auger wrote:
> Hi Baolu,
>
> On 12/8/21 3:44 AM, Lu Baolu wrote:
> > Hi Eric,
> >
> > On 12/7/21 6:22 PM, Eric Auger wrote:
> >> On 12/6/21 11:48 AM, Joerg Roedel wrote:
> >>> On Wed, Oct 27, 2021 at 12:44:20PM +0200, Eric Auger wrote:
> Signed-o
On Tue, Dec 07, 2021 at 05:25:04AM -0800, Christoph Hellwig wrote:
> On Tue, Dec 07, 2021 at 09:16:27AM -0400, Jason Gunthorpe wrote:
> > Yes, the suggestion was to put everything that 'if' inside a function
> > and then of course a matching undo function.
>
> Can
On Tue, Dec 07, 2021 at 10:57:25AM +0800, Lu Baolu wrote:
> On 12/6/21 11:06 PM, Jason Gunthorpe wrote:
> > On Mon, Dec 06, 2021 at 06:36:27AM -0800, Christoph Hellwig wrote:
> > > I really hate the amount of boilerplate code that having this in each
> > > bus type caus
On Mon, Dec 06, 2021 at 09:28:47PM +0100, Thomas Gleixner wrote:
> That's already the plan in some form, but there's a long way towards
> that. See below.
Okay, then I think we are thinking the same sorts of things, it is
good to see
> Also there will be a question of how many different callbac
On Mon, Dec 06, 2021 at 04:47:58PM +0100, Thomas Gleixner wrote:
> >>- The irqchip callbacks which can be implemented by these top
> >> level domains are going to be restricted.
> >
> > OK - I think it is great that the driver will see a special ops struct
> > that is 'ops for dev
On Mon, Dec 06, 2021 at 06:36:27AM -0800, Christoph Hellwig wrote:
> I really hate the amount of boilerplate code that having this in each
> bus type causes.
+1
I liked the first version of this series better with the code near
really_probe().
Can we go back to that with some device_configure_d
On Mon, Dec 06, 2021 at 06:47:45AM -0800, Christoph Hellwig wrote:
> On Mon, Dec 06, 2021 at 10:45:35AM -0400, Jason Gunthorpe via iommu wrote:
> > IIRC the only thing this function does is touch ACPI and OF stuff?
> > Isn't that firmware?
> >
> > AFAICT amba uses
On Mon, Dec 06, 2021 at 02:35:55PM +0100, Joerg Roedel wrote:
> On Mon, Dec 06, 2021 at 09:58:46AM +0800, Lu Baolu wrote:
> > From the perspective of who is initiating the device to do DMA, device
> > DMA could be divided into the following types:
> >
> > DMA_OWNER_DMA_API: Device DMAs ar
On Mon, Dec 06, 2021 at 06:13:01AM -0800, Christoph Hellwig wrote:
> On Mon, Dec 06, 2021 at 08:53:07AM +0100, Greg Kroah-Hartman wrote:
> > On Mon, Dec 06, 2021 at 09:58:48AM +0800, Lu Baolu wrote:
> > > The platform_dma_configure() is shared between platform and amba bus
> > > drivers. Rename the
On Sun, Dec 05, 2021 at 03:16:40PM +0100, Thomas Gleixner wrote:
> On Sat, Dec 04 2021 at 15:20, Thomas Gleixner wrote:
> > On Fri, Dec 03 2021 at 12:41, Jason Gunthorpe wrote:
> > So I need to break that up in a way which caters for both cases, but
> > does neither create a
On Sat, Dec 04, 2021 at 03:20:36PM +0100, Thomas Gleixner wrote:
> Jason,
>
> On Fri, Dec 03 2021 at 12:41, Jason Gunthorpe wrote:
> > On Fri, Dec 03, 2021 at 04:07:58PM +0100, Thomas Gleixner wrote:
> > Lets do a thought experiment, lets say we forget about the current P
On Mon, Dec 06, 2021 at 09:59:03AM +0800, Lu Baolu wrote:
> @@ -941,48 +944,44 @@ int host1x_client_iommu_attach(struct host1x_client
> *client)
>* not the shared IOMMU domain, don't try to attach it to a different
>* domain. This allows using the IOMMU-backed DMA API.
>*/
On Fri, Dec 03, 2021 at 04:07:58PM +0100, Thomas Gleixner wrote:
> Jason,
>
> On Thu, Dec 02 2021 at 20:37, Jason Gunthorpe wrote:
> > On Thu, Dec 02, 2021 at 11:31:11PM +0100, Thomas Gleixner wrote:
> >> >> Of course we can store them in pci_dev.dev
On Thu, Dec 02, 2021 at 11:31:11PM +0100, Thomas Gleixner wrote:
> The software representation aka struct msi_desc is a different
> story. That's what we are debating.
Okay, I did mean msi_desc storage, so we are talking about the same things
> >> Of course we can store them in pci_dev.dev.msi.d
On Thu, Dec 02, 2021 at 08:25:48PM +0100, Thomas Gleixner wrote:
> Jason,
>
> On Thu, Dec 02 2021 at 09:55, Jason Gunthorpe wrote:
> > On Thu, Dec 02, 2021 at 01:01:42AM +0100, Thomas Gleixner wrote:
> >> On Wed, Dec 01 2021 at 21:21, Thomas Gleixner wrote:
> >&g
On Thu, Dec 02, 2021 at 03:23:38PM +0100, Greg Kroah-Hartman wrote:
> On Thu, Dec 02, 2021 at 09:55:02AM -0400, Jason Gunthorpe wrote:
> > Further, there is no reason why IMS should be reserved exclusively for
> > VFIO! Why shouldn't the cdev be able to use IMS vectors t
On Thu, Dec 02, 2021 at 01:01:42AM +0100, Thomas Gleixner wrote:
> Jason,
>
> On Wed, Dec 01 2021 at 21:21, Thomas Gleixner wrote:
> > On Wed, Dec 01 2021 at 14:14, Jason Gunthorpe wrote:
> > Which in turn is consistent all over the place and does not require any
> >
On Wed, Dec 01, 2021 at 06:35:35PM +0100, Thomas Gleixner wrote:
> On Wed, Dec 01 2021 at 09:00, Jason Gunthorpe wrote:
> > On Wed, Dec 01, 2021 at 11:16:47AM +0100, Thomas Gleixner wrote:
> >> Looking at the device slices as subdevices with their own struct device
> >>
On Wed, Dec 01, 2021 at 11:16:47AM +0100, Thomas Gleixner wrote:
> Looking at the device slices as subdevices with their own struct device
> makes a lot of sense from the conceptual level.
Except IMS is not just for subdevices, it should be usable for any
driver in any case as a general interrupt
On Mon, Nov 29, 2021 at 11:34:39AM +0100, Greg Kroah-Hartman wrote:
> On Sun, Nov 28, 2021 at 07:15:09PM -0400, Jason Gunthorpe wrote:
> > On Sun, Nov 28, 2021 at 09:10:14AM +0100, Greg Kroah-Hartman wrote:
> > > On Sun, Nov 28, 2021 at 10:50:38AM +0800, Lu Baolu wrote:
> &g
On Sun, Nov 28, 2021 at 09:10:14AM +0100, Greg Kroah-Hartman wrote:
> On Sun, Nov 28, 2021 at 10:50:38AM +0800, Lu Baolu wrote:
> > Multiple platform devices may be placed in the same IOMMU group because
> > they cannot be isolated from each other. These devices must either be
> > entirely under ke
difference.. Though it does
highlight there is some asymmetry with how platform and PCI works here
where PCI fills some 'struct msix_entry *'. Many drivers would be
quite happy to just call msi_get_virq() and avoid the extra memory, so
I think the msi_get_virq() version is good.
Reviewed-
On Sat, Nov 27, 2021 at 02:20:09AM +0100, Thomas Gleixner wrote:
> +/**
> + * msi_setup_device_data - Setup MSI device data
> + * @dev: Device for which MSI device data should be set up
> + *
> + * Return: 0 on success, appropriate error code otherwise
> + *
> + * This can be called more than
On Fri, Nov 19, 2021 at 04:06:12PM +0100, Jörg Rödel wrote:
> This change came to be because the iommu_attach/detach_device()
> interface doesn't fit well into a world with iommu-groups. Devices
> within a group are by definition not isolated between each other, so
> they must all be in the same a
On Fri, Nov 19, 2021 at 05:44:35AM +, Tian, Kevin wrote:
> Well, the difference is just literal. I don't know the background
> why the existing iommu_attach_device() users want to do it this
> way. But given the condition in iommu_attach_device() it could
> in theory imply some unknown hard
On Thu, Nov 18, 2021 at 09:12:41AM +0800, Lu Baolu wrote:
> The existing iommu_attach_device() allows only for singleton group. As
> we have added group ownership attribute, we can enforce this interface
> only for kernel domain usage.
Below is what I came up with.
- Replace the file * with a sim
On Thu, Nov 18, 2021 at 02:39:45AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Tuesday, November 16, 2021 9:46 PM
> >
> > On Tue, Nov 16, 2021 at 09:57:30AM +0800, Lu Baolu wrote:
> > > Hi Christoph,
> > >
> > > On 11/15/21 9:
On Wed, Nov 17, 2021 at 01:22:19PM +0800, Lu Baolu wrote:
> Hi Jason,
>
> On 11/16/21 9:46 PM, Jason Gunthorpe wrote:
> > On Tue, Nov 16, 2021 at 09:57:30AM +0800, Lu Baolu wrote:
> > > Hi Christoph,
> > >
> > > On 11/15/21 9:14 PM, Christoph Hellwig wr
On Tue, Nov 16, 2021 at 02:22:01PM -0600, Bjorn Helgaas wrote:
> On Tue, Nov 16, 2021 at 03:24:29PM +0800, Lu Baolu wrote:
> > On 2021/11/16 4:44, Bjorn Helgaas wrote:
> > > On Mon, Nov 15, 2021 at 10:05:45AM +0800, Lu Baolu wrote:
> > > > IOMMU grouping on PCI necessitates that if we lack isolatio
On Tue, Nov 16, 2021 at 09:57:30AM +0800, Lu Baolu wrote:
> Hi Christoph,
>
> On 11/15/21 9:14 PM, Christoph Hellwig wrote:
> > On Mon, Nov 15, 2021 at 10:05:42AM +0800, Lu Baolu wrote:
> > > +enum iommu_dma_owner {
> > > + DMA_OWNER_NONE,
> > > + DMA_OWNER_KERNEL,
> > > + DMA_OWNER_USER,
> > > +}
On Mon, Nov 15, 2021 at 08:58:19PM +, Robin Murphy wrote:
> > The above scenarios are already blocked by the kernel with
> > LOCKDOWN_DEV_MEM - yes there are historical ways to violate kernel
> > integrity, and these days they almost all have mitigation. I would
> > consider any kernel integrit
On Mon, Nov 15, 2021 at 06:35:37PM +, Robin Murphy wrote:
> On 2021-11-15 15:56, Jason Gunthorpe via iommu wrote:
> > On Mon, Nov 15, 2021 at 03:37:18PM +, Robin Murphy wrote:
> >
> > > IOMMUs, and possibly even fewer of them support VFIO, so I'm in full
On Mon, Nov 15, 2021 at 05:54:42PM +, Robin Murphy wrote:
> On 2021-11-15 16:17, Jason Gunthorpe wrote:
> > On Mon, Nov 15, 2021 at 03:14:49PM +, Robin Murphy wrote:
> >
> > > > If userspace has control of device A and can cause A to issue DMA to
> > >
On Mon, Nov 15, 2021 at 03:14:49PM +, Robin Murphy wrote:
> > If userspace has control of device A and can cause A to issue DMA to
> > arbitrary DMA addresses then there are certain PCI topologies where A
> > can now issue peer to peer DMA and manipulate the MMIO registers in
> > device B.
> >
On Mon, Nov 15, 2021 at 03:37:18PM +, Robin Murphy wrote:
> IOMMUs, and possibly even fewer of them support VFIO, so I'm in full
> agreement with Greg and Christoph that this absolutely warrants being scoped
> per-bus. I mean, we literally already have infrastructure to prevent drivers
> bindi
On Mon, Nov 15, 2021 at 07:59:10AM +0100, Greg Kroah-Hartman wrote:
> > @@ -566,6 +567,12 @@ static int really_probe(struct device *dev, struct
> > device_driver *drv)
> > goto done;
> > }
> >
> > + if (!drv->suppress_auto_claim_dma_owner) {
> > + ret = iommu_device_s
On Mon, Nov 15, 2021 at 05:21:26AM -0800, Christoph Hellwig wrote:
> On Mon, Nov 15, 2021 at 10:05:44AM +0800, Lu Baolu wrote:
> > pci_stub allows the admin to block driver binding on a device and make
> > it permanently shared with userspace. Since pci_stub does not do DMA,
> > it is safe.
>
> If
On Mon, Nov 15, 2021 at 05:19:02AM -0800, Christoph Hellwig wrote:
> On Mon, Nov 15, 2021 at 10:05:43AM +0800, Lu Baolu wrote:
> > @@ -566,6 +567,12 @@ static int really_probe(struct device *dev, struct
> > device_driver *drv)
> > goto done;
> > }
> >
> > + if (!drv->suppress_a
On Tue, Nov 02, 2021 at 09:53:29AM +, Liu, Yi L wrote:
> > vfio_uninit_group_dev(&mdev_state->vdev);
> > kfree(mdev_state->pages);
> > kfree(mdev_state->vconfig);
> > kfree(mdev_state);
> >
> > pages/vconfig would logically be in a release function
>
> I see. So the criteria
On Fri, Oct 29, 2021 at 09:47:27AM +, Liu, Yi L wrote:
> Hi Jason,
>
> > From: Jason Gunthorpe
> > Sent: Monday, October 25, 2021 8:53 PM
> >
> > On Mon, Oct 25, 2021 at 06:28:09AM +, Liu, Yi L wrote:
> > >thanks for the guiding. will al
On Thu, Oct 28, 2021 at 02:07:46AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Tuesday, October 26, 2021 7:35 AM
> >
> > On Fri, Oct 22, 2021 at 03:08:06AM +, Tian, Kevin wrote:
> >
> > > > I have no idea what security
On Fri, Oct 29, 2021 at 11:15:31AM +1100, David Gibson wrote:
> > +Device must be bound to an iommufd before the attach operation can
> > +be conducted. The binding operation builds the connection between
> > +the devicefd (opened via device-passthrough framework) and IOMMUFD.
> > +IOMMU-protected
On Tue, Oct 26, 2021 at 12:16:43AM +1100, David Gibson wrote:
> If you attach devices A and B (both in group X) to IOAS 1, then detach
> device A, what happens? Do you detach both devices? Or do you have a
> counter so you have to detach as many time as you attached?
I would refcount it since th
On Fri, Oct 22, 2021 at 03:08:06AM +, Tian, Kevin wrote:
> > I have no idea what security model makes sense for wbinvd, that is the
> > major question you have to answer.
>
> wbinvd flushes the entire cache in local cpu. It's more a performance
> isolation problem but nothing can prevent it o
On Fri, Oct 22, 2021 at 08:49:03AM +0100, Jean-Philippe Brucker wrote:
> On Thu, Oct 21, 2021 at 08:22:23PM -0300, Jason Gunthorpe wrote:
> > On Thu, Oct 21, 2021 at 03:58:02PM +0100, Jean-Philippe Brucker wrote:
> > > On Thu, Oct 21, 2021 at 02:26:00AM +, Tian, Kevin wro
On Mon, Oct 25, 2021 at 06:28:09AM +, Liu, Yi L wrote:
>thanks for the guidance. will also refer to your vfio_group_cdev series.
>
>Need to double confirm here. Not quite following on the kfree. Is
>this kfree to free the vfio_device structure? But now the
>vfio_device pointer i
On Mon, Oct 25, 2021 at 04:14:56PM +1100, David Gibson wrote:
> On Mon, Oct 18, 2021 at 01:32:38PM -0300, Jason Gunthorpe wrote:
> > On Mon, Oct 18, 2021 at 02:57:12PM +1100, David Gibson wrote:
> >
> > > The first user might read this. Subsequent users are likely
On Thu, Oct 21, 2021 at 02:26:00AM +, Tian, Kevin wrote:
> But in reality only Intel integrated GPUs have this special no-snoop
> trick (fixed knowledge), with a dedicated IOMMU which doesn't
> support enforce-snoop format at all. In this case there is no choice
> that the user can further ma
On Thu, Oct 21, 2021 at 03:58:02PM +0100, Jean-Philippe Brucker wrote:
> On Thu, Oct 21, 2021 at 02:26:00AM +, Tian, Kevin wrote:
> > > I'll leave it to Jean to confirm. If only coherent DMA can be used in
> > > the guest on other platforms, suppose VFIO should not blindly set
> > > IOMMU_CACHE
On Tue, Oct 19, 2021 at 10:11:34AM -0700, Jacob Pan wrote:
> Hi Jason,
>
> On Tue, 19 Oct 2021 13:57:47 -0300, Jason Gunthorpe wrote:
>
> > On Tue, Oct 19, 2021 at 09:57:34AM -0700, Jacob Pan wrote:
> > > Hi Jason,
> > >
> > > On Fri, 15 Oct 2021
On Tue, Oct 19, 2021 at 09:57:34AM -0700, Jacob Pan wrote:
> Hi Jason,
>
> On Fri, 15 Oct 2021 08:18:07 -0300, Jason Gunthorpe wrote:
>
> > On Fri, Oct 15, 2021 at 09:18:06AM +, Liu, Yi L wrote:
> >
> > > > Acquire from the xarray is
> > &
On Mon, Oct 18, 2021 at 02:50:54PM +1100, David Gibson wrote:
> Hrm... which makes me think... if we allow this for the common
> kernel-managed case, do we even need to have capacity in the high-level
> interface for reporting IO holes? If the kernel can choose a non-zero
> base, it could just cho
On Mon, Oct 18, 2021 at 02:57:12PM +1100, David Gibson wrote:
> The first user might read this. Subsequent users are likely to just
> copy paste examples from earlier things without fully understanding
> them. In general documenting restrictions somewhere is never as
> effective as making those
On Fri, Oct 15, 2021 at 09:18:06AM +, Liu, Yi L wrote:
> > Acquire from the xarray is
> >rcu_lock()
> >ioas = xa_load()
> >if (ioas)
> > if (down_read_trylock(&ioas->destroying_lock))
>
> all good suggestions, will refine accordingly. Here destroying_lock is a
> rw_semapho
On Fri, Oct 15, 2021 at 01:29:16AM +, Tian, Kevin wrote:
> Hi, Jason,
>
> > From: Jason Gunthorpe
> > Sent: Wednesday, September 29, 2021 8:59 PM
> >
> > On Wed, Sep 29, 2021 at 12:38:35AM +, Tian, Kevin wrote:
> >
> > > /* If set the dri
On Thu, Oct 14, 2021 at 09:11:58AM +, Tian, Kevin wrote:
> But in both cases cache maintenance instructions are available from
> guest p.o.v and no coherency semantics would be violated.
You've described how Intel's solution papers over the problem.
In part wbinvd is defined to restore CPU
On Thu, Oct 14, 2021 at 03:33:21PM +1100, da...@gibson.dropbear.id.au wrote:
> > If the HW can attach multiple non-overlapping IOAS's to the same
> > device then the HW is routing to the correct IOAS by using the address
> > bits. This is not much different from the prior discussion we had
> > whe
On Thu, Oct 14, 2021 at 03:53:33PM +1100, David Gibson wrote:
> > My feeling is that qemu should be dealing with the host != target
> > case, not the kernel.
> >
> > The kernel's job should be to expose the IOMMU HW it has, with all
> > features accessible, to userspace.
>
> See... to me this is
On Mon, Oct 11, 2021 at 09:49:57AM +0100, Jean-Philippe Brucker wrote:
> Seems like we don't need the negotiation part? The host kernel
> communicates available IOVA ranges to userspace including holes (patch
> 17), and userspace can check that the ranges it needs are within the IOVA
> space boun
On Mon, Oct 11, 2021 at 05:02:01PM +1100, David Gibson wrote:
> > This means we cannot define an input that has a magic HW specific
> > value.
>
> I'm not entirely sure what you mean by that.
I mean if you make a general property 'foo' that userspace must
specify correctly then your API isn't ge
On Mon, Oct 11, 2021 at 04:37:38PM +1100, da...@gibson.dropbear.id.au wrote:
> > PASID support will already require that a device can be multi-bound to
> > many IOAS's, couldn't PPC do the same with the windows?
>
> I don't see how that would make sense. The device has no awareness of
> multiple
On Thu, Oct 07, 2021 at 12:11:27PM -0700, Jacob Pan wrote:
> Hi Barry,
>
> On Thu, 7 Oct 2021 18:43:33 +1300, Barry Song <21cn...@gmail.com> wrote:
>
> > > > Security-wise, KVA respects kernel mapping. So permissions are better
> > > > enforced than pass-through and identity mapping.
> > >
> >
On Thu, Oct 07, 2021 at 10:50:10AM -0700, Jacob Pan wrote:
> On platforms that are DMA snooped, this barrier is not needed. But I think
> your point is that once we convert to DMA API, the sync/barrier is covered
> by DMA APIs if !dev_is_dma_coherent(dev). Then all archs are good.
No.. my point i
On Fri, Oct 08, 2021 at 12:54:52AM +1300, Barry Song wrote:
> On Fri, Oct 8, 2021 at 12:32 AM Jason Gunthorpe wrote:
> >
> > On Thu, Oct 07, 2021 at 06:43:33PM +1300, Barry Song wrote:
> >
> > > So do we have a case where devices can directly access the kernel
On Thu, Oct 07, 2021 at 12:23:13PM +1100, David Gibson wrote:
> On Fri, Oct 01, 2021 at 09:43:22AM -0300, Jason Gunthorpe wrote:
> > On Thu, Sep 30, 2021 at 01:10:29PM +1000, David Gibson wrote:
> > > On Wed, Sep 29, 2021 at 09:24:57AM -0300, Jason Gunthorpe wrote:
> > >
On Thu, Oct 07, 2021 at 06:43:33PM +1300, Barry Song wrote:
> So do we have a case where devices can directly access the kernel's data
> structure such as a list/graph/tree with pointers to a kernel virtual address?
> then devices don't need to translate the address of pointers in a structure.
> I
On Mon, Oct 04, 2021 at 09:40:03AM -0700, Jacob Pan wrote:
> Hi Barry,
>
> On Sat, 2 Oct 2021 01:45:59 +1300, Barry Song <21cn...@gmail.com> wrote:
>
> > >
> > > > I assume KVA mode can avoid this iotlb flush as the device is using
> > > > the page table of the kernel and sharing the whole kern
On Mon, Oct 04, 2021 at 03:22:22PM +0200, Christian König wrote:
> That use case is completely unrelated to GUP and when this doesn't work we
> have quite a problem.
My read is that unmap_mapping_range() guarantees the physical TLB
hardware is serialized across all CPUs upon return.
It also guar
On Mon, Oct 04, 2021 at 08:58:35AM +0200, Christian König wrote:
> I'm not following this discussion too closely, but try to look into it from
> time to time.
>
> Am 01.10.21 um 19:45 schrieb Jason Gunthorpe:
> > On Fri, Oct 01, 2021 at 11:01:49AM -0600, Logan Gunthorpe wrot
On Sat, Oct 02, 2021 at 02:21:38PM +1000, da...@gibson.dropbear.id.au wrote:
> > > No. qemu needs to supply *both* the 32-bit and 64-bit range to its
> > > guest, and therefore needs to request both from the host.
> >
> > As I understood your remarks each IOAS can only be one of the formats
> > a
On Fri, Oct 01, 2021 at 04:22:28PM -0600, Logan Gunthorpe wrote:
> > It would close this issue, however synchronize_rcu() is very slow
> > (think > 1second) in some cases and thus cannot be inserted here.
>
> It shouldn't be *that* slow, at least not the vast majority of the
> time... it seems a
On Fri, Oct 01, 2021 at 02:13:14PM -0600, Logan Gunthorpe wrote:
>
>
> On 2021-10-01 11:45 a.m., Jason Gunthorpe wrote:
> >> Before the invalidation, an active flag is cleared to ensure no new
> >> mappings can be created while the unmap is proceeding.
> >> u