On Mon, Jun 07, 2010 at 12:01:04PM -0700, Tom Lyon wrote:
On Sunday 06 June 2010 02:54:51 am Michael S. Tsirkin wrote:
On Thu, Jun 03, 2010 at 02:41:38PM -0700, Tom Lyon wrote:
OK, in the interest of making progress, I am about to embark on the
following:
1. Create a
On Sunday 06 June 2010 02:54:51 am Michael S. Tsirkin wrote:
On Thu, Jun 03, 2010 at 02:41:38PM -0700, Tom Lyon wrote:
OK, in the interest of making progress, I am about to embark on the
following:
1. Create a user-iommu-domain driver - opening it will give a new empty
domain.
On Thu, Jun 03, 2010 at 02:41:38PM -0700, Tom Lyon wrote:
OK, in the interest of making progress, I am about to embark on the following:
1. Create a user-iommu-domain driver - opening it will give a new empty
domain.
Ultimately this can also populate sysfs with the state of its world,
On 06/02/2010 07:53 PM, Chris Wright wrote:
* Avi Kivity (a...@redhat.com) wrote:
The interface would only work for clients which support it: kvm,
vhost, and iommu/devices with restartable dma.
BTW, there is no such thing as restartable dma. There is a provision in
new specs (read:
OK, in the interest of making progress, I am about to embark on the following:
1. Create a user-iommu-domain driver - opening it will give a new empty domain.
Ultimately this can also populate sysfs with the state of its world, which
would also be a good addition to the base iommu stuff.
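A rough userspace sketch of that model (the /dev/uiommu node, the ioctl name
and its encoding are illustrative assumptions, not part of the proposal; only
"opening gives a new empty domain, then bind devices to it" is):

    #include <err.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>

    /* Placeholder ioctl -- the real name and number were never settled. */
    #define VFIO_DOMAIN_BIND  _IOW(';', 100, int)

    int main(void)
    {
            int iommu = open("/dev/uiommu", O_RDWR);  /* new, empty domain */
            int dev1  = open("/dev/vfio0", O_RDWR);
            int dev2  = open("/dev/vfio1", O_RDWR);

            if (iommu < 0 || dev1 < 0 || dev2 < 0)
                    err(1, "open");

            /* Both devices share one domain; mappings are programmed once. */
            if (ioctl(dev1, VFIO_DOMAIN_BIND, iommu) < 0 ||
                ioctl(dev2, VFIO_DOMAIN_BIND, iommu) < 0)
                    err(1, "bind device to iommu domain");

            return 0;
    }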
On Tue, Jun 01, 2010 at 12:55:32PM +0300, Michael S. Tsirkin wrote:
There seems to be some misunderstanding. The userspace interface
proposed forces a separate domain per device and forces userspace to
repeat iommu programming for each device. We are better off sharing a
domain between
On Tue, Jun 01, 2010 at 03:41:55PM +0300, Avi Kivity wrote:
On 06/01/2010 01:46 PM, Michael S. Tsirkin wrote:
Main difference is that vhost works fine with unlocked
memory, paging it in on demand. iommu needs to unmap
memory when it is swapped out or relocated.
So you'd just take the memory
On 06/02/2010 12:45 PM, Joerg Roedel wrote:
On Tue, Jun 01, 2010 at 03:41:55PM +0300, Avi Kivity wrote:
On 06/01/2010 01:46 PM, Michael S. Tsirkin wrote:
Main difference is that vhost works fine with unlocked
memory, paging it in on demand. iommu needs to unmap
memory when it is
On 06/02/2010 12:42 PM, Joerg Roedel wrote:
On Tue, Jun 01, 2010 at 12:55:32PM +0300, Michael S. Tsirkin wrote:
There seems to be some misunderstanding. The userspace interface
proposed forces a separate domain per device and forces userspace to
repeat iommu programming for each device.
On Tue, Jun 01, 2010 at 09:59:40PM -0700, Tom Lyon wrote:
This is just what I was thinking. But rather than a get/set, just use two
fds.
ioctl(vfio_fd1, VFIO_SET_DOMAIN, vfio_fd2);
This may fail if there are really 2 different IOMMUs, so user code must be
prepared for failure. In
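Spelled out with the failure case handled (the ioctl encoding and the fallback
are assumptions; only the two-fd calling convention and the "may fail across
different IOMMUs" rule come from the proposal above):

    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Placeholder encoding; only the two-fd calling convention matters. */
    #define VFIO_SET_DOMAIN  _IOW(';', 101, int)

    /* Try to move vfio_fd1 into vfio_fd2's IOMMU domain; 0 on success. */
    static int share_domain(int vfio_fd1, int vfio_fd2)
    {
            if (ioctl(vfio_fd1, VFIO_SET_DOMAIN, vfio_fd2) < 0) {
                    /* e.g. the two devices sit behind different IOMMUs:
                     * keep separate domains and map into each of them. */
                    perror("VFIO_SET_DOMAIN");
                    return -1;
            }
            return 0;
    }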
On Wed, Jun 02, 2010 at 11:42:01AM +0200, Joerg Roedel wrote:
On Tue, Jun 01, 2010 at 12:55:32PM +0300, Michael S. Tsirkin wrote:
There seems to be some misunderstanding. The userspace interface
proposed forces a separate domain per device and forces userspace to
repeat iommu programming
On Wed, Jun 02, 2010 at 12:49:28PM +0300, Avi Kivity wrote:
On 06/02/2010 12:45 PM, Joerg Roedel wrote:
IOMMU mapped memory can not be swapped out because we can't do demand
paging on io-page-faults with current devices. We have to pin _all_
userspace memory that is mapped into an IOMMU
On Wed, Jun 02, 2010 at 12:04:04PM +0200, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 12:49:28PM +0300, Avi Kivity wrote:
On 06/02/2010 12:45 PM, Joerg Roedel wrote:
IOMMU mapped memory can not be swapped out because we can't do demand
paging on io-page-faults with current devices. We have
On Wed, Jun 02, 2010 at 12:53:12PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 11:42:01AM +0200, Joerg Roedel wrote:
IMO a separate iommu-userspace driver is a nightmare for a userspace
interface. It is just too complicated to use.
One advantage would be that we can reuse the
On Wed, Jun 02, 2010 at 11:45:27AM +0200, Joerg Roedel wrote:
On Tue, Jun 01, 2010 at 03:41:55PM +0300, Avi Kivity wrote:
On 06/01/2010 01:46 PM, Michael S. Tsirkin wrote:
Main difference is that vhost works fine with unlocked
memory, paging it in on demand. iommu needs to unmap
memory
On Wed, Jun 02, 2010 at 12:19:40PM +0200, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 12:53:12PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 11:42:01AM +0200, Joerg Roedel wrote:
IMO a separate iommu-userspace driver is a nightmare for a userspace
interface. It is just too
On Wed, Jun 02, 2010 at 01:15:34PM +0300, Michael S. Tsirkin wrote:
One of the issues I see with the current patch is that
it uses the mlock rlimit to do this pinning. So this wastes the rlimit
for an app that did mlockall already, and also consumes
this resource transparently, so an app might
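Concretely, pages pinned for the IOMMU and pages the app locks itself would be
charged against the same RLIMIT_MEMLOCK budget, so an application that already
did mlockall() can see mappings fail for no visible reason. A small sketch of
the accounting from the app's point of view (the helper, and the assumption
that the pinning is charged like mlock, are illustrative):

    #include <stddef.h>
    #include <sys/resource.h>

    /* Would pinning 'bytes' more exceed RLIMIT_MEMLOCK, given that
     * 'already_locked' bytes are locked (e.g. via mlockall())?      */
    static int would_exceed_memlock(size_t already_locked, size_t bytes)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_MEMLOCK, &rl) < 0)
                    return 1;                       /* be conservative */
            if (rl.rlim_cur == RLIM_INFINITY)
                    return 0;
            return already_locked + bytes > rl.rlim_cur;
    }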
On Wed, Jun 02, 2010 at 01:21:44PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 12:19:40PM +0200, Joerg Roedel wrote:
It can. The worst thing that can happen is an io-page-fault.
Devices might not be able to recover from this.
With the userspace interface a process can create
On Wed, Jun 02, 2010 at 12:35:16PM +0200, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 01:21:44PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 12:19:40PM +0200, Joerg Roedel wrote:
It can. The worst thing that can happen is an io-page-fault.
Devices might not be able to
On Wed, Jun 02, 2010 at 12:19:40PM +0200, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 12:53:12PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 11:42:01AM +0200, Joerg Roedel wrote:
IMO a separate iommu-userspace driver is a nightmare for a userspace
interface. It is just too
On Wed, Jun 02, 2010 at 01:38:28PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 12:35:16PM +0200, Joerg Roedel wrote:
With the userspace interface a process can create io-page-faults
anyway if it wants. We can't protect ourselves from this.
We could fail all operations until an iommu
On 06/02/2010 01:04 PM, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 12:49:28PM +0300, Avi Kivity wrote:
On 06/02/2010 12:45 PM, Joerg Roedel wrote:
IOMMU mapped memory can not be swapped out because we can't do demand
paging on io-page-faults with current devices. We have to pin
On Wed, Jun 02, 2010 at 01:12:25PM +0200, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 01:38:28PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 12:35:16PM +0200, Joerg Roedel wrote:
With the userspace interface a process can create io-page-faults
anyway if it wants. We can't
On Wed, Jun 02, 2010 at 02:21:00PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 01:12:25PM +0200, Joerg Roedel wrote:
Even if it is bound to a domain the userspace driver could program the
device to do dma to unmapped regions causing io-page-faults. The kernel
can't do anything
On 06/02/2010 03:19 PM, Joerg Roedel wrote:
Yes. So you do:
iommu = open
ioctl(dev1, BIND, iommu)
ioctl(dev2, BIND, iommu)
ioctl(dev3, BIND, iommu)
ioctl(dev4, BIND, iommu)
No need to add a SHARE ioctl.
In my proposal this looks like:
dev1 = open();
ioctl(dev2, SHARE, dev1);
On Wed, Jun 02, 2010 at 02:19:28PM +0200, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 02:21:00PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 01:12:25PM +0200, Joerg Roedel wrote:
Even if it is bound to a domain the userspace driver could program the
device to do dma to
On Wed, Jun 02, 2010 at 03:25:11PM +0300, Avi Kivity wrote:
On 06/02/2010 03:19 PM, Joerg Roedel wrote:
Yes. So you do:
iommu = open
ioctl(dev1, BIND, iommu)
ioctl(dev2, BIND, iommu)
ioctl(dev3, BIND, iommu)
ioctl(dev4, BIND, iommu)
No need to add a SHARE ioctl.
In my proposal
On Wed, Jun 02, 2010 at 03:34:17PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 02:19:28PM +0200, Joerg Roedel wrote:
you normally need the device mapped to start DMA.
SHARE makes this bug more likely as you allow
switching domains: mmap could be done before switching.
We need to
On 06/02/2010 03:50 PM, Joerg Roedel wrote:
The problem with this is that it is asymmetric: dev1 is treated
differently from dev[234]. It's an unintuitive API.
It's by far more unintuitive that a process needs to explicitly bind a
device to an iommu domain before it can do anything with
On Wed, Jun 02, 2010 at 02:50:50PM +0200, Joerg Roedel wrote:
On Wed, Jun 02, 2010 at 03:25:11PM +0300, Avi Kivity wrote:
On 06/02/2010 03:19 PM, Joerg Roedel wrote:
Yes. So you do:
iommu = open
ioctl(dev1, BIND, iommu)
ioctl(dev2, BIND, iommu)
ioctl(dev3, BIND, iommu)
On Wed, Jun 02, 2010 at 04:06:21PM +0300, Avi Kivity wrote:
On 06/02/2010 03:50 PM, Joerg Roedel wrote:
It's by far more unintuitive that a process needs to explicitly bind a
device to an iommu domain before it can do anything with it.
I don't really care about the iommu domain. It's a side
* Avi Kivity (a...@redhat.com) wrote:
The interface would only work for clients which support it: kvm,
vhost, and iommu/devices with restartable dma.
BTW, there is no such thing as restartable dma. There is a provision in
new specs (read: no real hardware) that allows a device to request pages
* Joerg Roedel (j...@8bytes.org) wrote:
On Wed, Jun 02, 2010 at 02:21:00PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 01:12:25PM +0200, Joerg Roedel wrote:
Even if it is bound to a domain the userspace driver could program the
device to do dma to unmapped regions causing
On Wednesday 02 June 2010 10:46:15 am Chris Wright wrote:
* Joerg Roedel (j...@8bytes.org) wrote:
On Wed, Jun 02, 2010 at 02:21:00PM +0300, Michael S. Tsirkin wrote:
On Wed, Jun 02, 2010 at 01:12:25PM +0200, Joerg Roedel wrote:
Even if it is bound to a domain the userspace driver
On Wed, Jun 02, 2010 at 11:09:17AM -0700, Tom Lyon wrote:
On Wednesday 02 June 2010 10:46:15 am Chris Wright wrote:
This is not a hot path, so saving an ioctl shouldn't be a consideration.
The only important consideration is a good API. I may have lost context here,
but the SHARE API is
On 05/31/2010 08:10 PM, Michael S. Tsirkin wrote:
On Mon, May 31, 2010 at 02:50:29PM +0300, Avi Kivity wrote:
On 05/30/2010 05:53 PM, Michael S. Tsirkin wrote:
So what I suggested is failing any kind of access until iommu
is assigned.
So, the kernel driver must be aware of
On Tue, Jun 01, 2010 at 11:10:45AM +0300, Avi Kivity wrote:
On 05/31/2010 08:10 PM, Michael S. Tsirkin wrote:
On Mon, May 31, 2010 at 02:50:29PM +0300, Avi Kivity wrote:
On 05/30/2010 05:53 PM, Michael S. Tsirkin wrote:
So what I suggested is failing any kind of access until iommu
On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
It can't program the iommu.
What the patch proposes is that userspace tells vfio about the needed
mappings, and vfio programs the iommu.
There seems to be some misunderstanding. The userspace interface
proposed forces a separate
On Tue, Jun 01, 2010 at 01:28:48PM +0300, Avi Kivity wrote:
On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
It can't program the iommu.
What the patch proposes is that userspace tells vfio about the needed
mappings, and vfio programs the iommu.
There seems to be some
On 06/01/2010 01:46 PM, Michael S. Tsirkin wrote:
Since vfio would be the only driver, there would be no duplication. But
a separate object for the iommu mapping is a good thing. Perhaps we can
even share it with vhost (without actually using the mmu, since vhost is
software only).
On Tuesday 01 June 2010 03:46:51 am Michael S. Tsirkin wrote:
On Tue, Jun 01, 2010 at 01:28:48PM +0300, Avi Kivity wrote:
On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
It can't program the iommu.
What the patch proposes is that userspace tells vfio about the needed
mappings, and
On Monday 31 May 2010 10:17:35 am Alan Cox wrote:
Does look like it needs a locking audit, some memory and error checks
reviewing and some further review of the ioctl security and
overflows/trusted values.
Yes. Thanks for the detailed look.
Rather a nice way of attacking the user space PCI
On 06/02/2010 12:26 AM, Tom Lyon wrote:
I'm not really opposed to multiple devices per domain, but let me point out
how I ended up here. First, the driver has two ways of mapping pages, one
based on the iommu api and one based on the dma_map_sg api. With the latter,
the system already
On Tue, 2010-06-01 at 13:28 +0300, Avi Kivity wrote:
On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
It can't program the iommu.
What the patch proposes is that userspace tells vfio about the needed
mappings, and vfio programs the iommu.
There seems to be some
On Tuesday 01 June 2010 09:29:47 pm Alex Williamson wrote:
On Tue, 2010-06-01 at 13:28 +0300, Avi Kivity wrote:
On 06/01/2010 12:55 PM, Michael S. Tsirkin wrote:
It can't program the iommu.
What the patch proposes is that userspace tells vfio about the needed
mappings, and vfio
On 06/02/2010 07:59 AM, Tom Lyon wrote:
This is just what I was thinking. But rather than a get/set, just use two fds.
ioctl(vfio_fd1, VFIO_SET_DOMAIN, vfio_fd2);
This may fail if there are really 2 different IOMMUs, so user code must be
prepared for failure. In addition, this is
* Avi Kivity (a...@redhat.com) wrote:
On 06/02/2010 12:26 AM, Tom Lyon wrote:
I'm not really opposed to multiple devices per domain, but let me point out
how I ended up here. First, the driver has two ways of mapping pages, one
based on the iommu api and one based on the dma_map_sg api.
On 06/02/2010 08:29 AM, Chris Wright wrote:
* Avi Kivity (a...@redhat.com) wrote:
On 06/02/2010 12:26 AM, Tom Lyon wrote:
I'm not really opposed to multiple devices per domain, but let me point out
how I ended up here. First, the driver has two ways of mapping pages, one
based on
On 05/30/2010 05:53 PM, Michael S. Tsirkin wrote:
So what I suggested is failing any kind of access until iommu
is assigned.
So, the kernel driver must be aware of the iommu. In which case it may
as well program it.
--
I have a truly marvellous patch that fixes the bug which this
+/*
+ * Map usr buffer at specific IO virtual address
+ */
+static int vfio_dma_map_iova(
+ mlp = kzalloc(sizeof *mlp, GFP_KERNEL);
Not good at that point. I think you need to allocate it first, error if it
can't be allocated, and then do the work and free it on error?
+ mlp =
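The pattern being asked for looks roughly like this (a sketch of the
allocate-first idiom only; the struct, the helper and their contents are
stand-ins, not the patch's actual code):

    #include <linux/slab.h>
    #include <linux/errno.h>

    struct dma_map_entry {                  /* stand-in bookkeeping record */
            unsigned long   vaddr;
            unsigned long   daddr;
            unsigned long   npages;
    };

    static int do_the_mapping(struct dma_map_entry *mlp)
    {
            return 0;       /* stand-in for pinning pages + programming IOMMU */
    }

    static int dma_map_iova_sketch(unsigned long vaddr, unsigned long daddr,
                                   unsigned long npages)
    {
            struct dma_map_entry *mlp;
            int ret;

            mlp = kzalloc(sizeof(*mlp), GFP_KERNEL);
            if (!mlp)
                    return -ENOMEM;         /* fail before doing any work */

            mlp->vaddr = vaddr;
            mlp->daddr = daddr;
            mlp->npages = npages;

            ret = do_the_mapping(mlp);
            if (ret) {
                    kfree(mlp);             /* undo the allocation on error */
                    return ret;
            }
            return 0;
    }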
On Mon, May 31, 2010 at 02:50:29PM +0300, Avi Kivity wrote:
On 05/30/2010 05:53 PM, Michael S. Tsirkin wrote:
So what I suggested is failing any kind of access until iommu
is assigned.
So, the kernel driver must be aware of the iommu. In which case it may
as well program it.
It's a
On 05/30/2010 03:19 PM, Michael S. Tsirkin wrote:
On Fri, May 28, 2010 at 04:07:38PM -0700, Tom Lyon wrote:
The VFIO driver is used to allow privileged AND non-privileged processes to
implement user-level device drivers for any well-behaved PCI, PCI-X, and PCIe
devices.
On Sun, May 30, 2010 at 03:27:05PM +0300, Avi Kivity wrote:
On 05/30/2010 03:19 PM, Michael S. Tsirkin wrote:
On Fri, May 28, 2010 at 04:07:38PM -0700, Tom Lyon wrote:
The VFIO driver is used to allow privileged AND non-privileged processes
to
implement user-level device drivers for any
On 05/29/2010 02:07 AM, Tom Lyon wrote:
The VFIO driver is used to allow privileged AND non-privileged processes to
implement user-level device drivers for any well-behaved PCI, PCI-X, and PCIe
devices.
+
+Why is this interesting? Some applications, especially in the high performance
On Sun, May 30, 2010 at 04:01:53PM +0300, Avi Kivity wrote:
On 05/30/2010 03:49 PM, Michael S. Tsirkin wrote:
On Sun, May 30, 2010 at 03:27:05PM +0300, Avi Kivity wrote:
On 05/30/2010 03:19 PM, Michael S. Tsirkin wrote:
On Fri, May 28, 2010 at 04:07:38PM -0700, Tom Lyon wrote:
On Saturday 29 May 2010, Tom Lyon wrote:
+/*
+ * Structure for DMA mapping of user buffers
+ * vaddr, dmaaddr, and size must all be page aligned
+ * buffer may only be larger than 1 page if (a) there is
+ * an iommu in the system, or (b) buffer is part of a huge page
+ */
+struct
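From that comment, the ioctl argument is presumably along these lines (a
reconstruction for orientation only, not the patch's literal definition; the
flags field and the exact types are guesses):

    #include <linux/types.h>

    struct vfio_dma_map {
            __u64   vaddr;          /* user virtual address, page aligned  */
            __u64   dmaaddr;        /* IO virtual address, page aligned    */
            __u64   size;           /* bytes, page aligned; more than one
                                       page requires an IOMMU or a buffer
                                       backed by a huge page               */
            __u64   flags;          /* e.g. DMA direction -- a guess       */
    };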
On 05/29/2010 02:55 PM, Arnd Bergmann wrote:
On Saturday 29 May 2010, Tom Lyon wrote:
+/*
+ * Structure for DMA mapping of user buffers
+ * vaddr, dmaaddr, and size must all be page aligned
+ * buffer may only be larger than 1 page if (a) there is
+ * an iommu in the system, or (b) buffer
Hi,
On Fri, 28 May 2010 16:07:38 -0700 Tom Lyon wrote:
Missing diffstat -p1 -w 70:
 Documentation/vfio.txt |  176
 MAINTAINERS            |    7
 drivers/Kconfig        |    2
 drivers/Makefile       |    1
 drivers/vfio/Kconfig   |    9
On Fri, 28 May 2010 16:07:38 -0700 Tom Lyon wrote:
diff -uprN linux-2.6.34/Documentation/vfio.txt vfio-linux-2.6.34/Documentation/vfio.txt
--- linux-2.6.34/Documentation/vfio.txt 1969-12-31 16:00:00.0 -0800
+++ vfio-linux-2.6.34/Documentation/vfio.txt 2010-05-28