Re: [vfio-users] about vfio interrupt performance

2019-06-17 Thread Alex Williamson
On Mon, 17 Jun 2019 16:00:42 +0800
James  wrote:

> Hi experts,
> 
> Sorry to disturb you.
> 
> I couldn't find any concrete data about vfio interrupt performance in
> the community, so I'm mailing you directly.
> 
> We have a PCIe device on an x86 platform, with no VM in our
> environment.  I plan to replace the kernel-side device driver with the
> vfio framework and reimplement it in user space, after enabling
> vfio/vfio_pci/vfio_iommu_type1 in the kernel.  The original intention
> is simply to remove the dependency on the kernel, so that the
> application which needs to access our PCIe device becomes a pure
> userspace application and can run on other Linux distributions (no
> custom kernel driver needed).

Wouldn't getting your driver upstream also solve some of these issues?

> Our PCIe device has the following characteristics:
> 
> 1. It generates a very large number of interrupts while working.
> 
> 2. It also places high demands on interrupt processing speed.

There will be more interrupt latency for a vfio userspace driver; the
interrupt is received on the host and signaled to the user via an
eventfd.  Hardware accelerators like APICv and Posted Interrupts are
not available outside of a VM context.  Whether the overhead is
acceptable is something you'll need to determine.  It may be beneficial
to switch to polling mode at high interrupt rates, as network devices
tend to do.  DPDK is a userspace driver that makes use of vfio for
device access, but typically uses polling rather than interrupt-driven
data transfer AIUI.
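
To make that concrete, here is a rough sketch of how a userspace driver
typically hooks a single MSI/MSI-X vector up to an eventfd with
VFIO_DEVICE_SET_IRQS and then blocks on it.  The device fd is assumed
to come from VFIO_GROUP_GET_DEVICE_FD, the vector index is arbitrary,
and error handling is trimmed:

#include <linux/vfio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Attach an eventfd to vector 0 of the device; the host kernel signals
 * this eventfd each time the physical interrupt fires. */
static int attach_irq_eventfd(int device_fd)
{
    char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
    struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;
    int efd = eventfd(0, EFD_CLOEXEC);

    memset(buf, 0, sizeof(buf));
    irq_set->argsz = sizeof(buf);
    irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
    irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;  /* or VFIO_PCI_MSI_IRQ_INDEX */
    irq_set->start = 0;                        /* first vector */
    irq_set->count = 1;                        /* one eventfd follows */
    memcpy(irq_set->data, &efd, sizeof(int32_t));

    if (ioctl(device_fd, VFIO_DEVICE_SET_IRQS, irq_set) < 0) {
        perror("VFIO_DEVICE_SET_IRQS");
        close(efd);
        return -1;
    }
    return efd;
}

/* Each read() drains the eventfd counter; every wakeup has already paid
 * for a host interrupt plus an eventfd notification, which is where the
 * extra latency relative to an in-kernel handler comes from. */
static void irq_loop(int efd)
{
    uint64_t pending;

    while (read(efd, &pending, sizeof(pending)) == sizeof(pending)) {
        /* service 'pending' accumulated interrupts here; at sustained
         * high rates, mask the vector and poll the device instead */
    }
}

In practice you'd poll() on the eventfd along with any other fds, and
switch to polling device registers directly when the rate gets high.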
 
> 3. It needs to access almost all of its BAR space after mapping.

This is not an issue.
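
For what it's worth, mapping a full BAR through vfio-pci is just a
region-info query followed by an mmap at the offset the kernel reports.
A rough sketch, with BAR0 chosen arbitrarily and the device fd setup
assumed:

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stddef.h>

/* Map an entire BAR of a vfio-pci device into the process. */
static void *map_bar(int device_fd, unsigned int bar, size_t *len)
{
    struct vfio_region_info info = {
        .argsz = sizeof(info),
        .index = VFIO_PCI_BAR0_REGION_INDEX + bar,
    };

    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
        return MAP_FAILED;
    if (!(info.flags & VFIO_REGION_INFO_FLAG_MMAP))
        return MAP_FAILED;  /* fall back to pread/pwrite on device_fd */

    *len = info.size;
    /* The region's file offset tells the kernel which BAR to map. */
    return mmap(NULL, info.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                device_fd, (off_t)info.offset);
}

Regions that can't be mmap'd are still accessible with pread()/pwrite()
on the device fd at the same offset.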

> I'd like to check with you: compared with the previous kernel-side
> device driver, will interrupt processing speed drop significantly when
> a huge number of interrupts arrive in a short time?
> 
> What do you think of this attempt?  Is it worthwhile to move the
> driver to userspace in this kind of situation (no VM, huge interrupt
> counts, etc.)?

The description implies you're trying to avoid open-sourcing your
device driver by moving it to userspace.  While I'd rather run an
untrusted driver as a vfio userspace driver than in the kernel, this
potentially makes it inaccessible to users whose hardware, or whose
platform's lack of isolation, prevents them from making use of your
device.

> BTW, I found reports of some intermittent issues when using vfio in
> the community, such as:
> 
> 1. Some devices' extended configuration space occasionally has
> problems when accessed.
> 
> 2. Accessing devices that are in the same IOMMU group at the same
> time occasionally triggers issues.
> 
> Are these issues due to IOMMU hardware limitations, or can we work
> around them somehow for now?

The questions aren't worded clearly enough for me to understand the
issues you're trying to describe.  Some portions of config space are
emulated or virtualized by the vfio kernel driver, some by QEMU.  Since
you won't be using QEMU, you don't have the latter.  The QEMU machine
type and VM PCI topology also determine the availability of extended
config space, but these are VM-specific issues.  The IOMMU grouping is
definitely an issue.  IOMMU groups cannot be shared, so usage of the
device might be restricted to physical configurations where IOMMU
isolation is provided.  The ACS override patch that some people here
use is not and will not be upstreamed, so it should not be considered a
requirement for the availability of your device.
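
As an aside, it's easy to check up front what shares an IOMMU group
with your device; everything listed must be bound to vfio-pci (or
pci-stub, or left unbound) before the group can be used.  A quick
sketch, where the PCI address is only an example:

#include <dirent.h>
#include <stdio.h>

/* Print every device in the same IOMMU group as the given PCI device. */
static void list_group_peers(const char *pci_addr)  /* e.g. "0000:02:00.0" */
{
    char path[256];
    struct dirent *de;
    DIR *dir;

    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/iommu_group/devices", pci_addr);

    dir = opendir(path);
    if (!dir) {
        perror(path);  /* no group: IOMMU disabled or unsupported */
        return;
    }
    while ((de = readdir(dir)))
        if (de->d_name[0] != '.')
            printf("%s\n", de->d_name);
    closedir(dir);
}

Thanks,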

Alex



Re: [vfio-users] [PATCH] Passthrough of one GPU on a PC with 2 identical GPUs installed

2019-06-17 Thread Alex Williamson
On Sat, 15 Jun 2019 14:30:49 +0100
James  wrote:

> Hi,
> 
> Please find attached a kernel patch.  It is based on a very old patch
> from 2014 that never made it into the kernel:
> https://lkml.org/lkml/2014/10/20/295.  I am not sure who else I should
> be adding to the Signed-off-by section.
> 
> I have modified and tested it so that it works against kernel 5.1.10.
> 
> Summary below:
> 
> 
>     PCI: Introduce new device binding path using pci_dev.driver_override
> 
>      In order to bind PCI slots to specific drivers use:
> pci=driver[:xx:xx.x]=foo,driver[:xx:xx.x]=bar,...
> 
>      The main use case for this is in relation to qemu passthrough
>      of a pci device using IOMMU and vfio-pci.
>      Example:
>      The host has two identical devices. E.g. 2 AMD Vega GPUs,
>      The user wishes to passthrough only one GPU.
>      This new feature allows the user to select which GPU to passthrough.
> 
>      Signed-off-by: James Courtier-Dutton 

vfio-users is not a development list; patches should not be posted here
with the intent of upstream inclusion, especially patches outside of
the vfio driver itself.

Upstream patches should be posted inline, not as attachments.  Messages
with this sort of attachment are likely to get rejected by upstream
lists.

This is simply a reposting of a patch that was thoroughly discussed
upstream in the link you provided.  None of the issues raised in that
thread have been addressed in this reposting.

While I like the idea of a driver_override command line option, there
are existing mechanisms to deal with two identical cards with different
driver binding requirements.  I'd suggest using a modprobe.d install
command to perform the driver_override on the device instance intended
for use with vfio-pci, for example:

install amdgpu echo vfio-pci > /sys/bus/pci/devices/0000:02:00.0/driver_override; modprobe --ignore-install amdgpu
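
An install line like this goes in a file under /etc/modprobe.d/ (e.g.
something like vfio.conf).  modprobe hands the rest of the line to the
shell, so the driver_override is written just before the real amdgpu
module is loaded, and --ignore-install keeps that inner modprobe from
recursing into the same install rule.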

Thanks,
Alex



[vfio-users] about vfio interrupt performance

2019-06-17 Thread James
Hi experts,

Sorry to disturb you.

I couldn't find any concrete data about vfio interrupt performance in
the community, so I'm mailing you directly.

We have a PCIe device on an x86 platform, with no VM in our
environment.  I plan to replace the kernel-side device driver with the
vfio framework and reimplement it in user space, after enabling
vfio/vfio_pci/vfio_iommu_type1 in the kernel.  The original intention
is simply to remove the dependency on the kernel, so that the
application which needs to access our PCIe device becomes a pure
userspace application and can run on other Linux distributions (no
custom kernel driver needed).



Our PCIe device has the following characteristics:

1. It generates a very large number of interrupts while working.

2. It also places high demands on interrupt processing speed.

3. It needs to access almost all of its BAR space after mapping.



I'd like to check with you: compared with the previous kernel-side
device driver, will interrupt processing speed drop significantly when
a huge number of interrupts arrive in a short time?

What do you think of this attempt?  Is it worthwhile to move the
driver to userspace in this kind of situation (no VM, huge interrupt
counts, etc.)?

BTW, I found reports of some intermittent issues when using vfio in
the community, such as:

1. Some devices' extended configuration space occasionally has
problems when accessed.

2. Accessing devices that are in the same IOMMU group at the same
time occasionally triggers issues.

Are these issues due to IOMMU hardware limitations, or can we work
around them somehow for now?





Many thanks for your time!

Best regards,
James