Re: [vfio-users] Single function PCIe NIC card having multiple queue. one queue for one VM. Can i use VFIO here?

2016-09-23 Thread Alex Williamson
On Fri, 23 Sep 2016 14:42:58 +0530
Sasikumar Natarajan  wrote:

> Hi all,
> 
> [diagram: https://www.redhat.com/archives/vfio-users/2016-September/msg00097.html]
> 
> We are trying to share a PCIe NIC card among multiple VMs. The PCI NIC card
> has one physical function (no SR-IOV support).
> 
> The NIC has multiple queues (each with a TX and RX descriptor ring). Packets
> are placed in a queue by a packet classifier (e.g. by MAC address), so each
> queue can be used as a network interface, and therefore the host can have
> multiple virtual network interfaces. We plan to assign each queue to a VM
> (emulating PCI virtual functions, but our HW doesn't support virtual
> functions). The packet buffers for the corresponding queue will reside in the
> VM's memory.
> 
> The DMA engine in the PCI card needs to push/pull packet data directly
> to/from the VM's memory (for this we need IOMMU support).
> 
> How can VFIO be used to achieve this?
> And if VFIO is not possible, what other methods can be used here?
> Are there any other methods without an intermediate CPU memory copy?

You show a data path through the IOMMU, but without SR-IOV the physical
device only uses a single requester ID, and therefore the same context
entry and IOMMU page tables are used regardless of the VM for which a DMA
is routed.  Currently vfio requires an IOMMU to provide isolation and
translation with per-device and per-VM granularity.  There's work underway
upstream to create vfio "mediated" devices, where a software component in
the host fills in the gaps versus a device that supports full SR-IOV,
including device isolation, but this latter component is still provided
via the device through things like per-process GTT (Graphics Translation
Tables) in the case of vGPUs.  You could create a mediated device that
doesn't do DMA, or that polices all device programming, but what's the
point versus something like virtio if the performance impact is more than
trivial?  For DMA, isolation and translation need to be provided at a
different level.  Thanks,

Alex
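
The requester-ID granularity described above shows up directly in the VFIO
userspace API: DMA mappings are programmed on the container that holds the
device's IOMMU group, never on anything finer such as an individual queue.
A minimal C sketch of the documented type1 flow follows; the group number
26, the device address 0000:06:0d.0 and the 1 MB buffer are placeholders
rather than values from this thread, and error handling is mostly omitted.

/* Minimal sketch of the documented vfio type1 flow.  The group number and
 * PCI address are placeholders for the real IOMMU group and device. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
    struct vfio_group_status status = { .argsz = sizeof(status) };
    struct vfio_iommu_type1_dma_map map = { .argsz = sizeof(map) };
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);
    int device;

    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;            /* not every group member is bound to vfio */

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* One mapping covers DMA from every queue of the device, because the
     * IOMMU can only key on the device's single requester ID. */
    map.vaddr = (unsigned long)mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    map.size  = 1 << 20;
    map.iova  = 0;
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

    device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
    return device < 0;
}

The container/group is the finest granularity this API exposes, so mapping
buffers for "VM1's queue" and "VM2's queue" through the same container only
builds one combined IOVA space for the device, with no isolation between
the two VMs.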

___
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users


Re: [vfio-users] issues about igd and usb passthrough when windows7 safe mode interface occurs

2016-09-23 Thread Alex Williamson
On Fri, 23 Sep 2016 19:25:54 +0800 (CST)
fulaiyang  wrote:

> Hi, Alex
> Recently I bought another platform with an i3-6100U to pass through the
> GPU. The platform has only DP or HDMI output. I was very happy to succeed
> in passing through the IGD and the USB controller. However, when the
> Windows 7 safe mode screen appears, the display is split into three parts
> and the keyboard does not work. After the Windows 7 desktop comes up, the
> GPU and USB devices run perfectly. Did you run into this problem?
> 
> 
> environment :
> Intel Core i3-6100U @ 2.30GHz
> Intel Corporation Sky Lake Integrated Graphics 
> Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller
> DP output
> Ubuntu 16.04 LTS DESKTOP
> Host kernel 4.7.2 x86_64 GNU/Linux
> qemu 2.7.0
> 

Several people have reported problems with this specific processor; I
think you're the first to report garbled output rather than no output.
I don't have a system with this CPU to test, so my only guess is to see
whether it behaves more like previous-generation IGD:

index bec694c..1813e37 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1008,6 +1008,10 @@ static int igd_gen(VFIOPCIDevice *vdev)
         return 8; /* Broxton */
     }
 
+    if (vdev->device_id == 0x1906) {
+        return 6; /* i3-6100U */
+    }
+
     switch (vdev->device_id & 0xff00) {
     /* Old, untested, unavailable, unknown */
     case 0x0000:
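
If you want to sanity-check which IGD you have before rebuilding, the value
the quirk compares against is just the PCI device ID. A tiny standalone
check, assuming the IGD sits at the usual 0000:00:02.0 address (an
assumption, not something stated in this thread):

/* Read the IGD's PCI device ID from sysfs; 0x1906 is the ID the quirk
 * above keys on.  Assumes the IGD is at the usual 0000:00:02.0 address. */
#include <stdio.h>

int main(void)
{
    unsigned int id = 0;
    FILE *f = fopen("/sys/bus/pci/devices/0000:00:02.0/device", "r");

    if (!f || fscanf(f, "%x", &id) != 1) {
        perror("read device id");
        return 1;
    }
    fclose(f);
    printf("IGD device ID: 0x%04x\n", id);
    return id != 0x1906;   /* exit 0 only if it matches the quirk above */
}

The same value appears as the second half of the [8086:xxxx] pair that
lspci -nn prints for the graphics device.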

___
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users


Re: [vfio-users] Single function PCIe NIC card having multiple queue. one queue for one VM. Can i use VFIO here?

2016-09-23 Thread Andre Richter
Is this a commercial off-the-shelf NIC?
Intel, for example, uses multiple queues to speed up virtualization of the
NIC by sorting packets into distinct queues per VM. IIRC you can't use vfio
here, but you can utilize it via their driver in the host.

VMDQ is the buzzword you need to google for this.

Cheers,
Andre
Sasikumar Natarajan wrote on Fri, 23 Sep 2016 at 11:15:

> Hi all,
>
> [ASCII diagram: Virtual Machine 1 and Virtual Machine 2, each with its own
> memory, sit above the IOMMU on the host. Below the IOMMU is the
> single-function PCIe NIC: its PCIe DMA engine serves one RX/TX queue set per
> VM ("Queue for VM1", "Queues for VM2"), fed by a MAC-based Queue Classifier
> behind the MAC and Port that connect to the network fabric.]
>
>
>
>
> We are trying to share a PCIe NIC card among multiple VMs. The PCI NIC card
> has one physical function (no SR-IOV support).
>
> The NIC has multiple queues (each with a TX and RX descriptor ring). Packets
> are placed in a queue by a packet classifier (e.g. by MAC address), so each
> queue can be used as a network interface, and therefore the host can have
> multiple virtual network interfaces. We plan to assign each queue to a VM
> (emulating PCI virtual functions, but our HW doesn't support virtual
> functions). The packet buffers for the corresponding queue will reside in the
> VM's memory.
>
> The DMA engine in the PCI card needs to push/pull packet data directly
> to/from the VM's memory (for this we need IOMMU support).
>
> How can VFIO be used to achieve this?
> And if VFIO is not possible, what other methods can be used here?
> Are there any other methods without an intermediate CPU memory copy?
>
>
>
>
> Regards,
>
> Sasikumar Natarajan
>
___
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users

Re: [vfio-users] Lost link when pass through rtl8168 to guest

2016-09-23 Thread Alex Williamson
On Fri, 23 Sep 2016 14:52:46 +0800
Wei Xu  wrote:

> On 2016-09-21 22:50, Alex Williamson wrote:
> > On Wed, 21 Sep 2016 14:04:20 +0800
> > Wei Xu  wrote:
> >  
> >> On 2016-09-21 13:41, Wei Xu wrote:  
> >>   > On 2016-09-21 12:31, Alex Williamson wrote:  
> >>   >> On Wed, 21 Sep 2016 11:52:31 +0800
> >>   >> Wei Xu  wrote:
> >>   >>  
> >>   >>> On 2016-09-21 02:59, Nick Sarnie wrote:  
> >>    Hi Wei,
> >>   
> >>    My system is a desktop, so it must just be a general Gigabyte BIOS bug.
> >>    I submitted a help ticket about this issue and just gave a brief
> >>    explanation and then sent Alex's explanation. Hopefully it will be
> >>    escalated correctly.  
> >>   >>>
> >>   >>> Thanks for your feedback, I'm also using a Gigabyte board. I have
> >>   >>> checked the firmware update history and updated my firmware to the
> >>   >>> latest one, which was released in March; it looks like it will be a
> >>   >>> long wait to get feedback on this issue from them.
> >>   >>>
> >>   >>> Alex,
> >>   >>> It's a hard time for us to do nothing but wait. The reason I use my
> >>   >>> desktop is that I have a COM console on it, so it's quite convenient
> >>   >>> for debugging the kernel via kgdb, and I want to keep my Realtek NIC
> >>   >>> for ssh access from my notebook. Is there any way to work around
> >>   >>> this and bypass just the wireless NIC, as a temporary experiment?
> >>   >>>
> >>   >>> I've recently been trying the VirtIO DMAR patch with a vIOMMU in the
> >>   >>> guest, which needs a PCIe unit passed through from the host, and one
> >>   >>> more virtio NIC for the guest per the feedback. Maybe I can pass
> >>   >>> through a device from another group instead of a NIC?  
> >>   >>
> >>   >> Sure, but Skylake platforms are notoriously bad for their lack of
> >>   >> device isolation; even things like USB controllers and audio devices
> >>   >> are now part of multifunction packages that do not expose isolation
> >>   >> through ACS.  If you can't resolve the IOMMU grouping otherwise, your
> >>   >> choices are as I told Nick in the other thread:
> >>   >>
> >>   >>    "Your choices are to run an unsupported (and unsupportable)
> >>   >>    configuration using the ACS override patch, get your hardware vendor
> >>   >>    to fix their platform, or upgrade to better hardware with better
> >>   >>    isolation characteristics."
> >>   >>
> >>   >> It's unfortunate that Intel provides VT-d on consumer platforms without
> >>   >> sufficient device isolation to really make it usable, but that's often
> >>   >> the state of things.  The workstation and server class platforms,
> >>   >> supporting Xeon E5 or High End Desktop Processors, provide the
> >>   >> necessary isolation.  Thanks,  
> >>   >
> >>   > Yes, fortunately I finally got it solved. I tried adding the 'r8169'
> >>   > driver to the kernel group whitelist behind 'pci-stub', recompiled
> >>   > and updated the kernel first, and the VM booted up successfully, but
> >>   > a map-page-to-iova error for the Realtek NIC during DMA crashed the
> >>   > system later. It looks like it was caused by the group dependency; I
> >>   > remembered the vfio doc says the group is the minimum isolation unit.  
> >
> > This approach is just a bad idea.
> >  
> >>   >
> >>   > Then I found there are 3 PCI bridges on my board; 2 of them share a
> >>   > group, and the other is a separate group. After plugging the iwl wlan
> >>   > NIC into that one, everything works well.  
> >>
> >> Just noticed a topology change on my system; the PCI bridges look
> >> different than before after I changed the slot for my wlan NIC. I used
> >> to think I had plugged it into 00:1d.0, but it was connected to the Sky
> >> Lake PCIe controller. Does this mean there are hidden PCI bridges for
> >> PCI enumeration in the system, and is this allowed?
> >>
> >> Before:
> >> 00:1c.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root
> >> Port #5 (rev f1) (prog-if 00 [Normal decode])
> >> 00:1c.7 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root
> >> Port #8 (rev f1) (prog-if 00 [Normal decode])  wlan nic
> >> 00:1d.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root
> >> Port #9 (rev f1) (prog-if 00 [Normal decode])
> >>
> >> Now:
> >> 00:01.0 PCI bridge: Intel Corporation Sky Lake PCIe Controller (x16)
> >> (rev 07) (prog-if 00 [Normal decode])  wlan nic
> >> 00:1c.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root
> >> Port #5 (rev f1) (prog-if 00 [Normal decode])
> >> 00:1d.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root
> >> Port #9 (rev f1) (prog-if 00 [Normal decode])  
> >
> > There are generally two sources of PCIe root ports on Intel systems,
> > the processor itself and the PCH (Platform Controller Hub).  Look at a
> > block diagram for a modern system and you'll see this.  Typically for a
> > client processor (i3/i5/i7) there is no isolation between or
> > downstrea
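
Since much of the back-and-forth above hinges on IOMMU groups being the
minimum isolation unit, it can help to print the grouping that sysfs
reports on a given board. A minimal C sketch, assuming only the standard
/sys/kernel/iommu_groups layout:

/* List IOMMU group membership via sysfs, the same structure vfio uses to
 * define the minimum assignable unit. */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    const char *base = "/sys/kernel/iommu_groups";
    DIR *groups = opendir(base);
    struct dirent *g;

    if (!groups) {
        perror(base);               /* no IOMMU, or the IOMMU is disabled */
        return 1;
    }
    while ((g = readdir(groups))) {
        char path[512];
        DIR *devs;
        struct dirent *d;

        if (g->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "%s/%s/devices", base, g->d_name);
        devs = opendir(path);
        if (!devs)
            continue;
        printf("IOMMU group %s:\n", g->d_name);
        while ((d = readdir(devs))) {
            if (d->d_name[0] != '.')
                printf("  %s\n", d->d_name);  /* PCI address, e.g. 0000:00:1c.0 */
        }
        closedir(devs);
    }
    closedir(groups);
    return 0;
}

Each group printed is what vfio can assign as a unit; if the Realtek NIC and
the wireless NIC land in the same group, they can only move to the guest (or
stay on the host) together unless the platform exposes ACS, which is why
changing the slot for the wlan NIC changed the picture above.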

[vfio-users] Single function PCIe NIC card having multiple queue. one queue for one VM. Can i use VFIO here?

2016-09-23 Thread Sasikumar Natarajan
Hi all,

[ASCII diagram: Virtual Machine 1 and Virtual Machine 2, each with its own
memory, sit above the IOMMU on the host. Below the IOMMU is the
single-function PCIe NIC: its PCIe DMA engine serves one RX/TX queue set per
VM ("Queue for VM1", "Queues for VM2"), fed by a MAC-based Queue Classifier
behind the MAC and Port that connect to the network fabric.]




We are trying to share a PCIe NIC card among multiple VMs. The PCI NIC card
has one physical function (no SR-IOV support).

The NIC has multiple queues (each with a TX and RX descriptor ring). Packets
are placed in a queue by a packet classifier (e.g. by MAC address), so each
queue can be used as a network interface, and therefore the host can have
multiple virtual network interfaces. We plan to assign each queue to a VM
(emulating PCI virtual functions, but our HW doesn't support virtual
functions). The packet buffers for the corresponding queue will reside in the
VM's memory.

The DMA engine in the PCI card needs to push/pull packet data directly
to/from the VM's memory (for this we need IOMMU support).

How can VFIO be used to achieve this?
And if VFIO is not possible, what other methods can be used here?
Are there any other methods without an intermediate CPU memory copy?




Regards,

Sasikumar Natarajan
___
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users