Re: [vfio-users] intel skylake passthrough

2016-01-22 Thread globalgorrilla
Hi Nicolas,

Would you be able to send the full IOMMU group trees for both NUCs? It's
nice to have for the record. I was looking at using a Broadwell or
Haswell one myself.

Skylake was already out of real consideration for me, as I was assuming
it would have the same lack of isolation you're noting.

Slightly off topic ... I was leaning towards the Haswell NUC, though,
since apparently only its HD 4600 IGD supports 3 external monitors via an
MST hub ... do you know anything about that vis-a-vis the Broadwell and
Skylake NUCs?

Seems like the NUCs are going backwards as the Intel packages go
forward ... :(

On Fri, Jan 22, 2016, at 07:26 AM, nicolas prochazka wrote:
> Hello, I just need a confirmation: with my Broadwell NUC I can do
> passthrough of my sound card without problem, because of a good
> IOMMU group, but it seems impossible with my new Skylake NUC; group 7
> is not viable ... please, tell me that a solution is possible.
> Regards, Nicolas
>
> Broadwell
>
> [nicolas-hard-365e3600-7279-11e3-91e4-b8aeed728e68]lspci 00:00.0 Host
> bridge: Intel Corporation Broadwell-U Host Bridge -OPI (rev 09)
> 00:02.0 VGA compatible controller: Intel Corporation Broadwell-U
> Integrated Graphics (rev 09) 00:03.0 Audio device: Intel Corporation
> Broadwell-U Audio Controller (rev 09) 00:14.0 USB controller: Intel
> Corporation Wildcat Point-LP USB xHCI Controller (rev 03) 00:16.0
> Communication controller: Intel Corporation Wildcat Point-LP MEI
> Controller #1 (rev 03) 00:19.0 Ethernet controller: Intel Corporation
> Ethernet Connection (3) I218-V (rev 03)
> 00:1b.0 Audio device: Intel Corporation Wildcat Point-LP High
>Definition Audio Controller (rev 03)
> 00:1c.0 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express
>Root Port #1 (rev e3)
> 00:1c.3 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express
>Root Port #4 (rev e3)
> 00:1d.0 USB controller: Intel Corporation Wildcat Point-LP USB EHCI
>Controller (rev 03)
> 00:1f.0 ISA bridge: Intel Corporation Wildcat Point-LP LPC Controller
>(rev 03)
> 00:1f.2 SATA controller: Intel Corporation Wildcat Point-LP SATA
>Controller [AHCI Mode] (rev 03)
> 00:1f.3 SMBus: Intel Corporation Wildcat Point-LP SMBus Controller
>(rev 03) 02:00.0 Network controller: Intel Corporation Wireless
>7265 (rev 59)
>
> audio
> /sys/kernel/iommu_groups/3
> /sys/kernel/iommu_groups/3/devices
> /sys/kernel/iommu_groups/3/devices/0000:00:16.0
>
> Now, with my new nuc skylake  : [nuc-skylake-9787be61-53d5-1246-cb75-
> b8aeed7d8885]lspci 00:00.0 Host bridge: Intel Corporation Sky Lake
> Host Bridge/DRAM Registers (rev 09) 00:02.0 VGA compatible controller:
> Intel Corporation Sky Lake Integrated Graphics (rev 0a) 00:14.0 USB
> controller: Intel Corporation Device 9d2f (rev 21) 00:14.2 Signal
> processing controller: Intel Corporation Device 9d31 (rev 21) 00:16.0
> Communication controller: Intel Corporation Device 9d3a (rev 21)
> 00:17.0 SATA controller: Intel Corporation Device 9d03 (rev 21)
> 00:1c.0 PCI bridge: Intel Corporation Device 9d14 (rev f1)
> 00:1d.0 PCI bridge: Intel Corporation Device 9d18 (rev f1)
> 00:1e.0 Signal processing controller: Intel Corporation Device 9d27
>(rev 21)
> 00:1e.6 SD Host controller: Intel Corporation Device 9d2d (rev 21)
> 00:1f.0 ISA bridge: Intel Corporation Device 9d48 (rev 21)
> 00:1f.2 Memory controller: Intel Corporation Device 9d21 (rev 21)
> 00:1f.3 Audio device: Intel Corporation Device 9d70 (rev 21)
> 00:1f.4 SMBus: Intel Corporation Device 9d23 (rev 21)
> 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection I219-
>V (rev 21) 01:00.0 Network controller: Intel Corporation Wireless
>8260 (rev 3a) 02:00.0 Non-Volatile memory controller: Samsung
>Electronics Co Ltd Device a802 (rev 01)
>
> /sys/kernel/iommu_groups/7
> /sys/kernel/iommu_groups/7/devices
> /sys/kernel/iommu_groups/7/devices/0000:00:1f.0
> /sys/kernel/iommu_groups/7/devices/0000:00:1f.2
> /sys/kernel/iommu_groups/7/devices/0000:00:1f.3
> /sys/kernel/iommu_groups/7/devices/0000:00:1f.4
> /sys/kernel/iommu_groups/7/devices/0000:00:1f.6
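For the record, the whole tree can be dumped in one go. A minimal sketch, assuming a POSIX shell and sysfs mounted at /sys (the optional base-path argument is only there so the function can be exercised against a test directory):

```shell
#!/bin/sh
# Print every IOMMU group and the PCI addresses of its member devices.
# Pass a different base path only if your sysfs lives elsewhere.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for group in "$base"/*; do
        [ -d "$group" ] || continue
        printf 'IOMMU group %s:\n' "${group##*/}"
        for dev in "$group"/devices/*; do
            [ -e "$dev" ] || continue
            # ${dev##*/} is the full address, e.g. 0000:00:16.0;
            # feed it to `lspci -nns` for a human-readable description.
            printf '  %s\n' "${dev##*/}"
        done
    done
}
```

Piping each address through `lspci -nns` gives device names like those in the listings above.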
>
>
___
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users


Re: [vfio-users] cpu usage in guest != cpu usage in host, even with exclusive pinning

2016-02-14 Thread globalgorrilla
Still the same with 4.5-rc4 and AUR qemu-git.

On 23 Jan 2016, at 18:55, Dan Ziemba wrote:

> On Tue, 2016-01-19 at 16:52 +0100, Friedrich Oslage wrote:
>> I did some more testing and it turns out to be a kernel regression,
>> introduced somewhere around Linux 4.3.
>>
>> Short description of the error: 100% cpu usage while using 3d in a vm
>> with vfio
>>
>> My last known good kernel is 4.2.8, which gives me about 20% cpu
>> usage
>> while playing Diablo III. My first known bad kernel is 4.3.3, which
>> gives me 100% cpu usage with every game, even while idling in the
>> menu.
>>
>> On 01/17/2016 10:29 AM, Friedrich Oslage wrote:
>>> my host system is linux-4.4.0 with qemu-2.5.0 and a 4-core i7.
>>> Linux is
>>> booted with isolcpus=1-3,5-7 to reserve 3 cores + threads for the
>>> Windows 10 VM.
>>> The VM's 3 cores(2 threads each) are pinned to the respective
>>> physical
>>> core/thread. The iothread is pinned to 1-3,5-7.
>
> Interesting, I've also noticed similar behavior since upgrading to
> kernel 4.4 and qemu 2.5.0.  I am running 4 cores / 8 threads for the VM
> and pinned to individual host threads by libvirt, but I didn't boot
> with the isolcpus arg. Emulator threads are pinned to the remaining 4
> host threads.
>
> Host is an i7-5930K, so 6 cores.  Guest is running Win 8.1.
>
> Just sitting at the pause screen in Fallout: New Vegas, I have a single
> CPU running at 100% according to both the windows task manager and the
> host.  The host cpu thread that is at 100% is one of the ones with a
> vcpu pinned to it.  In the guest, it does appear that the game is the
> thing using all the CPU cycles, and the GPU is basically idle at this
> time.
>
> When I was previously running kernel 4.1.15 and qemu 2.4, New Vegas was
> never demanding enough to max out any one cpu no matter what I was
> doing with the game.  Same goes for more demanding games such as GTA V.
>
> I upgraded both kernel and qemu at the same time, so I can't say which
> caused it.  I haven't really noticed much difference in performance. If
> anything, it might be slightly better.  I seemed to notice fewer frame
> rate drops and less stuttering while playing GTA V after the upgrade.
>
> Dan
>


Re: [vfio-users] posted interrupts Xeon E5 v3, v4, Xeon D

2016-04-12 Thread globalgorrilla

Perhaps this is the answer:

According to their datasheets (volume 2), both Xeon D and E5 v4 have
posted_interrupts_support in vtd[0:1]_cap; E5 v3 does not.
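If the datasheet isn't handy, Linux also exposes the raw VT-d capability register in sysfs. A sketch, with the caveat that the sysfs path and the bit position (bit 59, Posted Interrupt Support) are my reading of the VT-d spec, so verify against your platform:

```shell
#!/bin/sh
# vtd_pi_supported HEX: print 1 if bit 59 (Posted Interrupt Support)
# is set in a VT-d capability register value, else 0.
# HEX is the contents of /sys/class/iommu/dmar*/intel-iommu/cap,
# without a 0x prefix.
vtd_pi_supported() {
    echo $(( (0x$1 >> 59) & 1 ))
}

# Typical use (assumes at least one DMAR unit is present):
#   for c in /sys/class/iommu/dmar*/intel-iommu/cap; do
#       vtd_pi_supported "$(cat "$c")"
#   done
```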


On 12 Apr 2016, at 21:51, globalgorri...@fastmail.fm wrote:


Hello,

Are there any differences between posted interrupt support in the Xeon 
E5 v3, v4, and Xeon D? The latter two are Broadwell, but posted 
(external) interrupts are touted with E5 v4 while (I think) VT-d 
posted interrupts are already available in E5 v3.


Thanks



[vfio-users] posted interrupts Xeon E5 v3, v4, Xeon D

2016-04-12 Thread globalgorrilla

Hello,

Are there any differences between posted interrupt support in the Xeon 
E5 v3, v4, and Xeon D? The latter two are Broadwell, but posted 
(external) interrupts are touted with E5 v4 while (I think) VT-d posted 
interrupts are already available in E5 v3.


Thanks



Re: [vfio-users] fitlet I211 PCI passthrough

2016-03-26 Thread globalgorrilla
Even though I use SR-IOV interfaces passed through, I still find
openvswitch very useful.


In your scenario I might just use openvswitch and bind all the NICs.

Perhaps use DPDK if you're primarily doing networking with the device.

Also CoreOS might be nice if again you're building a dedicated 
router/firewall.


On 26 Mar 2016, at 4:28, YAEGASHI Takeshi wrote:


Hello,

I have a fitlet http://www.fit-pc.com/web/products/fitlet/ with 4 GbE
(I211) ports and wanted to pass through some of them to KVM guests, i.e.
assign one I211 to each guest.  I've tried various configurations
with libvirt/kvm but no luck so far.

After reading
http://vfio.blogspot.jp/2014/08/iommu-groups-inside-and-out.html
I'm now sure that the problem is in the IOMMU groups.  Actually, all of
the I211s and AMD's PCI bridge [1022:156b] share the same iommu_group 2.

$ lspci -tv
-[0000:00]-+-00.0  Advanced Micro Devices, Inc. [AMD] Device 1566
   +-00.2  Advanced Micro Devices, Inc. [AMD] Device 1567
   +-01.0  Advanced Micro Devices, Inc. [AMD/ATI] Mullins 
[Radeon R6 Graphics]
   +-01.1  Advanced Micro Devices, Inc. [AMD/ATI] Kabini 
HDMI/DP Audio

   +-02.0  Advanced Micro Devices, Inc. [AMD] Device 156b
   +-02.2-[01]----00.0  Intel Corporation I211 Gigabit Network Connection
   +-02.3-[02]----00.0  Intel Corporation I211 Gigabit Network Connection
   +-02.4-[03]----00.0  Intel Corporation I211 Gigabit Network Connection
   +-02.5-[04]----00.0  Intel Corporation I211 Gigabit Network Connection

   +-08.0  Advanced Micro Devices, Inc. [AMD] Device 1537
   +-10.0  Advanced Micro Devices, Inc. [AMD] FCH USB XHCI 
Controller
   +-11.0  Advanced Micro Devices, Inc. [AMD] FCH SATA 
Controller [AHCI mode]
   +-12.0  Advanced Micro Devices, Inc. [AMD] FCH USB EHCI 
Controller
   +-13.0  Advanced Micro Devices, Inc. [AMD] FCH USB EHCI 
Controller
   +-14.0  Advanced Micro Devices, Inc. [AMD] FCH SMBus 
Controller
   +-14.2  Advanced Micro Devices, Inc. [AMD] FCH Azalia 
Controller

   +-14.3  Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge
   +-14.7  Advanced Micro Devices, Inc. [AMD] FCH SD Flash 
Controller

   +-18.0  Advanced Micro Devices, Inc. [AMD] Device 1580
   +-18.1  Advanced Micro Devices, Inc. [AMD] Device 1581
   +-18.2  Advanced Micro Devices, Inc. [AMD] Device 1582
   +-18.3  Advanced Micro Devices, Inc. [AMD] Device 1583
   +-18.4  Advanced Micro Devices, Inc. [AMD] Device 1584
   \-18.5  Advanced Micro Devices, Inc. [AMD] Device 1585

$ find /sys/kernel/iommu_groups -type l
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.2
/sys/kernel/iommu_groups/2/devices/0000:00:02.3
/sys/kernel/iommu_groups/2/devices/0000:00:02.4
/sys/kernel/iommu_groups/2/devices/0000:00:02.5
/sys/kernel/iommu_groups/2/devices/0000:01:00.0
/sys/kernel/iommu_groups/2/devices/0000:02:00.0
/sys/kernel/iommu_groups/2/devices/0000:03:00.0
/sys/kernel/iommu_groups/2/devices/0000:04:00.0
/sys/kernel/iommu_groups/3/devices/0000:00:08.0
/sys/kernel/iommu_groups/4/devices/0000:00:10.0
/sys/kernel/iommu_groups/5/devices/0000:00:11.0
/sys/kernel/iommu_groups/6/devices/0000:00:12.0
/sys/kernel/iommu_groups/7/devices/0000:00:13.0
/sys/kernel/iommu_groups/8/devices/0000:00:14.0
/sys/kernel/iommu_groups/8/devices/0000:00:14.2
/sys/kernel/iommu_groups/8/devices/0000:00:14.3
/sys/kernel/iommu_groups/8/devices/0000:00:14.7
/sys/kernel/iommu_groups/9/devices/0000:00:18.0
/sys/kernel/iommu_groups/9/devices/0000:00:18.1
/sys/kernel/iommu_groups/9/devices/0000:00:18.2
/sys/kernel/iommu_groups/9/devices/0000:00:18.3
/sys/kernel/iommu_groups/9/devices/0000:00:18.4
/sys/kernel/iommu_groups/9/devices/0000:00:18.5

I'm running Ubuntu 14.04 with xenial kernel 4.4.0-13, libvirt 1.2.2,
qemu 2.0.0.  Using vfio-pci simply failed:

qemu-system-x86_64: -device 
vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x3: vfio: error, 
group 2 is not viable, please ensure all devices within the 
iommu_group are bound to their vfio bus driver.
qemu-system-x86_64: -device 
vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x3: vfio: failed to 
get group 2
qemu-system-x86_64: -device 
vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x3: Device 
initialization failed.
qemu-system-x86_64: -device 
vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x3: Device 
'vfio-pci' could not be initialized
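The error's own suggestion (bind everything in the group to vfio-pci) can be scripted via the driver_override mechanism, available since kernel 3.16. A sketch, not tested on the fitlet; the optional base-path argument exists only so it can be dry-run outside /sys:

```shell
#!/bin/sh
# bind_to_vfio ADDR [BASE]: unbind a PCI device from its current driver
# and hand it to vfio-pci. ADDR is a full address like 0000:01:00.0.
bind_to_vfio() {
    addr="$1"
    base="${2:-/sys/bus/pci}"
    dev="$base/devices/$addr"
    # Detach whatever driver currently owns the device, if any.
    [ -e "$dev/driver" ] && echo "$addr" > "$dev/driver/unbind"
    # Tell the PCI core that only vfio-pci may claim this device...
    echo vfio-pci > "$dev/driver_override"
    # ...then ask it to re-run driver matching.
    echo "$addr" > "$base/drivers_probe"
}
```

Each endpoint in group 2 would need the same treatment; PCI bridges can stay on their host driver.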


Using legacy KVM device assignment also failed, with the unclear
reason "Invalid argument":

qemu-system-x86_64: -device 
pci-assign,configfd=24,host=01:00.0,id=hostdev0,bus=pci.2,addr=0x2: 
Failed to assign device "hostdev0" : Invalid argument
qemu-system-x86_64: -device 
pci-assign,configfd=24,host=01:00.0,id=hostdev0,bus=pci.2,addr=0x2: 

[vfio-users] BIOS memory mapping or chipset issue with MMIO prevents IGD pass-through in legacy mode?

2016-07-27 Thread globalgorrilla
Hello,

Do you reckon this is what some bugzillas find to be a motherboard
memory-mapping issue, or a chipset issue?

Any ideas on working around it besides opting the IGD out of MMIO?

Doing that prevents it from being used with vfio-pci and qemu.

---

i7-4790k, ASUS Z97-WS (BIOS 2403), kernel 4.7, qemu v2.7.0-rc0

% lspci -vvn -s 00:02
00:02.0 0300: 8086:0412 (rev 06) (prog-if 00 [VGA controller])
Subsystem: 1043:8534
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
[remainder of the lspci output truncated in the archive]


[vfio-users] Bug exposed with Libvirt, Arch Linux and VFs with kernel 4.9+

2017-01-30 Thread globalgorrilla
Heads-up if you're using libvirt and Arch Linux and networking VFs and 
kernel 4.9+.


https://bugs.archlinux.org/task/52778



[vfio-users] radeon vfio bus information issue?

2016-10-24 Thread globalgorrilla

Hi,

Pass-through works fine with the same Linux 4.8 VM with nouveau and a
Quadro K2200.


Passing through both an R7 260X and an R9 290, I get the same kernel
oops:


IP: [] drm_pcie_get_speed_cap_mask+0x39/0xf0 [drm]

More here:
http://pastebin.com/Waysyk2e

Both the AMD cards work fine if I pass them through to a Windows VM.

I suppose this is likely an issue with radeon, then.

However, the oops makes me think perhaps it's breaking on unexpected
PCIe information?


Is it possible that some PCIe information radeon is looking for is
missing? What PCIe information is passed through by vfio
(lspci from both host and VM is also in the pastebin)?


Anybody using radeon with passthrough AMD cards and seen this?

Thanks!



Re: [vfio-users] radeon vfio bus information issue?

2016-10-24 Thread globalgorrilla

On 24 Oct 2016, at 10:23, Alex Williamson wrote:


On Mon, 24 Oct 2016 10:20:08 -0700
globalgorri...@fastmail.fm wrote:

> Hi,
>
> Pass-through works fine with the same linux 4.8 VM and nouveau and a
> Quadro K2200.
>
> Passing through both a R7 260X and a R9 290 I get the same kernel
> oops:
>
> IP: [] drm_pcie_get_speed_cap_mask+0x39/0xf0 [drm]
>
> More here:
> http://pastebin.com/Waysyk2e
>
> Both the AMD cards work fine if I pass them through to a Windows VM.
>
> I'm supposing this might likely be then an issue with Radeon.
>
> However the oops makes me think perhaps it's breaking on unexpected
> PCIE information?
>
> Is it possible there is some missing information that radeon is
> looking for? What PCIE information is passed through by vfio
> (lspci from both host and vm also in the pastebin)?
>
> Anybody using radeon with passthrough AMD cards and seen this?

Radeon with a Linux guest is actually one of the few cases where you
need to be running a Q35 machine with the GPU placed behind a PCIe
downstream/root port.  The code blindly assumes that an upstream PCIe
bridge is present and tries to poke registers on it.


Bingo.

I'll have to clone and try with that.

Alex, do you know if it's enough to simply pass the device through
with Q35? Or is a custom topology needed?


Thanks!



Re: [vfio-users] radeon vfio bus information issue?

2016-10-24 Thread globalgorrilla

And done.

I just changed the machine type to the most recent Q35 available to me:
pc-q35-2.8


The default PCIE layout worked. Up and running with Wayland on the R9 
290.


Thank you Alex! I hope someone else gets to enjoy a similar setup too!

On 24 Oct 2016, at 10:43, Alex Williamson wrote:


On Mon, 24 Oct 2016 10:38:08 -0700
globalgorri...@fastmail.fm wrote:

> On 24 Oct 2016, at 10:23, Alex Williamson wrote:
>
>> On Mon, 24 Oct 2016 10:20:08 -0700
>> globalgorri...@fastmail.fm wrote:
>>
>>> Hi,
>>>
>>> Pass-through works fine with the same linux 4.8 VM and nouveau
>>> and a Quadro K2200.
>>>
>>> Passing through both a R7 260X and a R9 290 I get the same kernel
>>> oops:
>>>
>>> IP: [] drm_pcie_get_speed_cap_mask+0x39/0xf0 [drm]
>>>
>>> More here:
>>> http://pastebin.com/Waysyk2e
>>>
>>> Both the AMD cards work fine if I pass them through to a Windows VM.
>>>
>>> I'm supposing this might likely be then an issue with Radeon.
>>>
>>> However the oops makes me think perhaps it's breaking on unexpected
>>> PCIE information?
>>>
>>> Is it possible there is some missing information that radeon is
>>> looking for? What PCIE information is passed through by vfio
>>> (lspci from both host and vm also in the pastebin)?
>>>
>>> Anybody using radeon with passthrough AMD cards and seen this?
>>
>> Radeon with a Linux guest is actually one of the few cases where you
>> need to be running a Q35 machine with the GPU placed behind a PCIe
>> downstream/root port.  The code blindly assumes that an upstream PCIe
>> bridge is present and tries to poke registers on it.
>
> Bingo.
>
> I'll have to clone and try with that.
>
> Alex, do you know if it's enough to simply pass the device through
> with Q35? Or is a custom topology needed?

I would recommend a PCIe root port with the Radeon on the bus created
by that.  The Linux driver assumes this sort of topology.
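As a sketch of what that topology looks like on the qemu command line (the root-port id, GPU address, memory size, and disk path here are illustrative, not from this thread):

```shell
#!/bin/sh
# Build a qemu command line for a Q35 guest with a GPU assigned behind
# an emulated PCIe root port, per the recommendation above.
build_q35_cmd() {
    gpu="$1"    # full host PCI address of the GPU, e.g. 0000:02:00.0
    disk="$2"   # guest disk image
    printf '%s' "qemu-system-x86_64 -machine q35,accel=kvm -m 8G -cpu host \
-device pcie-root-port,id=rp1,chassis=1 \
-device vfio-pci,host=$gpu,bus=rp1,addr=0x0 \
-drive file=$disk,if=virtio"
}
```

The key part is `bus=rp1`: the GPU sits on the bus created by the pcie-root-port device rather than directly on pcie.0, so the guest's radeon driver finds the upstream port it expects.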




Re: [vfio-users] Ryzen Primary GPU passthrough success and woes

2017-04-04 Thread globalgorrilla

On 4 Apr 2017, at 20:37, Daimon Wang wrote:

Hi Graham,
    Windows crashes after switching to virtio-disk because its boot
loader doesn't have a driver for the disk.


^ this, and ...

    Reinstalling Windows with the virtio disk would fix the issue
(you'll need the virtio disk driver during installation). I'm not sure
if there's any way to install the virtio disk driver into an existing
Windows installation.


To avoid that you might add a second empty, or temporary, virtio disk
to the VM.

Then either add the virtio drivers ISO to the VM as well, or download
and mount it in the VM.

Once the driver is installed you can shut down and change the C:\ disk
to virtio.
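Any small image exposed as virtio will do for that temporary disk. A hypothetical helper (the path and size are placeholders, not from this thread):

```shell
#!/bin/sh
# make_stub_disk PATH [SIZE]: create a sparse raw image to attach as a
# throwaway virtio disk, so Windows loads the driver for it before the
# real boot disk is switched over.
make_stub_disk() {
    truncate -s "${2:-1G}" "$1"
}
```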




Regards,Daimon

On Tuesday, April 4, 2017 4:21 AM, Graham Neville 
 wrote:



 Cheers for that. Added it to my config and set hugepages via the
kernel command line:

BOOT_IMAGE=/vmlinuz-linux root=UUID=bf69add2-e36f-453a-b92e-a4343ca20d26 rw quiet amd_iommu=on vfio-pci.ids=1002:67b1,1002:aac8 video=efifb:off isolcpus=0-9 kvm_amd.avic=1 hugepages=12188
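For sizing hugepages=N with the default 2048 KiB hugepage size, the count is just the guest memory rounded up; a sketch (12188 above is presumably headroom beyond what the guest strictly needs):

```shell
#!/bin/sh
# hugepages_for KIB: number of 2048-KiB hugepages needed to back KIB
# kibibytes of guest memory (the value of libvirt's <memory> element),
# rounded up.
hugepages_for() {
    echo $(( ($1 + 2047) / 2048 ))
}

# e.g. hugepages_for 16384000  ->  8000
```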
I also set iothreads, but I can't seem to change my disks from sata to
virtio. Whenever I try, I get a Windows bluescreen on boot.


I'm not seeing any interrupts for when the GPU is in use:

 PIN:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0   Posted-interrupt notification event
 PIW:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0   Posted-interrupt wakeup event


Here's my current XML:



  Windows10
  d45c3b5f-be8a-41e8-a22c-02e91c4c6245
  16384000
  16384000
  
    
  
  8
  4
  
    
    
    
    
    
    
    
    
    
    
    
    
  
  
    hvm
    type='pflash'>/home/virtualguests/windows10/ovmf_code_x64.bin

    /home/virtualguests/windows10/ovmf_vars_x64.bin
    
  
  
    
    
    
    
      
      
      
    
    
  
    
    
  
  
    
  
  
    
    
    
    
  
  destroy
  restart
  restart
  
    
    
  
  
    /usr/bin/qemu-system-x86_64
    
      
      file='/home/virtualguests/windows10/windows10-c-nas.qcow2'/>

      
  unit='0'/>

    
    
      
      
      
  unit='1'/>

    
    
      function='0x7'/>

    
    
      
      function='0x0' multifunction='on'/>

    
    
      
      function='0x1'/>

    
    
      
      function='0x2'/>

    
    
      function='0x2'/>

    
    
    
      
      function='0x0'/>

    
    
      
      
      function='0x0'/>

    
    
      
      
      function='0x0'/>

    
    
      function='0x0'/>

    
    
      
    
    
      
    
    
      
    
    
    
    
  
  
  
  
  
  
  function='0x0'/>

    
   
  
    
    
  
    
   
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
  
    
    
      function='0x0'/>

    
    
      
        function='0x0'/>

      
      file='/home/virtualguests/windows10/r9290.rom'/>
      function='0x0' multifunction='on'/>

    
    
      
        function='0x1'/>

      
      function='0x1'/>

    
    
  


On Mon, Apr 3, 2017 at 5:13 PM,  wrote:

 Might perform better. Also, yes, hugepages might be helpful too.

Also, would you mind observing whether interrupts are posted when the
GPU is in use in pass-through?

watch -d cat /proc/interrupts

Look at the bottom for PIN (Posted-interrupt notification event) and
PIW (Posted-interrupt wakeup event).

Thanks!

On 30 Mar 2017, at 15:46, Graham Neville wrote:
Finally gotten to the bottom of this: with the help of your XML file
I'm able to run with a Q35 setup, and I'm also able to just pass
through the CPU features. Thanks.
I discovered the issue with the guest crashing was actually due to my
graphics card overheating! Noticed that it was hitting 99C and then
the guest would crash.
The setup is pretty sweet now. I just want to change my drives from
sata to virtio, which I believe will be quicker, and then also
investigate HugePages.


Here's my Kernel params:

cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-linux root=UUID=bf69add2-e36f-453a-b92e-a4343ca20d26 rw quiet amd_iommu=on vfio-pci.ids=1002:67b1,1002:aac8 video=efifb:off isolcpus=0-7


Here's my final XML file:

  Windows10
  d45c3b5f-be8a-41e8-a22c- 02e91c4c6245
  8388608
  8388608
  4
  
    
    
    
    
  
  
    hvm
    /home/virtualguests/windows10/ovmf_code_x64.bin
    /home/virtualguests/windows10/ovmf_vars_x64.bin
    
  
  
    
    
    
    
      
      
      
    
    
  
    
  
  
  
    
    
    
    
  
  destroy
  restart
  restart
  
    
    
  
  
    /usr/bin/qemu-system-x86_64
    
      
      

 

Re: [vfio-users] vfio, xeon e3s, acs, & gpus -- oh my!

2017-04-03 Thread globalgorrilla

On 3 Apr 2017, at 10:21, globalgorri...@fastmail.fm wrote:


On 3 Apr 2017, at 10:19, Steven Walter wrote:

> On Mon, Apr 3, 2017 at 1:08 PM,  wrote:
>> On 2 Apr 2017, at 0:10, Joshua Hoblitt wrote:
>>
>>> I decided to go with an E5-1650 v4 / C612 based motherboard as I
>>> wanted both ECC memory and a BMC.
>>
>> I think something like this is the best option for the moment.
>>
>> Posted-interrupts are only on E5 v4 and Xeon-D.
>>
>> It makes a big difference with network cards (and GPUs).
>>
>> Does anyone know if AVIC or some other Ryzen feature provides for
>> posted-interrupts?
>
> By "posted interrupts" do you mean that they get delivered directly to
> the VM, bypassing the host?  If so, then yes enabling AVIC does
> provide that behavior.  With AVIC enabled (kvm_amd.avic=1), I do not
> see the counters in /proc/interrupts increment for passed-through
> devices.

Yes, that's it. Fantastic. Thanks!

This does make me interested in trying a Ryzen setup... the i7s AFAIK
don't have posted-interrupts.




Re: [vfio-users] vfio, xeon e3s, acs, & gpus -- oh my!

2017-04-03 Thread globalgorrilla

On 2 Apr 2017, at 0:10, Joshua Hoblitt wrote:

I decided to go with an E5-1650 v4 / C612 based motherboard as I wanted
both ECC memory and a BMC.


I think something like this is the best option for the moment.

Posted-interrupts are only on E5 v4 and Xeon-D.

It makes a big difference with network cards (and GPUs).

Does anyone know if AVIC or some other Ryzen feature provides for 
posted-interrupts?



This required switching from U to RDIMMs,
which I typically don't use in a desktop, but there is theoretically a
small RAS improvement with RDIMMs.  The cost works out to ~$350 more
than an E3 system at the same clock rates, and dropping from Kaby Lake
to Broadwell.  However, it is a jump from 2 to 4 memory channels,
additional DIMM slots, and 2 additional cores.

-Josh

--
On 04/01/2017 10:44 PM, Joshua Lee wrote:

That's the total price, counting the motherboard, btw.

On Sun, Apr 2, 2017 at 1:43 AM, Joshua Lee wrote:

What do you mean by "iffy"? My $200 X99 board has separate IOMMU
groups per device, and an i7 5820k/6800k or, if you want it
cheaper, an E5-1620v3 or 1620v4, can do ACS and sane IOMMU groups
with it for under $600 total, without needing to get a used
processor...

On Fri, Mar 31, 2017 at 8:11 PM, taii...@gmx.com wrote:

Intel's stuff is really iffy about ACS unless you buy one of
the overpriced two-thousand-dollar processors; they use it for
artificial market segmentation so that they can say the
desktop processors, the E3, etc. don't "support" sr-iov and
the like.


If you want to do this without spending lots of money I would
go with a coreboot-compatible G34 Opteron setup. I have a
KGPE-D16 and it has ACS + IOMMU with interrupt remapping (and
every device gets its own IOMMU group) - I play games in a VM
on it.


You can get the board for $400 and a used quality 16-core CPU
for around $100. There is also a port of the OpenBMC network
KVM in progress, which will make it the first blob-free
server board with feature equivalency to the proprietary
stuff.


https://www.coreboot.org/Board:asus/kgpe-d16











Re: [vfio-users] vfio, xeon e3s, acs, & gpus -- oh my!

2017-04-03 Thread globalgorrilla
On 3 Apr 2017, at 10:19, Steven Walter wrote:

> On Mon, Apr 3, 2017 at 1:08 PM,   wrote:
>> On 2 Apr 2017, at 0:10, Joshua Hoblitt wrote:
>>
>>> I decided to go with an E5-1650 v4 / C612 based motherboard as I wanted
>>> both ECC memory and a BMC.
>>
>>
>> I think something like this is the best option for the moment.
>>
>> Posted-interrupts are only on E5 v4 and Xeon-D.
>>
>> It makes a big difference with network cards (and GPUs).
>>
>> Does anyone know if AVIC or some other Ryzen feature provides for
>> posted-interrupts?
>
> By "posted interrupts" do you mean that they get delivered directly to
> the VM, bypassing the host?  If so, then yes enabling AVIC does
> provide that behavior.  With AVIC enabled (kvm_amd.avic=1), I do not
> see the counters in /proc/interrupts increment for passed-through
> devices.

Yes, that's it. Fantastic. Thanks!
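For anyone else checking this, totalling the counters is easy to script (a sketch; assumes the PIN/PIW rows appear in /proc/interrupts as discussed above):

```shell
#!/bin/sh
# pi_counts [FILE]: sum the PIN (posted-interrupt notification) and
# PIW (posted-interrupt wakeup) counters across all CPUs.
# FILE defaults to /proc/interrupts; pass another path for testing.
pi_counts() {
    awk '/^ *PIN:/ || /^ *PIW:/ {
        s = 0
        for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) s += $i
        print $1, s
    }' "${1:-/proc/interrupts}"
}
```

With AVIC working, these should stay flat while the passed-through device is busy; rising counts on the host's regular device lines would mean interrupts are still taking the host path.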

> -- 
> -Steven Walter 
