Re: [vfio-users] issues about igd-passthrough using vfio-pci
On Tuesday 13 September 2016 21:38:01 Alex Williamson wrote:
> [adding the list back]
>
> On Tue, Sep 13, 2016 at 9:17 PM, fulaiyang wrote:
> > Hello,
> >
> > My kernel does include simplefb: CONFIG_FB_SIMPLE=y. QEMU does not
> > print any messages when started, but `top` shows that QEMU's CPU usage
> > is always about 100%. I have confirmed that the Windows 7 OS does not
> > boot. I don't know how to get other QEMU information; could you tell me?
> > Thanks.
> >
> >   PID USER PR NI    VIRT    RES   SHR S  %CPU %MEM   TIME+ COMMAND
> >  9542 root 20  0 2768476 2.028g 11848 S  99.7 53.3 3:48.82 qemu-system-x86
>
> Try removing the modprobe.blacklist and video options from your kernel
> command line (the unsafe interrupts thing isn't necessary on your system
> either). After boot, IGD should be bound to i915. Unbind it, bind to
> vfio-pci, and try QEMU again. Since you have simplefb in your kernel, I
> don't trust that it's not claiming device resources as you're using it now.

Hi Alex, Yang. I've been toying with this case again, and I got some output that may be of help. To me it suggests that i915 is reluctant to free its resources, but it may highlight something else to your eyes.
[  225.155202] ------------[ cut here ]------------
[  225.155217] WARNING: CPU: 2 PID: 7101 at drivers/gpu/drm/drm_crtc.c:5939 drm_mode_config_cleanup+0x20f/0x230 [drm]
[  225.155218] Modules linked in: vfio_pci vfio_iommu_type1 vfio_virqfd vfio drbg ansi_cprng ctr ccm bridge stp llc af_packet ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables nf_conntrack_ipv6 nf_defrag_ipv6 xt_conntrack nf_conntrack ip6table_filter ip6_tables snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic binfmt_misc i915 arc4 iwlmvm mac80211 loop i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm x86_pkg_temp_thermal intel_powerclamp iwlwifi coretemp kvm_intel kvm joydev snd_hda_intel uvcvideo mousedev snd_hda_codec btusb cfg80211 btrtl videobuf2_vmalloc videobuf2_memops btbcm btintel snd_hwdep videobuf2_v4l2 rtsx_pci_sdmmc videobuf2_core bluetooth mmc_core videodev snd_hda_core
[  225.155245]  snd_pcsp rtsx_pci_ms snd_pcm memstick media irqbypass crc32c_intel snd_timer psmouse ghash_clmulni_intel snd i2c_hid efi_pstore evdev wmi video input_leds efivars i2c_i801 serio_raw rtsx_pci i2c_core backlight intel_lpss_acpi intel_lpss thermal tpm_crb soundcore mfd_core button battery ac acpi_pad efivarfs unix dm_zero dm_thin_pool dm_persistent_data dm_bio_prison dm_service_time dm_round_robin dm_queue_length dm_multipath dm_log_userspace cn dm_flakey dm_delay xts aesni_intel glue_helper lrw gf128mul ablk_helper cryptd aes_x86_64 cbc sha256_generic scsi_transport_iscsi r8169 mii fuse nfs lockd grace sunrpc fscache ext4 jbd2 mbcache multipath linear raid10 raid1 raid0 dm_raid raid456 libcrc32c md_mod async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq dm_snapshot
[  225.155275]  dm_bufio dm_crypt dm_mirror dm_region_hash dm_log dm_mod hid_generic usbhid xhci_pci xhci_hcd ohci_hcd uhci_hcd usb_storage ehci_pci ehci_hcd usbcore usb_common scsi_transport_fc sr_mod cdrom sg sd_mod ata_piix ahci libahci sata_sx4 pata_oldpiix
[  225.155286] CPU: 2 PID: 7101 Comm: vfio-bind Not tainted 4.7.2 #5
[  225.155287] Hardware name: PC Specialist Limited N24_25JU/N24_25JU, BIOS 5.11 12/14/2015
[  225.155288]  88035a43fc08 812d18c2
[  225.155290]  88035a43fc48 8105b211 1733a0928530
[  225.155292]  88035bfbe498 88035bfbe000 88035bfbe340 88035e65bb80
[  225.155293] Call Trace:
[  225.155297]  [] dump_stack+0x67/0x95
[  225.155299]  [] __warn+0xd1/0xf0
[  225.155301]  [] warn_slowpath_null+0x1d/0x20
[  225.155310]  [] drm_mode_config_cleanup+0x20f/0x230 [drm]
[  225.155331]  [] intel_modeset_cleanup+0x80/0xa0 [i915]
[  225.155347]  [] i915_driver_unload+0x74/0x1d0 [i915]
[  225.155354]  [] drm_dev_unregister+0x29/0xb0 [drm]
[  225.155361]  [] drm_put_dev+0x23/0x60 [drm]
[  225.155370]  [] i915_pci_remove+0x15/0x20 [i915]
[  225.155372]  [] pci_device_remove+0x39/0xc0
[  225.155375]  [] __device_release_driver+0x9a/0x140
[  225.155376]  [] device_release_driver+0x23/0x30
[  225.155377]  [] unbind_store+0xe7/0x140
[  225.155379]  [] drv_attr_store+0x25/0x30
[  225.155381]  [] sysfs_kf_write+0x37/0x40
[  225.155382]  [] kernfs_fop_write+0x118/0x190
[  225.155384]  [] __vfs_write+0x28/0x120
[  225.155386]  [] ? security_file_permission+0x3d/0xc0
[  225.155388]  [] ? percpu_down_read+0x12/0x60
[  225.155390]  [] vfs_write+0xb8/0x1a0
[  225.155391]  [] SyS_write+0x46/0xb0
[  225.155393]  [] do_syscall_64+0x61/0x110
[  225.155395]  [] entry_SYSCALL64_slow_path+0x25/0x25
[  225.155396] ---[ end trace
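Alex's "unbind it, bind to vfio-pci" step can be sketched as a small sysfs script. Everything here is an assumption for a typical IGD setup rather than taken from the thread: the function name is mine, the `0000:00:02.0` address in the usage note is the usual IGD slot, and `driver_override` requires a kernel new enough to support it. The sysfs root is parameterised only so the logic can be exercised without real hardware; on a live host you would leave it at the default.

```shell
# rebind_to_vfio: unbind a PCI device from its current driver (i915 for
# IGD) and hand it to vfio-pci via sysfs. Hypothetical helper, not from
# the thread; run as root on a real system.
rebind_to_vfio() {
    dev="$1"                       # PCI address, e.g. 0000:00:02.0
    sysfs="${2:-/sys}"             # overridable only for testing
    devpath="$sysfs/bus/pci/devices/$dev"

    # Release the device from whatever driver currently owns it.
    # For IGD this tears down the host console on that GPU.
    if [ -e "$devpath/driver/unbind" ]; then
        echo "$dev" > "$devpath/driver/unbind"
    fi

    # Steer this one device to vfio-pci, then bind it.
    echo vfio-pci > "$devpath/driver_override"
    echo "$dev" > "$sysfs/bus/pci/drivers/vfio-pci/bind"
}
```

On a real host, something like `modprobe vfio-pci && rebind_to_vfio 0000:00:02.0` would perform the rebind Alex describes, assuming nothing else (simplefb included) still holds the device's resources.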
Re: [vfio-users] Host hard lockups
Hi,

I also run on a Rampage IV. I don't have freezes during normal work, but I sometimes have performance issues during media operations involving sound, and I very often have similar freezes when I'm shutting down the VM. I haven't had much time to troubleshoot this, unfortunately.

I have a question for you: I have a problem with sound in the VM. Do you get sound from your sound card, or do you use HDMI passed through? I can't get this to work.

Best Regards
Tomasz Strzelecki

2016-08-22 22:18 GMT+02:00 vfio:
> On 08/16/2016 12:51 PM, vfio wrote:
> > I noticed that the symptoms of the freeze are very similar to the
> > freezes I sometimes get when shutting down the guest. After searching
> > the archives, I decided to try enabling MSI for the Titan X in the Win10
> > guest. This did indeed stop the freezes at shutdown time. I've been
> > using the guest for 10 days since my original post. During this time I
> > experienced only one freeze, but I was not nearby at the time so it's
> > hard to say if the guest caused it.
>
> It took longer than expected, but a definite crash happened yesterday.
> Sadly, it seems that MSI was not a fix for the in-use crashes.
>
> At this point I'm worried that it's some sort of weird hardware-specific
> interaction that is unlikely to be fixed. If anybody experiences similar
> symptoms or can suggest any debugging techniques, I'd greatly appreciate
> any suggestions.
>
> Thanks!

___
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users
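The MSI toggle discussed above is flipped inside the Windows guest, but from the host side you can at least confirm whether the assigned GPU is actually signalling via MSI. A minimal sketch (the helper name is mine, not from the thread) that reads `lspci -v` output:

```shell
# msi_state: read `lspci -v` output on stdin and report whether the
# device's MSI capability is enabled. lspci prints a line such as:
#   Capabilities: [68] MSI: Enable+ Count=1/1 64bit+
# "Enable+" means MSI is active; "Enable-" means line-based INTx.
msi_state() {
    if grep -q 'MSI: Enable+'; then
        echo "MSI enabled"
    else
        echo "MSI disabled (device is likely using line-based INTx)"
    fi
}
```

On a live host this would be used as, e.g., `sudo lspci -vs 01:00.0 | msi_state`, where `01:00.0` is a placeholder for the passed-through GPU's address.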
Re: [vfio-users] Z170X IOMMU Groups
On Sat, Sep 17, 2016 at 9:12 AM, Alex Williamson <alex.l.william...@gmail.com> wrote:
> On Sat, Sep 17, 2016 at 9:00 AM, Nick Sarnie wrote:
>> Hi Alex,
>>
>> I'm on 4.7.4 which includes this patch, and there are the IOMMU groups.
>> Is there some extra info I can provide?
>
> Hmm, based on the info you sent me previously, your PCH root ports don't
> even attempt to include the broken ACS capability, therefore the quirk
> doesn't get enabled on your system. Perhaps the Z170X is an especially
> broken version of Z170 :-\

Could you pastebin a dump of PCI config space for the PCH root ports? i.e. "sudo lspci -s 1c."
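The only command Alex gives verbatim is `sudo lspci -s 1c.`; the `-xxxx` flag and the helper below are my additions, shown as a sketch of how the requested dump and the ACS check might look:

```shell
# has_acs: read `lspci -vvv` output on stdin and report whether the
# device advertises an Access Control Services capability at all.
# To produce the config-space dump Alex asked for, something like:
#   sudo lspci -s 1c. -xxxx > pch-root-ports.txt   # full hex dump, pasteable
#   sudo lspci -s 1c. -vvv | has_acs               # quick ACS presence check
has_acs() {
    if grep -q 'Access Control Services'; then
        echo "ACS capability present"
    else
        echo "no ACS capability advertised"
    fi
}
```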
Re: [vfio-users] Z170X IOMMU Groups
On Sat, Sep 17, 2016 at 9:00 AM, Nick Sarnie wrote:
> Hi Alex,
>
> I'm on 4.7.4 which includes this patch, and there are the IOMMU groups.
> Is there some extra info I can provide?

Hmm, based on the info you sent me previously, your PCH root ports don't even attempt to include the broken ACS capability, therefore the quirk doesn't get enabled on your system. Perhaps the Z170X is an especially broken version of Z170 :-\
Re: [vfio-users] Z170X IOMMU Groups
Hi Alex,

I'm on 4.7.4 which includes this patch, and there are the IOMMU groups. Is there some extra info I can provide?

Thanks,
Sarnex

On Sat, Sep 17, 2016 at 4:40 AM, Philip Abernethy wrote:
> You definitely are. Running Skylake on Arch myself. Had the mainline
> package before, now using stock 4.7. I decided to buy a USB card to pass
> through.
>
> On Sat, 17 Sep 2016, 03:58 Brett Peckinpaugh wrote:
>> So I might be able to drop the ACS patch now that 4.7 is out on Arch.
>>
>> On September 16, 2016 5:39:43 PM PDT, Alex Williamson <alex.l.william...@gmail.com> wrote:
>>> On Fri, Sep 16, 2016 at 6:30 PM, Nick Sarnie wrote:
>>>> Hi,
>>>>
>>>> I was wondering if we could split group 7 any more. CPU is the 6700k.
>>>> I'd like to be able to pass through the USB controller without the ACS
>>>> patch.
>>>
>>> Run a newer kernel, Intel botched ACS in Skylake:
>>>
>>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/pci/quirks.c?id=1bf2bf229b64540f91ac6fa3af37c81249037a0b
>>>
>>> PCH PCIe root ports have isolation starting in v4.7.
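To see whether the root ports actually land in separate groups on v4.7+, the standard sysfs layout can be walked with a sketch like this. The function name and the optional root argument are mine (the argument exists only so the logic can be exercised against a fake tree); the `/sys/kernel/iommu_groups` layout itself is standard:

```shell
# list_iommu_groups: print each IOMMU group and the PCI addresses of the
# devices it contains. With no argument it reads the real sysfs location.
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for g in "$root"/*/; do
        g="${g%/}"
        printf 'group %s:' "${g##*/}"
        for d in "$g"/devices/*; do
            printf ' %s' "${d##*/}"    # basename is the PCI address
        done
        printf '\n'
    done
}
```

On a host you would just run `list_iommu_groups`, optionally feeding each address to `lspci -nns` for human-readable device names.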
Re: [vfio-users] Z170X IOMMU Groups
You definitely are. Running Skylake on Arch myself. Had the mainline package before, now using stock 4.7. I decided to buy a USB card to pass through.

On Sat, 17 Sep 2016, 03:58 Brett Peckinpaugh wrote:
> So I might be able to drop the ACS patch now that 4.7 is out on Arch.
>
> On September 16, 2016 5:39:43 PM PDT, Alex Williamson <alex.l.william...@gmail.com> wrote:
>> On Fri, Sep 16, 2016 at 6:30 PM, Nick Sarnie wrote:
>>> Hi,
>>>
>>> I was wondering if we could split group 7 any more. CPU is the 6700k.
>>> I'd like to be able to pass through the USB controller without the ACS
>>> patch.
>>
>> Run a newer kernel, Intel botched ACS in Skylake:
>>
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/pci/quirks.c?id=1bf2bf229b64540f91ac6fa3af37c81249037a0b
>>
>> PCH PCIe root ports have isolation starting in v4.7.