On Tue, 22 Nov 2022 12:35:56 -1000
Bryan Angelo wrote:
> Related follow up.
>
> When I add memory to a running VM via hotplug, QEMU preallocates this
> memory too (as expected based on your explanation). When I subsequently
> remove memory added to the VM via hotplug, QEMU does not always
On Tue, 22 Nov 2022 17:57:37 +1100
Ivan Volosyuk wrote:
> Is there something special about the pinning step? When I start a new
> VM with 16G in dedicated hugepages my system becomes quite
> unresponsive for several seconds, with significant packet loss and random
> device-hang oopses if I use
On Sun, 20 Nov 2022 16:36:58 -0800
Bryan Angelo wrote:
> When passing-through via vfio-pci using QEMU 7.1.0 and OVMF, it appears
> that qemu preallocates all guest system memory.
>
> qemu-system-x86_64 \
> -no-user-config \
> -nodefaults \
> -nographic \
> -rtc base=utc \
>
On Sun, 2 Oct 2022 15:03:04 +0200
Pim wrote:
> Hey,
>
> After noticing this commit:
> https://github.com/torvalds/linux/commit/7ab5e10eda02da1d9562ffde562c51055d368e9c
> and because of high energy prices here, I did some tests with an energy
> monitor to see what the power consumption is for my
On Fri, 29 Jul 2022 21:43:44 +0300
Cosmin Chenaru wrote:
> Hi,
>
> I have an Intel network card inside a VM using PCI-Passthrough and I would
> like to write one register (the Physical Hardware Clock) but from inside
> the host, as writing it from inside the VM is complicated with all the
>
On Fri, 3 Jun 2022 23:13:39 +0900
Tydus wrote:
> Hi list,
>
> I'm trying to pass through an SR-IOV capable device (PF) into a VM and
> have it spawn VFs inside the guest, but had no luck.
>
> Digging into qemu vfio code I found
>
On Tue, 08 Mar 2022 20:09:02 +
"Bronek Kozicki" wrote:
> On Tue, 8 Mar 2022, at 7:35 PM, Bronek Kozicki wrote:
> > root@gdansk ~ # lscpu
> > Architecture:        x86_64
> > CPU op-mode(s): 32-bit, 64-bit
> > Address sizes: 48 bits physical, 48
On Mon, 07 Mar 2022 22:13:01 +
"Bronek Kozicki" wrote:
> I know, this is such an old topic ...
>
> I have today upgraded my hypervisor from intel Ivy-Bridge to AMD Epyc
> Milan and, after making the necessary adjustments in vfio
> configuration to make my virtual machines work again, I
On Tue, 24 Aug 2021 19:07:11 -0400
Roger Lawhorn wrote:
> Hello,
> I have a friend using hypervisor and he cannot get his gtx750 TI passed
> through.
> He is getting the code 43.
> He says he thinks its the card being too old.
> Any ideas?
I test with a GTX750, Maxwell is not too old.
On Tue, 1 Jun 2021 13:48:22 +
Thanos Makatos wrote:
> (sending here as I can't find a relevant list in
> http://vger.kernel.org/vger-lists.html)
$ ./scripts/get_maintainer.pl include/uapi/linux/vfio.h
Alex Williamson (maintainer:VFIO DRIVER)
Cornelia Huck (reviewer:VFIO DRI
On Mon, 17 May 2021 12:40:43 +0100
Stefan Hajnoczi wrote:
> On Fri, May 14, 2021 at 11:15:18AM -0400, Steven Sistare wrote:
> > On 5/14/2021 7:53 AM, Stefan Hajnoczi wrote:
> > > On Thu, May 13, 2021 at 04:21:15PM -0400, Steven Sistare wrote:
> > >> On 5/12/2021 12:42 PM, Stefan Hajnoczi
On Tue, 5 Jan 2021 11:20:20 -0500
Roger Lawhorn wrote:
> Hello,
>
> I recently had to reinstall my OS, but kept all my personal files.
> One of the things that needs to be resetup partially is qemu.
>
> I am getting the following error when running my qemu4.0 script to start
> win10:
>
[There were a bunch of bounces from gmail accounts for this message, so
let me add some comments in hopes that whatever the issue was has been
resolved and more folks will see this thread.]
On Tue, 15 Dec 2020 00:08:27 +0100
boit sanssoif wrote:
> Hi,
>
> I'm trying to configure kvm to
On Wed, 9 Dec 2020 01:46:11 +0530
Vikas Aggarwal wrote:
> Alex,
> Thanks !
> A follow up question:
> Do I need to backport the 3 kernel patches mentioned in the following dpdk
> patch submission comment
> https://mails.dpdk.org/archives/dev/2018-July/109039.html
I think you'll find that two of those
On Mon, 7 Dec 2020 19:19:20 +0530
Vikas Aggarwal wrote:
> Hello list vfio-users,
> Can someone help me understand why the mmap of a requested address
> overlaps with the MSI-X table when mmap-ing PCIe resources.
>
> Platform : ARM64 architecture (Marvell OcteonTX2)
>
> Linux
> On Sun, Nov 1, 2020 at 9:52 PM Alex Williamson
> wrote:
> >
> >
> > [Please try to send plain text emails to mailing lists if possible,
> > trying to extract the content below...]
> >
> > On Sun, 1 Nov 2020 20:56:31 -0500
> > Roja malarvathi w
[Please try to send plain text emails to mailing lists if possible,
trying to extract the content below...]
On Sun, 1 Nov 2020 20:56:31 -0500
Roja malarvathi wrote:
> First of all, Thank you so much in advance for your time and help.
>
> I am using Jetson Xavier NX which integrates a Realtek
On Fri, 2 Oct 2020 12:54:28 +
Thanos Makatos wrote:
> According to linux/include/uapi/linux/vfio.h, for a device to support
> migration
> it must provide a VFIO capability of type VFIO_REGION_INFO_CAP_TYPE and set
> .type/.subtype to VFIO_REGION_TYPE_MIGRATION/VFIO_REGION_SUBTYPE_MIGRATION.
>
On Wed, 23 Sep 2020 15:32:15 -0700
Maran Wilson wrote:
> On Wed, Sep 23, 2020 at 2:19 PM Alex Williamson
> wrote:
>
> > On Wed, 23 Sep 2020 13:08:10 -0700
> > Maran Wilson wrote:
> >
> > > Just wanted to wrap up this thread by confirming what Alex said i
On Wed, 23 Sep 2020 13:08:10 -0700
Maran Wilson wrote:
> Just wanted to wrap up this thread by confirming what Alex said is true (in
> case anyone else is interested in this topic in the future). After enabling
> IOMMU tracing on the host I was able to confirm that IOMMU mappings were,
> in
On Tue, 8 Sep 2020 11:31:42 -0700
Maran Wilson wrote:
> On Tue, Sep 8, 2020 at 10:22 AM Alex Williamson
> wrote:
>
> > On Tue, 8 Sep 2020 09:59:46 -0700
> > Maran Wilson wrote:
> >
> > > I'm trying to use the vfio-pci driver to pass-through two P
On Tue, 8 Sep 2020 09:59:46 -0700
Maran Wilson wrote:
> I'm trying to use the vfio-pci driver to pass-through two PCIe endpoint
> devices into a VM. On the host, each of these PCIe endpoint devices is in
> its own IOMMU group. From inside the VM, I would like to perform P2P DMA
> operations. So
On Thu, 27 Aug 2020 17:22:59 -0700
Micah Morton wrote:
> Hi Alex/Paolo,
>
> We talked a few months ago (some of which can be seen here
> https://www.spinics.net/lists/kvm/msg217578.html) about adding
> platform IRQ forwarding for platform devices behind PCI
> controller/adapter devices (for use
On Thu, 27 Aug 2020 23:17:52 +0200
daggs wrote:
> Greetings Alex,
>
> > Sent: Wednesday, August 26, 2020 at 8:02 PM
> > From: "Alex Williamson"
> > To: "daggs"
> > Cc: "Patrick O'Callaghan" , vfio-users@redhat.com
> > Subject
On Wed, 26 Aug 2020 07:27:59 +0200
daggs wrote:
> Greetings Alex,
>
> > Sent: Wednesday, August 26, 2020 at 12:54 AM
> > From: "Alex Williamson"
> > To: "daggs"
> > Cc: "Patrick O'Callaghan" , vfio-users@redhat.com
> > Subject
On Tue, 25 Aug 2020 23:34:48 +0200
daggs wrote:
> Greetings Alex,
>
> > Sent: Wednesday, August 12, 2020 at 8:04 PM
> > From: "Alex Williamson"
> > To: "Patrick O'Callaghan"
> > Cc: vfio-users@redhat.com, "daggs"
> > Subject
On Fri, 21 Aug 2020 15:36:40 -0500
Shawn Anastasio wrote:
> Hello,
>
> While developing the userspace VFIO components of libkvmchan[1], I've
> run into a dev_WARN in VFIO when hotplugging devices on the same pci
> host bridge as other VFIO-managed devices:
>
> [ 111.220260][ T6281] pci
On Wed, 12 Aug 2020 15:21:31 -0700
kram threetwoone wrote:
> Sure Alex, thanks for taking a look. Let me know if there's anything else
> you want to see. It is a shell script and I get the same errors if I run
> as user or root.
>
> #!/bin/bash
> qemu-system-x86_64 \
> -enable-kvm \
>
On Wed, 12 Aug 2020 14:56:08 -0700
kram threetwoone wrote:
> I have not gotten the VM to boot, there is always the multiple address
> spaces error. I don't think this is an ACS patch situation; the GPU sits
> in its own vfio group with no other devices.
I can't think how a group with a single
On Wed, 12 Aug 2020 17:46:33 +0100
"Patrick O'Callaghan" wrote:
> On Wed, 2020-08-12 at 18:02 +0200, daggs wrote:
> > Greetings,
> >
> > I have a machine with an Intel igp of HD Graphics 610 [8086:5902].
> > I found several discussions on the subject stating that it isn't possible
> > but all
On Sun, Jul 12, 2020 at 6:36 PM Yv Lin wrote:
> After more thoughts, I guess that
> 1) normally people don't enable vIOMMU unless they need to use a nested
> guest, since vIOMMU is slow, and because of the memory accounting issue you just mentioned.
>
vIOMMU w/ device assignment is more often used for DPDK in a
On Sun, Jul 12, 2020 at 6:16 PM Yv Lin wrote:
>
> Here are some summaries that I learned from what you told.
> 1) If a device is passed through to guestOS via vfio, and there is no
> IOMMU present in guestOS. all memory regions within the device address
> space will be pinned down. if IOMMU is
On Sun, Jul 12, 2020 at 5:38 PM Yv Lin wrote:
>
>
> On Sun, Jul 12, 2020 at 1:59 PM Alex Williamson <
> alex.l.william...@gmail.com> wrote:
>
>> On Sun, Jul 12, 2020 at 12:25 PM Yv Lin wrote:
>>
>>> Btw, IOMMUv2 can support peripheral page request (
On Sun, Jul 12, 2020 at 12:25 PM Yv Lin wrote:
> Btw, IOMMUv2 can support peripheral page request (PPR) so in theory if an
> end point pcie device can support ATS/PRI, pinning down all memory is not
> necessary, does current vfio driver or qemu has corresponding support to
> save pinned memory?
vfio_dma_map() is the exclusive means that QEMU uses to insert translations
for an assigned device. It is not used only by the AMD vIOMMU (in fact that's
probably one of the less tested vectors); it's used whenever QEMU
establishes any sort of memory mapping for the VM. Any mapping that could
On Tue, Jul 7, 2020 at 4:33 PM Roger Lawhorn wrote:
> Hello,
> I have an nvidia 980 ti oc 6gb card.
> I cannot use it with qemu as a passthrough card.
> I have had to passthrough my amd cards only.
> I have read of nvidia making it impossible to use some of their cards in
> virtual machines.
>
On Tue, 23 Jun 2020 23:33:22 -0400
Roger Lawhorn wrote:
> I have the answer.
>
> I had to change my lines from:
> -device
> vfio-pci,host=0c:00.0,bus=root_port1,addr=00.0,multifunction=on,x-vga=on \
> -device vfio-pci,host=0c:00.1,bus=root_port2,addr=00.1,multifunction=on \
> -device
hat just a size?
> I did a full reinstall and factory reset on the video driver.
AFAICT you've got 32GB between these GPUs; that would probably be the
minimum I'd try, maybe even 64GB.
>
> On 6/21/20 6:05 PM, Alex Williamson wrote:
> Use:
>
[intentionally de-threaded]
On Sun, 21 Jun 2020 17:02:44 -0400
Roger Lawhorn wrote:
> Hello,
>
> I have a video card with two gpus.
> The Radeon Pro Duo.
>
> I can get only one of the gpus passed off to windows 10.
> If I pass off the second one I am told by windows that there is not
> enough
On Sat, 20 Jun 2020 09:46:00 +0200
Kjeld Borch Egevang wrote:
> Hi VFIO users,
>
> I am working with a PCIe card that supports up to 255 VFs. In order to
> get the SR-IOV stuff to work I added some of the latest patches to the
> vfio-pci driver.
>
> I have two servers.
>
> The first one is
On Mon, 15 Jun 2020 15:41:54 -0600
"Edmund F. Nadolski" wrote:
> Hi,
>
> I'm a noob to VFIO so hopefully this is not too lame a question.
>
> I'm looking to set up a Linux guest VM with a direct-assigned nvme ssd,
> that I can control by a usermode driver with VFIO. I enable nested
>
On Fri, 17 Apr 2020 09:34:49 -0700
Micah Morton wrote:
> Hi Alex,
>
> I've been looking at device passthrough for platform devices on x86
> that are not behind an IOMMU by virtue of not being DMA masters. I
> think on some level this is an explicit non-goal of VFIO
>
On Mon, 13 Apr 2020 10:33:21 -0700
Ravi Kerur wrote:
> On Mon, Apr 13, 2020 at 8:36 AM Alex Williamson
> wrote:
>
> > On Sun, 12 Apr 2020 09:10:49 -0700
> > Ravi Kerur wrote:
> >
> > > Hi,
> > >
> > > I use Intel NICs for PF and VF devi
On Sun, 12 Apr 2020 09:10:49 -0700
Ravi Kerur wrote:
> Hi,
>
> I use Intel NICs for PF and VF devices. VFs are assigned to virtual
> machines and PF is used on the Host. I have intel-iommu=on on GRUB which
> enables DMAR and IOMMU capabilities (checked via 'dmesg | grep -e IOMMU -e
> DMAR) and
On Wed, 1 Apr 2020 11:33:21 +
"McLeod, Dennis" wrote:
> I am working on a project in which we have a driver that does the
> necessary register_netdev() stuff at the kernel level. If I wanted to
> take advantage of the vfio framework .. how would the device get
> registered as a network
On Tue, 31 Mar 2020 21:35:33 +0300
Артем Семенов wrote:
> Hello!
>
> I try to passthrough GPU to the virtual machine (qemu). I've tried
> different variants:
>
> -device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on
>
> or
>
> -device vfio-pci,host=02:00.0,x-vga=on
>
Probably better to ask on the vfio devel list (kvm) rather than the
vfio user's list...
On Fri, 28 Feb 2020 17:20:20 +
Thanos Makatos wrote:
> > Drivers that handle DMA region registration events without having to call
> > vfio_pin_pages (e.g. in muser we inject the fd backing that VMA to
On Mon, 24 Feb 2020 10:40:39 +
"Bronek Kozicki" wrote:
> Heads up to anyone running the latest vanilla kernels - after upgrade
> from 5.4.21 to 5.4.22 one of my VMs lost access to a vfio1
> passed-through GPU. This was restored when I downgraded to 5.4.21 so
> the problem seems related to
On Fri, 14 Feb 2020 00:17:43 +1100
Michael Slade wrote:
> Adding nointxmask=1 worked! With no issues at all. I think because all
> the devices are getting their own interrupts (on the host) anyway.
>
> So do you want me to try to add the card to quirks.c? I could probably
> manage it, just
On Thu, 13 Feb 2020 10:02:26 +
"Stark, Derek" wrote:
> Hello,
>
> I've been experimenting with VFIO with one of our FPGA cards using a
> Xilinx part and XDMA IP core. It's been smooth progress so far and
> I've had no issues with bar access and also DMA mapping/transfers to
> and from the
On Thu, 13 Feb 2020 13:41:44 +1100
Michael Slade wrote:
> Hi everyone,
>
> I'll attempt to start with enough info to describe my situation without
> pasting a complete `lspci -vvvxxx` output etc.
>
> My special sound card doesn't want to work when passed through to a guest.
>
> The card is a
On Mon, 13 Jan 2020 22:17:29 +0100
Davide Miola wrote:
> Hi,
>
> I'm trying to passthrough one of the two I2C controllers my laptop has to
> get the touchpad working in a VM (Disclaimer: I know it's not as simple as
> passing through the controller, but I've got to start somewhere).
>
> I saw
On Wed, 11 Dec 2019 14:40:56 -0800
Micah Morton wrote:
> On Wed, Dec 11, 2019 at 10:44 AM Alex Williamson
> wrote:
> >
> > On Wed, 11 Dec 2019 09:37:57 -0800
> > Micah Morton wrote:
> >
> > > On Tue, Dec 10, 2019 at 4:00 PM Alex Williamson
> > &
On Wed, 11 Dec 2019 09:37:57 -0800
Micah Morton wrote:
> On Tue, Dec 10, 2019 at 4:00 PM Alex Williamson
> wrote:
> >
> > On Mon, 9 Dec 2019 14:18:50 -0800
> > Micah Morton wrote:
> >
> > > On Thu, Sep 5, 2019 at 12:22 PM Micah Morton
> > &g
On Wed, 11 Dec 2019 13:17:18 +
cprt wrote:
> Hello,
> I am using VFIO with QEMU trying to passthrough my audio device.
>
> I successfully did this operation with my previous system, with a 7th
> generation intel and an older kernel.
> Now I am using a 10th generation intel and a newer
On Sat, 23 Nov 2019 15:34:21 +0100
Ede Wolf wrote:
> Hello,
>
> I am trying to pass through a PCIe USB card to a guest, instead of just
> the ports, due to very sensitive USB devices.
> Despite the unbind being reported as successful, the booting of the
> guest fails with an error:
>
>
Thanks,
Alex
> ____
> From: Alex Williamson
> Sent: Friday, 22 November, 2019, 10:30 pm
> To: Venumadhav Josyula
> Cc: vfio-users@redhat.com; Venumadhav Josyula
> Subject: Re: [vfio-users] No IOMMU Groups seen in /sys/kernel/iommu_groups/
>
> On Fri, 22 Nov 2019 22
On Fri, 22 Nov 2019 22:13:32 +0530
Venumadhav Josyula wrote:
> So in the BIOS I need to check for "ACPI: DMAR"?
It's likely not represented that way; what I'm saying is that the list
of ACPI tables from your dmesg below should include one named "DMAR".
It currently does not, and until it does,
On Fri, 22 Nov 2019 21:59:28 +0530
Venumadhav Josyula wrote:
> Hi Alex,
>
> Please find the dmesg & CPU model attached.
The CPU supports VT-d (E5-2637v3), but the system firmware does not
seem to provide a DMAR table, which is required for enabling VT-d.
There should be an "ACPI: DMAR ..." line
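The missing-table condition is easy to check from a shell. Besides the dmesg line, the raw ACPI tables are exported under sysfs; a sketch (reading the sysfs node may require root on some distros):

```shell
# Does the firmware publish an ACPI DMAR table (required for VT-d)?
if [ -e /sys/firmware/acpi/tables/DMAR ]; then
    echo "DMAR table present: firmware exposes VT-d"
else
    echo "no DMAR table: enable VT-d in the firmware setup menu"
fi

# Equivalent check against the boot log (needs appropriate privileges):
# dmesg | grep 'ACPI: DMAR'
```

If the table is absent, no amount of intel_iommu=on on the kernel command line will create IOMMU groups; the firmware option has to be enabled first.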
On Fri, 22 Nov 2019 15:07:30 +0530
Venumadhav Josyula wrote:
> Hi All,
> We are trying to use vfio-pci. We have following
> - intel_iommu=on in bios
> - it shows in /proc/cmdline
>
> [root@vflac2-kvm ~]# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.10.0-957.1.3.el7.x86_64
On Wed, 6 Nov 2019 00:29:52 +0100
Samuel Ortiz wrote:
> On Tue, Nov 05, 2019 at 01:21:48PM -0700, Alex Williamson wrote:
> > On Fri, 18 Oct 2019 05:48:49 +
> > "Boeuf, Sebastien" wrote:
> >
> > > Hi folks,
> > >
> > > I hav
On Fri, 18 Oct 2019 06:08:31 +
"Boeuf, Sebastien" wrote:
> Hi folks,
>
> We have been recently implementing a nested VFIO solution for our Cloud
> Hypervisor VMM. Thanks to virtio-iommu, we can now pass a device
> through nested virtualization.
>
> After some performances testing, we
On Fri, 18 Oct 2019 05:48:49 +
"Boeuf, Sebastien" wrote:
> Hi folks,
>
> I have been recently working with VFIO, and particularly trying to
> achieve device passthrough through multiple layers of virtualization.
>
> I wanted to assess QEMU's performances with nested VFIO, using the
>
On Wed, 28 Aug 2019 09:39:57 -0700
Micah Morton wrote:
> On Mon, Aug 5, 2019 at 11:14 PM Gerd Hoffmann wrote:
> >
> > On Mon, Aug 05, 2019 at 12:50:00PM -0700, Micah Morton wrote:
> > > On Thu, Aug 1, 2019 at 10:36 PM Gerd Hoffmann wrote:
> > > >
> > > > Hi,
> > > >
> > > > > From my
On Wed, 31 Jul 2019 09:05:54 -0700
Micah Morton wrote:
> Hi Alex,
>
> I've noticed that when doing device passthrough with VFIO, if an IRQ
> in the host machine is associated with a PCI device that's being
> passed to the guest, then the IRQ is automatically forwarded into the
> guest by
ww.freedesktop.org/software/systemd/man/systemd.path.html
>
> 26.05.2019, 22:54, "Alex Williamson" :
> > On Sun, 26 May 2019 21:28:36 +0300
> > Alex Ivanov wrote:
> >
> >> Could Intel fix that?
> >
> > I won't claim that mdev-core is bug free
On Tue, 18 Jun 2019 16:47:58 +0100
James Courtier-Dutton wrote:
> On Tue, 18 Jun 2019 at 16:18, Alex Williamson
> wrote:
>
> > [cc +vfio-users]
> >
> > You need a version of the hot reset unit test that accepts multiple
> > devices since each is in a separate
[cc +vfio-users]
On Tue, 18 Jun 2019 15:33:36 +0100
James Courtier-Dutton wrote:
> Hi,
>
> I could not see anywhere it mentioning ACS in the dmesg logs, so I don't
> think it is using ACS.
>
> Is there some way to tell from the logs that ACS is involved.
See previous reply.
> I have 2 AMD
[cc +vfio-users]
On Tue, 18 Jun 2019 15:23:08 +0100
James Courtier-Dutton wrote:
> Hi,
>
> Attaching dmesg and lspci -nnvvv
Vega 10 seems to have ACS, great. The root ports and the downstream
switch ports also support ACS, great. The grouping seems correct from
the bits I checked. System
On Tue, 18 Jun 2019 19:43:39 +0800
James wrote:
> Hi Alex:
>
> Many thanks for your detailed feedback and great help!
>
> 1. Yes, getting our driver into upstream will also solve this problem :)
>
> 2. Got it; persistently checking some of the device's status registers
> mapped via vfio will be a better
On Tue, 18 Jun 2019 11:43:58 +0100
James Courtier-Dutton wrote:
> Hi,
>
> In the following list of iommu groups, I am wondering why sub-functions on
> the same PCIe card are not being given the same IOMMU group as I would
> expect.
I can't provide any specifics without further details, a full
On Mon, 17 Jun 2019 16:00:42 +0800
James wrote:
> Hi Experts:
>
> Sorry to disturb you.
>
>
>
> I failed to find any valid data about vfio interrupt performance in the
> community, so I am boldly sending this mail to you.
>
>
>
> We have a PCIe device working on an x86 platform, and no VM in our env; I plan
>
On Sat, 15 Jun 2019 14:30:49 +0100
James wrote:
> Hi,
>
> Please find attached a kernel patch. This is based from a very old patch
> that never made it into the kernel in 2014.
> https://lkml.org/lkml/2014/10/20/295. I am not sure who else I should be
> adding to the Signed-off-by section.
>
On Wed, 12 Jun 2019 08:49:36 -0700
Micah Morton wrote:
> Hi Alex,
>
> Thanks for the help on this earlier, I was able to get IGD passthrough
> working on my device (In case you're interested, crbug.com/970820 has
> further details on the changes we needed to make to the kernel/i915
> driver to
On Mon, 3 Jun 2019 14:38:49 -0700
Micah Morton wrote:
> Hi Alex,
>
> Could you remind me whether there is a minimum recommended kernel
> version to be running in the VM guest when doing GPU passthrough?
>
> I'm fine running 4.14 in the host, but was looking to see if I could
> run 4.4 in the
On Thu, 30 May 2019 19:24:03 +0100
James Courtier-Dutton wrote:
> On Thu, 30 May 2019 at 18:54, James Courtier-Dutton
> wrote:
>
> > lspci -vvv on host:
> > 43:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
> > Vega 10 XL/XT [Radeon RX Vega 56/64] (rev c3) (prog-if 00
On Wed, 29 May 2019 17:03:07 -0500
Bjorn Helgaas wrote:
> [+cc Alex]
>
> On Fri, May 24, 2019 at 05:31:18PM +0200, Maik Broemme wrote:
> > The Intel PCI bridge on SuperMicro Atom C3xxx motherboards do not
> > successfully complete a bus reset when used with certain child devices.
> > After the
On Wed, 29 May 2019 09:25:59 -0700
Micah Morton wrote:
> So as I mentioned, the ChromeOS firmware writes the location of the
> OpRegion to the ASLS PCI config register
> (https://github.com/coreboot/coreboot/blob/master/src/drivers/intel/gma/opregion.c#L88).
> The i915 driver then gets the
On Tue, 28 May 2019 09:35:16 -0700
Micah Morton wrote:
> Ah ok thanks!
>
> The qemu command line i was using is here: `qemu-system-x86_64
> -chardev stdio,id=seabios -device
> isa-debugcon,iobase=0x402,chardev=seabios -m 2G -smp 2 -M pc -vga none
> -usbdevice tablet -cpu
On Sun, 26 May 2019 21:28:36 +0300
Alex Ivanov wrote:
> Could Intel fix that?
I won't claim that mdev-core is bug free in this area, but it's
probably worth noting that mdev support isn't necessarily a fundamental
feature of the parent device; it could theoretically be enabled much
later than
On Fri, 24 May 2019 14:10:03 -0700
Micah Morton wrote:
> On Fri, May 24, 2019 at 12:50 PM Alex Williamson
> wrote:
> >
> > On Fri, 24 May 2019 11:12:41 -0700
> > Micah Morton wrote:
> >
> > > I’ve been working with an Intel Chrome OS device to see if
On Fri, 24 May 2019 11:12:41 -0700
Micah Morton wrote:
> I’ve been working with an Intel Chrome OS device to see if integrated
> GPU passthrough works. The device is a 7th Generation (Kaby Lake)
> Intel Core i5-7Y57 with HD Graphics 615. So no discrete GPU. I think
> iGPU passthrough should work
On Fri, 24 May 2019 17:31:18 +0200
Maik Broemme wrote:
> The Intel PCI bridge on SuperMicro Atom C3xxx motherboards do not
> successfully complete a bus reset when used with certain child devices.
What are these 'certain child devices'? We can't really regression
test to know if/when the
On Wed, 22 May 2019 21:47:20 +0100
James Courtier-Dutton wrote:
> On Wed, 22 May 2019 at 00:11, Alex Williamson
> wrote:
>
> >
> > I think a better approach would be to extend the pci= kernel command
> > line option to include driver_override support, perhaps
On Tue, 21 May 2019 23:45:22 +0100
James Courtier-Dutton wrote:
> On Sun, 19 May 2019 at 09:30, James Courtier-Dutton
> wrote:
>
> > Hi,
> >
> > I have a PC with two identical GPUs.
> > One I wish to hand over to vfio and do passthru with, the other I wish the
> > host to use.
> > I know about
On Tue, 21 May 2019 10:00:46 +0800
Eddie Yen wrote:
> Yes. The QEMU version is 2.11.2 (qemu-2.11.2-4.fc28)
> How can I do the separate test using QEMU 4.0?
QEMU 4.0 is available for fc28 in the virt preview repo:
https://fedoraproject.org/wiki/Virtualization_Preview_Repository
Thanks,
On Tue, 21 May 2019 09:07:34 +0800
Eddie Yen wrote:
> Hi Alex,
>
> Here's VM profile.
> Basically this VM is created from virt-manager, and guest OS is Ubuntu so
> no need to add capabilities for Windows environment.
>
> https://pastebin.com/VZM63ZsC
>
> And GPU for this VM is Tesla P100. But
On Mon, 20 May 2019 15:52:56 +0800
Eddie Yen wrote:
> Hi everyone,
>
> I'm not sure it's VFIO or pure KVM issue on here.
>
> Now we have one GPU server which contains a few Tesla GPUs. Installed Fedora
> 28 and using VFIO to passthrough a GPU into the VM.
> Everything is OK, except one annoying thing
On Wed, 3 Apr 2019 23:31:22 -0500
Shawn Anastasio wrote:
> On 4/3/19 10:23 PM, Alex Williamson wrote:
> > On Wed, 3 Apr 2019 22:01:14 -0500
> > Shawn Anastasio wrote:
> >
> >> Hello all,
> >>
> >> I'm currently writing an application tha
On Wed, 3 Apr 2019 22:01:14 -0500
Shawn Anastasio wrote:
> Hello all,
>
> I'm currently writing an application that makes use of Qemu's ivshmem
> shared memory mechanism, which exposes shared memory regions from the
> host via PCI-E BARs. MSI-X interrupts that are tied to host eventfds are
>
On Fri, 29 Mar 2019 10:16:19 +0900
小川寿人 wrote:
>
> "model name=ioh3420" is Intel emulated pcie-root-port.
> default "model name=pcie-root-port" is QEMU Paravirtualized pcie-root-port.
There's nothing paravirtualized about pcie-root-port, it's just a
generic emulated root port rather than one
On Thu, 21 Mar 2019 18:00:23 +0800 (CST)
fulaiyang wrote:
> Hi Alex:
>
> Recently I am interested in igd passthrough. On the 'igd-assign' text, I
> don't understand why the lpc bridge needs to be created?
To satisfy some versions of the guest driver. Thanks,
Alex
On Wed, 20 Mar 2019 13:32:33 +
"Wuzongyong (Euler Dept)" wrote:
> Hi Alex,
>
> I notice a patch you pushed in https://lkml.org/lkml/2019/2/18/1315
> You said the previous commit you pushed may be prone to deadlock; could you
> please share the details about how to reproduce the deadlock scene
[re-adding vfio-users]
On Mon, 11 Mar 2019 18:23:14 +0100
Cor Saelmans wrote:
> Thank you Alex for your reply.
>
> I tested earlier today with a fresh install of my ubuntu host in legacy
> mode and this worked directly without any further customization.
>
> Just reinstalled my host back to
On Mon, 11 Mar 2019 01:46:03 -0400
Nicolas Roy-Renaud wrote:
> Hey, Alex, thanks for replying.
>
> It seems like you're right on the Mem- part.
>
> [user@OCCAM ~]$ lspci -s 07:00.0 -vvv
> 07:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970]
> (rev a1) (prog-if 00 [VGA
On Sun, 10 Mar 2019 18:06:37 -0400
Nicolas Roy-Renaud wrote:
> I've seen a lot of people recommend that VFIO newcomers flash their
> GPU if they couldn't get their passthrough working right before, and
> since I know how potentially risky and avoidable this sort of procedure
> is (since
On Sat, 9 Mar 2019 23:22:49 +0100
Cor Saelmans wrote:
> I have Ubuntu 18 as host.
>
>
> Created two guests with virt-manager, setup vfio settings like described in
> several guides.
>
>
> GPU: Intel Graphics 655
>
>
> guest 1: Ubuntu: Passthrough is working. VM boots and after several
On Mon, 11 Feb 2019 20:39:07 -0500
Kyle Marek wrote:
> On 2/10/19 8:59 PM, Kash Pande wrote:
> > On 2019-02-10 5:25 p.m., Kyle Marek wrote:
> >> When I quit in the QEMU monitor, the image stays on the screen, and no
> >> further host dmesg output is produced.
> >>
> > You must do a full
On Mon, 11 Feb 2019 22:06:36 +0100
Tobias Geiger wrote:
> > On Thu, Jan 17, 2019 at 11:03:03PM +0100, Tobias Geiger wrote:
> >> Hello!
> >>
> >> after nearly 5 years of passing through my Radeon HD7800 - it feels old and
> >> slow when used with newer games and 1GB of RAM also doesn't feel
On Sun, 10 Feb 2019 20:01:47 +0100
Björn Ruytenberg wrote:
> Hi Alex,
>
> Thanks for your quick response and the patch!
>
> I am looking into passing through a muxless GeForce GPU to a Windows guest.
>
> Having been through several resources, passing through muxed and desktop
> cards seems