> Some screens attached (hopefully not too heavy).
> Didn't have time to do better. Select your favourite ones.
>
> I upgraded Linux to newer version (Ubuntu 16.10, kernel 4.8),
> and it broke the driver. OpenCL does not work at all anymore.
> The screens were made on newer system -- nothing seeme
> When `glxgears` is run in fullscreen mode, what's on screen depends
> on each run. Framerate varies from run to run, is mostly stable within
> one run but can change abruptly by 100's of FPS. When the screen is
> blank, the framerate is slowest (750~1500 FPS). When only parts of
> gears are rend
Hi,
Does bhyve not execute peripheral cards' option ROMs?
Not yet.
I guess it doesn't. This could explain a lot of the strange
behaviour seen when running in a VM.
Yes.
How does UEFI work in this regard? My guess is that cards
have to explicitly support the new boot method (UEFI)
Hi,
That is extremely likely. bhyve itself doesn't have a BIOS, though
bhyve/UEFI could be modified to handle option ROMs (see
http://awilliam.github.io/presentations/KVM-Forum-2014/#/)
Hm, interesting. I wonder if a card that's not designed for use
with UEFI is destined not to work well/at
> I suspect this is a failure to run the BIOS code that
> enables the secondary power connector so you can come
> out of slot only power mode.
Well, that Quadro does not have a power connector, but
I imagine card BIOS routines would be similar between
all cards in a family, including those that r
...
> > > -Performance State : P0
> > > +Performance State : P8
> >
> > Not sure what's happening here.
>
> Driver not kicking the card's BIOS into the right mode
> to switch to dynamic power state selection?
I suspect this is a failure to run the BIOS code that
enables the secondary power connector so you can come
out of slot only power mode.
> -- VDPAU works, but I suspect it's not using the GPU [3][4];
> I haven't figured out a way to force the use of the GPU. Also,
> the main window with text looks OK most of the time (when
> doing the video test and in the end, in particular), but
> shows a smaller black rectangle i
> > First, `nvidia-smi -q` output diff [0] is interesting. It suggests
> > that the card may be in some incompletely initialized state: notice
> > the "Unknown Error" instead of real UUID, and the P8 power state.
> > Could it be that the driver doesn't put the card's BIOS in the right
> > state?
Good news, everyone!
I tried an AMD card, and it is almost working. I have a lot of logs
and info, but I will try to keep the length of this message down.
There was no need to do anything special to get the card to work,
other than figuring out how to deal with Linux, setting up drivers
and
Hi,
BTW, is it [generally] safe to decrease the BAR base address further?
> My workstation has a CPU with just 36 address bits...
Yes. The only potential conflict is with the top of guest RAM, and 36
bits is a lot of RAM :)
later,
Peter.
Hi,
First, `nvidia-smi -q` output diff [0] is interesting. It suggests that
the card may be in some incompletely initialized state: notice the
"Unknown Error" instead of real UUID, and the P8 power state. Could it
be that the driver doesn't put the card's BIOS in the right state?
That is
...
>
> Incidentally, could someone put a note about that hardcoded BAR base
> on the bhyve PCI passthrough page [0] if it won't be fixed soon, so
> many others can play with VGA passthrough meanwhile?
I am working with Michael Dexter to get changes made
to this wiki page to reflect your work here
> This gives me the idea to try a different driver version in Linux...
Tried the same driver version in Linux as in FreeBSD. The driver seems
to talk to the card now, but not sure whether I can call this progress:
[0.536988] PCI host bridge to bus 0000:00
[0.537291] pci_bus 0000:00: root
> That's a different issue - it's unlikely, if not impossible, to
> configure bhyve with enough RAM to hit 37 bits worth where that would
> become a problem. No need to worry about that.
Well, there may be peripheral cards that have fewer bits... Anyway, I
see what you mean: memory manager can
> > Removing another signature of detecting virtualization and increasing
> > compatibility would be negligible gain? Just asking...
> I don't think we are going to try and defeat the NVidia virtualization
> checks, and I can probably assure you that they would patch them as
> fast as we bypasse
> I hope you keep it up or at least figure out what the driver is doing.
Not in my plans at the moment. I prefer AMD GPUs over nV for OpenCL.
nVidia did (and does) serve me well for the last 10 years with their
excellent FreeBSD graphics driver: I had very few problems with it;
it's stable and wel
> Xorg log bits [2] show that X is up. But the monitor stays in sleep
BTW, this is what happens to Xorg:
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 638 100.0 0.8 50481584 50612 u0 R< 12:21 4:33.05 |--
/usr/local/bin/X :0 -auth /root/.serverauth.624 (X
I hope you keep it up or at least figure out what the driver is doing. If
they haven't explicitly put in the license terms that virtualization is
forbidden for consumer cards, there's nothing wrong with hot patching the
driver ... assuming that they don't do things like Skype does where it
repeated
I had a bit more play with nVidia and FreeBSD guest.
>
> > Hi,
> >
> > >> The problem appears to be in the area of assigning memory-mapped
> > >> I/O ranges by bhyve for the VGA card to a region outside of the
> > >> CPU's addressable space; i.e., bhyve does not check CPUID's
> > >> 0x80000008 AL value (0x27 for my CPU, which is 39 bits -- while
>
Hi,
That's correct - it's a bug in bhyve.
Baking a proper fix will be complicated by the fact that PCIe
cards themselves may have limitations. For example, most nVidia GPUs
have 40-bit DMA addressing capability, some 39, and a few (still
modern) ones -- just 37 [ref. nVidia "README" in
> Is the VM checking documented in the driver notes somewhere? I have a
It's not in their driver's "README" file.
> Titan X that I need to run CUDA on and would be much happier if I
> didn't have to actually switch back and forth between FreeBSD and
> Ubuntu on my desktop. Are we now fairly cert
> Hi,
>
> >> The problem appears to be in the area of assigning memory-mapped
> >> I/O ranges by bhyve for the VGA card to a region outside of the
> >> CPU's addressable space; i.e., bhyve does not check CPUID's
> >> 0x80000008 AL value (0x27 for my CPU, which is 39 bits -- while
> >> bhyve assig
> IIRC the 367.44 version of the nvidia drivers do NOT support the
> Quadro 2000, you need to be using the 340.xx version of them. I
> ran into problems on native hardware.
I pulled the Quadro 2000 out of my workstation [and put the 600 in],
which is running fine with the latest driver from port
> As far as I can tell it's the Hypervisor extension flags list. The lack
> of these extensions/optimisations might explain why your FreeBSD VM
> runs slow
The guest isn't slow, actually -- just the `nvidia-smi` tool was
much slower than normal to produce output. CPU speed in the guest
is less
Hi,
There doesn't seem to be support for CPUID 0x40000001 in bhyve either.
What is it supposed to do?
As far as I can tell it's the Hypervisor extension flags list. The lack
of these extensions/optimisations might explain why your FreeBSD VM runs
slow but their presence also causes the nVidia
Is the VM checking documented in the driver notes somewhere? I have a Titan
X that I need to run CUDA on and would be much happier if I didn't have to
actually switch back and forth between FreeBSD and Ubuntu on my desktop.
Are we now fairly certain that this won't work? (Yet another reason to go
w
On 11/01/2017 02:01, sor...@cydem.org wrote:
Dom wrote:
There doesn't seem to be support for CPUID 0x40000001 in bhyve either.
What is it supposed to do?
As far as I can tell it's the Hypervisor extension flags list. The lack
of these extensions/optimisations might explain why your FreeBSD VM
IIRC the 367.44 version of the nvidia drivers do NOT support the
Quadro 2000, you need to be using the 340.xx version of them. I
ran into problems on native hardware.
Also before you attempt to get VGA passthrough working it is best
to make sure you can run native, have you tried running your gue
Hi,
The problem appears to be in the area of assigning memory-mapped
I/O ranges by bhyve for the VGA card to a region outside of the
CPU's addressable space; i.e., bhyve does not check CPUID's
0x80000008 AL value (0x27 for my CPU, which is 39 bits -- while
bhyve assigns 0xd0 & above for
> The problem appears to be in the area of assigning memory-mapped
> I/O ranges by bhyve for the VGA card to a region outside of the
> CPU's addressable space; i.e., bhyve does not check CPUID's
> 0x80000008 AL value (0x27 for my CPU, which is 39 bits -- while
> bhyve assigns 0xd0 & above
> Found my original attempt by modifying /usr/src/sys/amd64/vmm/x86.c
> Unified diff follows, but this didn't work for me.
> ("bhyve_id[]" commented out to prevent compiler complaints)
Who knows what sort of trickery nVidia's driver is up to besides
CPUID when determining the presence of virtuali
Found my original attempt by modifying /usr/src/sys/amd64/vmm/x86.c
Unified diff follows, but this didn't work for me.
("bhyve_id[]" commented out to prevent compiler complaints)
There doesn't seem to be support for CPUID 0x40000001 in bhyve either.
--- x86.c.orig 2016-09-11 14:40:22.410462000 +0000
With QEMU, they have the "kvm=off" option which hides hypervisor info
from the guest.
See: https://www.redhat.com/archives/libvir-list/2014-August/msg00512.html
I did try to replicate this a while back but didn't have much success -
maybe I missed a flag?
The QEMU diff seems relatively small, s
Howdy, virtualization zealots!
This is in reply to maillist thread [0].
It so happens that I have to get GPU-accelerated OpenCL working on
my machine, so I had a play with bhyve & PCI-e passthrough for VGA.
I was using nVidia Quadro 600 (GF108) for testing (planning to use
AMD/ATI for OpenC
It looks like there may not be an issue with MSI after all.
The nvidia driver is issued an IRQ when first used, not at boot time.
If I run the CUDA "deviceQuery" sample then this appears in dmesg:
[ 67.207929] nvidia 0000:00:06.0: irq 29 for MSI/MSI-X
[ 67.646207] NVRM: RmInitAdapter failed!
Hi Peter,
Thanks for getting back to me. Here's the info you requested:
[0.163085] acpi PNP0A03:00: host bridge window
[0xd0-0xd0100f] (ignored, not CPU addressable)
That one is most likely a bug in bhyve, where the space used for 64-bit
BAR placement isn't tested against the
Hi Dom,
Bhyve's ACPI table produces this error linux-side regardless of
"pci=" setting:
[0.163085] acpi PNP0A03:00: host bridge window
[0xd0-0xd0100f] (ignored, not CPU addressable)
That one is most likely a bug in bhyve, where the space used for 64-bit
BAR placement isn't te
Hello,
Setup:
nvidia GTX960 in PCIe slot
intel i7-4790K CPU
FreeBSD 11-RC2 host
CentOS 7 guest with kernel 3.10.0-327.28.3.el7.x86_64
Using vm-bhyve port
I've hit two issues:
1. BAR allocation
Workaround (for me) is adding "pci=nocrs" to linux guest's kernel
command line.
Without "pci=nocrs"