[Bug 222996] FreeBSD 11.1 on Hyper-V with PCI Express Pass Through

2017-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222996

Mark Linimon  changed:

   What|Removed |Added

   Assignee|freebsd-standards@FreeBSD.o |freebsd-virtualization@Free
   |rg  |BSD.org
  Component|standards   |bin

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
freebsd-virtualization@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
"freebsd-virtualization-unsubscr...@freebsd.org"


[Bug 222996] FreeBSD 11.1 on Hyper-V with PCI Express Pass Through

2017-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222996

Sepherosa Ziehau  changed:

   What|Removed |Added

 CC||sepher...@gmail.com

--- Comment #1 from Sepherosa Ziehau  ---
Do you mean pass-through or SR-IOV?

For SR-IOV, we have gotten mlx4_en (ConnectX-3) VFs to work; that is also what is
used in Azure. The last time we tested, ixgbe's VF did not work.

PCI pass-through requires some extra per-VM configuration through PowerShell; I
will check w/ Dexuan next Monday.
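[For context, the per-VM setup being referred to is Hyper-V Discrete Device
Assignment (DDA) on a Windows Server 2016 host. A hedged sketch of the usual
steps follows; the VM name and PCI location path are placeholders and must be
taken from your own host.]

```powershell
# Placeholders: substitute your VM name and the device's location path
# (readable via Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).
$vmName = "fbsd11"
$loc = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# DDA requires the VM to be turned off hard rather than saved.
Set-VM -VMName $vmName -AutomaticStopAction TurnOff

# Detach the device from the host and assign it to the VM.
Dismount-VMHostAssignableDevice -Force -LocationPath $loc
Add-VMAssignableDevice -LocationPath $loc -VMName $vmName

# To return the device to the host later:
# Remove-VMAssignableDevice -LocationPath $loc -VMName $vmName
# Mount-VMHostAssignableDevice -LocationPath $loc
```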


[Bug 222916] [bhyve] Debian guest kernel panics with message "CPU#0 stuck for Xs!"

2017-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222916

--- Comment #6 from Peter Grehan  ---
>Or does the interaction between bhyve and the host scheduler somehow
>result in the virtual cpus being set aside

 Yes, though:

> for tens of seconds?

 The error message from Linux is a bit misleading. There is a low-priority
kernel thread that tries to run every 5 seconds and then sleeps. If it hasn't
been able to run for an extended amount of time, for example due to high
interrupt activity, higher priority threads running, or spinlocks being held,
the error message will be displayed.

 What I believe you are seeing is a classic hypervisor problem, not specific to
bhyve, known as "lock-holder preemption" where a vCPU holding a spin-lock is
preempted by the host, and other vCPUs that are running then spin attempting to
acquire that lock which can't be released. A search will show the large amount
of literature on this issue :)
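 As a toy illustration (a hypothetical Python sketch, not bhyve code): when the
thread holding a spinlock is descheduled, every waiter burns CPU for the entire
preemption interval, which is exactly the time the guest sees as a "stuck" CPU:

```python
import threading
import time

# Toy model of lock-holder preemption: "vCPU 0" takes a spinlock and is
# then preempted by the host (simulated with time.sleep), while "vCPU 1"
# busy-waits on the same lock for the whole preemption interval.

lock_held = True        # the spinlock, initially held by vCPU 0
spin_iterations = 0

def preempted_holder(preempt_s):
    """vCPU 0: holds the lock, is descheduled, releases only afterwards."""
    global lock_held
    time.sleep(preempt_s)   # host preempts the lock holder
    lock_held = False       # release happens only after rescheduling

def spinning_waiter():
    """vCPU 1: spins until the lock is released -- wasted guest CPU time."""
    global spin_iterations
    while lock_held:
        spin_iterations += 1

holder = threading.Thread(target=preempted_holder, args=(0.2,))
waiter = threading.Thread(target=spinning_waiter)
start = time.monotonic()
holder.start(); waiter.start()
holder.join(); waiter.join()
elapsed = time.monotonic() - start
print(f"waiter spun {spin_iterations} times over {elapsed:.2f}s")
```

The waiter makes no progress for the full 0.2 s "preemption"; scale the sleep up
and you get the tens-of-seconds stalls the Linux watchdog complains about.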

 Maybe the best reading on this is the ESXi scheduler paper:
   
http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vsphere-cpu-sched-performance-white-paper.pdf

 There has been some talk of putting knowledge of vCPUs in the FreeBSD
scheduler to allow some form of gang scheduling, but nothing has come of that
so far.

 As to your point; it's more than just fairness that the hypervisor scheduler
has to provide - heuristics about guest o/s behaviour are also needed.


[Bug 222996] FreeBSD 11.1 on Hyper-V with PCI Express Pass Through

2017-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222996

--- Comment #2 from Dmitry  ---
No, I don't mean SR-IOV.
I already have the extra PowerShell configuration for PCI pass-through in place,
and it works for other OSes in a Generation 2 Hyper-V VM, but not for FreeBSD.
I reconfigured the VM as a legacy Generation 1 VM and PCI pass-through works
great in FreeBSD 11.1, but in the new UEFI Generation 2 VM pass-through didn't
work.


[Bug 222916] [bhyve] Debian guest kernel panics with message "CPU#0 stuck for Xs!"

2017-10-14 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222916

--- Comment #5 from kari...@gmail.com ---
Thanks for that tip. I reduced the ARC size with sysctl and confirmed it to be
20 GB with zfs-info.
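[For reference, capping the ARC on FreeBSD roughly looks like the following; a
sketch only, with the 20 GB value matching the comment above. Whether
vfs.zfs.arc_max is writable at runtime depends on the FreeBSD version.]

```shell
# Clamp the ZFS ARC to 20 GB at runtime (value in bytes).
sysctl vfs.zfs.arc_max=21474836480

# Make the cap persistent across reboots as a loader tunable.
echo 'vfs.zfs.arc_max="21474836480"' >> /boot/loader.conf

# Verify the current ARC size and limit.
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
```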

Thinking back to the 4 guest / 4 host CPUs: let's say the collection of guests
consumes 4 CPUs and four tasks on the host consume 4 CPUs (totaling a load
average of 8). Does the host system scheduler not shuffle tasks around like it
would if I were running 8 CPU-intensive processes on the host? Or does the
interaction between bhyve and the host scheduler somehow result in the virtual
CPUs being set aside for tens of seconds?

I guess I'm just trying to understand. I would think one of the main motivations
for using a hypervisor is exactly over-subscribing CPU cores, since you may have
guests with "bursty" load behavior; on average your total guests+host load is
less than the number of CPUs, but surely you can divide the CPU time in a "fair"
manner when the system is overloaded.

Memory, I would think, is a little trickier; there it makes sense to make sure
the host system's consumption plus the guests' consumption never exceeds the
total host memory.

Anyhow, I'm just trying to make sense of this; there doesn't seem to be much
information available online on these topics, or perhaps I'm looking in all the
wrong places.

Thank you,
Kari
