Yes, it's a problem, and it's not just bhyve. The problem comes from things like spinlocks. Unlike normal userland locks, when two CPUs contend on a spinlock, both are running at the same time, and when two vCPUs are contending on a spinlock, the host has no idea how to prioritize them. Normally that's not a problem, because physical CPUs are always supposed to be able to run. But when you overcommit vCPUs, some of them must be descheduled at any given time. If the vCPU holding a spinlock is one of the descheduled ones, the running vCPUs that contend on it just spin, burning their timeslices waiting for a holder that isn't executing, so the lock can stay contended for a long time. The host's scheduler simply isn't able to fix that, because it can't tell which vCPU holds the lock. The problem is even worse when you're using hyperthreading (which I am), because those eight logical cores are really only four physical cores, and spinning on a spinlock doesn't generate enough pipeline stalls to make the core yield resources to the sibling hyperthread. So it's probably best to stick with the n - 1 rule. Overcommitting is OK if all guests are single-cored, because a uniprocessor kernel never actually spins on its spinlocks. But my guests aren't all single-cored.
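To make that concrete, here is a minimal spinlock sketch in C (C11 atomics plus a GCC/Clang x86 builtin; purely illustrative, not bhyve's or any kernel's actual code). The point is that the waiter never blocks: to the host scheduler a spinning vCPU is just a thread at 100% CPU, indistinguishable from one doing useful work.

    #include <stdatomic.h>

    /* Initialize with: spinlock_t lock = { ATOMIC_FLAG_INIT }; */
    typedef struct {
        atomic_flag locked;
    } spinlock_t;

    static void spin_lock(spinlock_t *l)
    {
        /* Loop at full speed until the holder releases the lock.  If
         * the holder's vCPU has been descheduled by the host, this
         * loop burns the waiter's entire timeslice for nothing. */
        while (atomic_flag_test_and_set_explicit(&l->locked,
                                                 memory_order_acquire)) {
            /* PAUSE is only a hint: it relaxes the pipeline a bit in
             * favor of the sibling hyperthread, but it does not
             * block, so the host still sees a fully busy vCPU. */
            __builtin_ia32_pause();
        }
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

The PAUSE hint softens the hyperthreading side of the problem a little, but it does nothing about a preempted lock holder; only the hypervisor is in a position to fix that, and it can't see who holds the lock.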

I've just checked the KVM handbooks, and although they do recommend keeping guests single-vCPU whenever possible, they also say that a vCPU-to-physical-CPU ratio of up to 3:1 should not mean a significant performance penalty.

We've just run out of (physical) cores on one of our bhyve hypervisors... 32 logical cores (counting HT), 16 VMs... and now I need to get another machine.

My question is -- does KVM (on Linux) handle CPU overprovisioning with multiple-vCPU guests better than the current bhyve implementation?

If yes, how can bhyve's implementation be improved? Can we (co)sponsor that development somehow? :)

Jakub


--
regards

www.cgi.cz