1. Can anyone tell me what the cause might be? What may be happening?
2. Do you know if there is currently any work investigating this
problem? Or anything related?
3. Is Gang Scheduling or Coscheduling implemented in FreeBSD?
4. Do you know of any other solution to this kind of problem?
5. Can you recommend any papers/videos/links in any way related to this?

I answered these in the FreeBSD forums post, but reproduced again here for the list:

1. The main issue is 'lock holder preemption', where a vCPU that is holding a spinlock has been preempted by the host scheduler, causing other vCPUs that are trying to acquire that lock to spin for full quanta.
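To make the failure mode concrete, here is a minimal C11 sketch of the kind of test-and-set spinlock a guest kernel uses (illustrative only, not FreeBSD's actual spinlock implementation):

```c
#include <stdatomic.h>

/* Minimal test-and-set spinlock.  If the vCPU thread holding the
 * lock is descheduled by the host, every other vCPU that calls
 * spin_lock() burns its entire quantum in this loop, making no
 * progress until the holder runs again. */
typedef struct {
    atomic_flag locked;
} spinlock_t;

static void
spin_lock(spinlock_t *l)
{
    while (atomic_flag_test_and_set_explicit(&l->locked,
        memory_order_acquire))
        ;   /* busy-wait; on x86 a PAUSE hint would go here */
}

static void
spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

On bare metal the wait is bounded by the critical section; under virtualization it can stretch to whole scheduling quanta.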

Booting is a variant of this for FreeBSD, since each AP spins on a memory location waiting for the BSP to start it up.
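A sketch of that startup handshake, with illustrative names (not the actual FreeBSD MP startup code): the AP busy-waits on a shared flag, so if the host deschedules the BSP's thread before it sets the flag, the AP's vCPU spins for its full quantum.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Shared memory location the APs poll during SMP startup. */
static atomic_bool bsp_ready = false;

/* Runs on the bootstrap processor (BSP). */
static void
bsp_release_aps(void)
{
    atomic_store_explicit(&bsp_ready, true, memory_order_release);
}

/* Runs on each application processor (AP).  If the BSP's vCPU is
 * preempted by the host, this loop spins uselessly -- the boot-time
 * analogue of lock holder preemption. */
static void
ap_wait_for_bsp(void)
{
    while (!atomic_load_explicit(&bsp_ready, memory_order_acquire))
        ;   /* busy-wait on the shared memory location */
}
```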

2. There's some minor investigation going on.

3. No.

4. I don't know that 'classic' gang scheduling is the answer (see 5). What has been thought of for bhyve at least is to a) have the concept of vCPU 'groups' in the scheduler, b) provide metrics to assist the scheduler in trying to spread out threads associated with a vCPU group so they don't end up on the same physical CPU (avoidance of lock-holder preemption), and c) implement pause-loop exits (see the Intel SDM, 24.6.13) in the hypervisor and provide that information to the scheduler so it can give a temporary priority boost to vCPUs that have been preempted but aren't currently running.
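Point (c) might look something like the following sketch. All names here are hypothetical, not a bhyve or FreeBSD scheduler API: on a pause-loop exit from a spinning vCPU, the hypervisor would pick a sibling in the same vCPU group that is runnable but preempted (the likely lock holder) as the boost target.

```c
#include <stddef.h>

enum vcpu_state { VCPU_RUNNING, VCPU_RUNNABLE, VCPU_BLOCKED };

struct vcpu {
    int id;
    enum vcpu_state state;
};

/* Hypothetical selection logic: on a pause-loop exit taken by
 * 'spinner_id', return a vCPU in the same group that is runnable
 * but not currently on a physical CPU -- the best guess for a
 * preempted lock holder deserving a temporary priority boost.
 * Returns -1 if no sibling is preempted (the spinner could simply
 * yield in that case). */
static int
pick_boost_target(const struct vcpu *group, size_t n, int spinner_id)
{
    for (size_t i = 0; i < n; i++) {
        if (group[i].id != spinner_id &&
            group[i].state == VCPU_RUNNABLE)
            return group[i].id;
    }
    return -1;
}
```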

5. The classic reference on this is VMware's scheduler paper: www.vmware.com/files/pdf/techpaper/VMware-vSphere-CPU-Sched-Perf.pdf



freebsd-virtualization@freebsd.org mailing list