Sukanto Ghosh wrote:
1. Is the maximum number of vCPUs in a particular guest limited to the
number of host CPUs? (I guess not)

No.

2. Is there any attempt made to co-schedule all of a guest's vCPUs, in
order to avoid the problem of a vCPU being preempted while holding a spinlock?

Right now, no. There is some discussion of gang scheduling within Linux but I don't know that it's going anywhere. Recently, Jeremy Fitzhardinge posted some paravirtual spinlock patches for Linux that would at least allow for spinlock yielding.
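
To illustrate the yielding idea, here is a rough sketch in the spirit of those patches (not the patches themselves); hypercall_yield_to() is a hypothetical hook, stubbed with sched_yield() purely so the example compiles:

#include <sched.h>

#define SPIN_THRESHOLD 1024

struct pv_spinlock {
    volatile int locked;     /* 0 = free, 1 = held */
    int holder_vcpu;         /* vcpu that last took the lock */
};

/* Hypothetical hook: in a real implementation this would be a
 * hypercall asking the host to run the lock holder. */
static void hypercall_yield_to(int vcpu)
{
    (void)vcpu;
    sched_yield();
}

static void pv_spin_lock(struct pv_spinlock *lock, int my_vcpu)
{
    for (;;) {
        /* Spin for a bounded number of iterations first. */
        for (int spins = 0; spins < SPIN_THRESHOLD; spins++) {
            if (__sync_lock_test_and_set(&lock->locked, 1) == 0) {
                lock->holder_vcpu = my_vcpu;
                return;                       /* got the lock */
            }
            __asm__ __volatile__("pause");    /* x86 spin hint */
        }
        /* Spun too long: the holder vcpu is probably not running,
         * so donate the rest of our timeslice to it. */
        hypercall_yield_to(lock->holder_vcpu);
    }
}

static void pv_spin_unlock(struct pv_spinlock *lock)
{
    __sync_lock_release(&lock->locked);
}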

3. Are there any means to do content-based page sharing between guests,
as VMware does?

Yes, KSM (Kernel Samepage Merging).
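
For context: in the mainline design, KSM scans only memory that userspace has opted in with madvise(MADV_MERGEABLE), and only while /sys/kernel/mm/ksm/run is set to 1. A minimal userspace sketch (the anonymous mapping here is just a stand-in for guest RAM):

#define _GNU_SOURCE          /* for MADV_MERGEABLE */
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t ram_size = 128 * 1024 * 1024;          /* pretend guest RAM */
    void *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Tell KSM this region is a candidate for content-based merging. */
    if (madvise(ram, ram_size, MADV_MERGEABLE) != 0)
        perror("madvise(MADV_MERGEABLE)");

    /* ... run the guest; identical pages across guests get shared ... */
    return 0;
}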

4. Does KVM make any attempt to avoid TLB flushes on VM exits and
VM entries, the way Xen makes a memory hole?
(My guess is that it doesn't need to, as KVM is mapped into the guest's
address space and the pages are protected with the help of the Linux VM.
Am I right?)

A TLB flush is mandatory when using hardware virtualization support (even with Xen; you are thinking of Xen PV). However, both Intel and AMD now support hardware TLB tagging (VPID and ASID, respectively), which reduces the pain of this TLB flush.

Slightly different ones,

5. How useful is a balloon driver for KVM, which doesn't make any
hard partition of available physical memory between the guests?
Shouldn't the Linux VM's knowledge be superior in this case to the
guest VM's?

Ballooning can be very useful when doing very large changes to the amount of guest memory.
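
Conceptually, inflating the balloon means the guest grabs pages it promises not to touch and reports them to the host, which can then reclaim the backing memory. A toy sketch of that guest-side idea (not the real virtio-balloon driver; tell_host_page_is_free() is a hypothetical placeholder for the virtqueue):

#include <stdlib.h>
#include <stdint.h>

#define PAGE_SIZE 4096

static void tell_host_page_is_free(void *page)
{
    /* In a real driver this would push the page's PFN onto the
     * balloon virtqueue; here it is just a placeholder. */
    (void)page;
}

/* Inflate the balloon by nr_pages: each page grabbed here is memory
 * the guest hands back to the host until the balloon is deflated. */
static void **balloon_inflate(size_t nr_pages)
{
    void **held = calloc(nr_pages, sizeof(*held));
    for (size_t i = 0; held && i < nr_pages; i++) {
        if (posix_memalign(&held[i], PAGE_SIZE, PAGE_SIZE) != 0)
            break;
        tell_host_page_is_free(held[i]);
    }
    return held;   /* keep the array around for later deflation */
}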

6. What is coalesced MMIO?

There are a number of situations where a large number of MMIO writes occur back-to-back. Think VGA planar updates, writes of a network packet to on-chip memory, etc. Instead of passing each of these writes to userspace individually, we buffer a number of them in the kernel and hand the whole batch down to QEMU at once. The real win is that N-1 of the buffered writes become light-weight exits instead of heavy-weight exits.
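
A simplified sketch of the mechanism (the struct and field names are illustrative, not the exact KVM ABI): the kernel appends each write to a ring it shares with userspace, and QEMU drains the whole batch on its next exit:

#include <stdint.h>

#define RING_SLOTS 64

struct mmio_write {
    uint64_t phys_addr;   /* guest-physical address written */
    uint32_t len;         /* 1, 2, 4 or 8 bytes */
    uint8_t  data[8];     /* the value the guest stored */
};

struct mmio_ring {
    volatile uint32_t first;            /* consumer index (userspace) */
    volatile uint32_t last;             /* producer index (kernel)    */
    struct mmio_write slots[RING_SLOTS];
};

/* Called by userspace on its next exit: replay everything the kernel
 * buffered since the last time we looked, in order. */
static void drain_coalesced_mmio(struct mmio_ring *ring,
                                 void (*handle)(struct mmio_write *))
{
    while (ring->first != ring->last) {
        handle(&ring->slots[ring->first]);
        ring->first = (ring->first + 1) % RING_SLOTS;
    }
}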

Regards,

Anthony Liguori


