Piotr Jasiukajtis wrote:
> On Mon, Mar 2, 2009 at 3:37 PM, Mark Johnson <[email protected]> wrote:
>>> I tried PV drivers on another host and there is a long way to go to
>>> improve performance of HVM systems (Windows, S10).
>> With respect to metal or to other virtualization
>> platforms?
> With respect to the metal and PV domUs like CentOS 5, SXCE and 2008.11.

Ah, OK.. You need to do some fine tuning with HVM guests,
and we have more performance work to do there too..

You should have some CPUs dedicated to dom0 for
HVM guests. There is a qemu process per HVM guest
which runs in dom0..  You don't want the qemu
process scheduled on a dom0 CPU that is being used by
another guest.

e.g., you can add something like the following to your
xen.gz menu.lst entry (which will restrict dom0 to
CPUs 0 and 1):
   dom0_max_vcpus=2 dom0_vcpus_pin=true
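
For context, on an SXCE/xVM dom0 those options go on the `kernel$` line of the menu.lst entry. The entry below is a sketch only -- the title, pool name, and `dom0_mem` value are illustrative, not from this thread:

```
title Solaris xVM
findroot (pool_rpool,0,a)
kernel$ /boot/$ISADIR/xen.gz dom0_max_vcpus=2 dom0_vcpus_pin=true
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix
module$ /platform/i86pc/$ISADIR/boot_archive
```

With `dom0_vcpus_pin=true`, dom0's two vCPUs are pinned 1:1 to physical CPUs 0 and 1 at boot.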

You should be running as little as possible on
dom0: qemu, I/O, and domain management.

If you're doing MP HVM, you really need to dedicate
CPUs to the HVM guest. Since Xen doesn't currently
have a gang scheduler, things can degrade fast
if the vCPUs are not running at the same time.
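
One way to dedicate CPUs is in the guest's xm config file. A minimal sketch, assuming dom0 is pinned to CPUs 0-1 as above (the CPU numbers here are just an example):

```
# in the HVM guest's config file, used with 'xm create'
vcpus = 2        # MP HVM guest with 2 virtual CPUs
cpus  = "2-3"    # restrict this guest's vCPUs to physical CPUs 2 and 3,
                 # keeping them off the CPUs dom0 is pinned to
```

You can also adjust pinning on a running domain with `xm vcpu-pin <domain> <vcpu> <cpu>`, and check the result with `xm vcpu-list`.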

Even with that, we have some perf work to do on
HVM domains..  We will be spending time on that with
the 3.3 work.


MRJ


> Were the guests MP?  Did you give dom0 some
> dedicated CPU cores?
No, I didn't give any dedicated CPUs.

> Right. Anyway, I found there are some issues with local ZFS pools and
> dom0.
> Sometimes 'zfs snapshot' from dom0 can kill (halt) the machine.
> I guess you are aware of that?
No I wasn't..  What build are the dom0 bits?
SXCE104.

_______________________________________________
xen-discuss mailing list
[email protected]