On 08/18/2009 11:54 AM, Yu Jiang (yujia) wrote:
Thanks, Dor.
More questions inline :-)


Best regards,
Yu

On 08/17/2009 11:04 AM, Yu Jiang (yujia) wrote:
Hi Dor,

Thank you very much for your reply!

Our host OS kernel is based on 2.6.18, so cgroups are not
available to us. Our Intel chip supports constant_tsc.
We will run RHEL 5.4 as the guest OS. Will the guest clock
still be an issue if we use kvm-clock in the guest?

The first release of rhel5.4 guest won't support guest pv
clock. It does support the host side of the clock. Further
updates should enable this feature in the guest.

So, kvm-clock is not available for us. What do you mean by the host
side of the clock? What clocksource should be used for the first release of

I meant that kvmclock does exist on the hypervisor/host side but not in the guest.


rhel5.4 as the guest OS? TSC? If we use TSC as the clocksource, should we run
KVM with the -no-kvm-pit-reinjection flag?

You need both the -no-kvm-pit-reinjection flag and these guest kernel command line parameters:
rhel5.4 64 bit: divider=10 notsc lpj=n (n is the number that the host uses)
rhel5.4 32bit: divider=10 clocksource=acpi_pm lpj=n
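
For example, something along these lines (the image path, memory/CPU sizes
and the lpj value are just placeholders):

  # on the host: find the calibrated lpj value printed at boot
  dmesg | grep -o 'lpj=[0-9]*'

  # start the guest without PIT tick reinjection
  qemu-kvm -m 2048 -smp 2 -hda /images/rhel54.img -no-kvm-pit-reinjection

  # then, in the guest's grub.conf, append to the kernel line:
  #   64-bit guest: divider=10 notsc lpj=<value from the host>
  #   32-bit guest: divider=10 clocksource=acpi_pm lpj=<value from the host>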



And does it mean the guest OS will always have clock issues if the
vcpu thread is not scheduled on time?

The stable tsc is good, but the problem is that it works in
conjunction with the PIT clock. The OS tries to compensate
for lost ticks that it automatically identifies (also on real
hardware). This is why we recommend running RHEL guests with
the -no-kvm-pit-reinjection flag.
My fear is that overly long pauses will drive the OS crazy.
You can test whether it can cope with them.
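
One simple way to test that would be to check the guest's drift against NTP
while the host is loaded, e.g. (the NTP server name is just an example):

  # inside the guest: query the clock offset without stepping the clock
  ntpdate -q pool.ntp.org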

Since we are not able to avoid longer pauses in a virtual machine
environment under heavy load, hopefully it will not cause issues.


Why not use nice instead? The effect is similar, but it might
be a bit smoother.

With nice, we are able to remove the latency of applications on the host OS.
But our customers may go crazy if they find the host CPU usage
reaching 100%.

If you run the guests with a positive nice value, other applications won't be affected.
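
For example (the nice value and the guest command line are illustrative):

  # start the guest at a lower priority than the host applications
  nice -n 10 qemu-kvm -m 2048 -hda /images/rhel54.img

  # or lower the priority of an already running guest
  renice +10 -p $(pgrep -f qemu-kvm)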




Thanks,
Yu




On 08/17/2009 08:09 AM, Yu Jiang (yujia) wrote:
Hi KVM experts,

Our use case needs to run KVM and an application on the host
together. To reserve some CPU resources for the application, we
want to limit the CPU usage of KVM. Without a KVM CPU usage
limit, the idle CPU of the host OS drops to 0% at peak times.

I have searched this topic on the internet, but didn't find
many comments.

One possible solution could be managing the KVM process as a regular
process on the host OS, and using a tool like
http://cpulimit.sourceforge.net/ to limit the maximum CPU usage of the
VM. Basically, the cpulimit tool uses SIGSTOP and SIGCONT signals to
stop and resume the execution of the KVM process. It works fine for us
at the moment. But I feel there may be some risk in doing this, because
the signals pause the whole KVM process (not only the vcpu threads).
Do you think it's safe to use a cpulimit-style tool to
SIGSTOP/SIGCONT kvm?
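
For reference, we run it roughly like this (the limit and the pid lookup
are illustrative):

  # cap the qemu-kvm process at about 50% of one CPU
  cpulimit --pid $(pgrep -o -f qemu-kvm) --limit 50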

Another possible solution would be:
Enhance QEMU user space to monitor its own CPU usage, and use an
existing mechanism (pause_all_vcpus?) to pause the vcpu threads of
KVM when KVM reaches the CPU usage limit. Is this solution possible?

A management daemon can control qemu using the monitor and
stop/cont it in these cases.
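
For example, such a daemon could do something like this (the socket path
and the guest command line are just placeholders):

  # expose the monitor on a unix socket when starting the guest
  qemu-kvm -m 2048 -hda /images/rhel54.img \
      -monitor unix:/var/run/guest1.mon,server,nowait

  # pause and resume the guest through the monitor
  echo stop | socat - UNIX-CONNECT:/var/run/guest1.mon
  echo cont | socat - UNIX-CONNECT:/var/run/guest1.mon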

The main problem with the two solutions above is that the
guest clock
might drift. Moreover, you increase the latency for the guest
OS/applications.

You can use the 'nice' command to prioritize the host applications.
For newer kernels you should use cgroups, which solve this specific
issue exactly.
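
On a host kernel with cgroups, a sketch of that might look like this
(the mount point, share value, and pid lookup are illustrative):

  # mount the cpu controller and create a group for the guests
  mkdir -p /cgroup
  mount -t cgroup -o cpu none /cgroup
  mkdir /cgroup/kvm
  # give the group a quarter of the default 1024 CPU shares
  echo 256 > /cgroup/kvm/cpu.shares
  # move a running qemu-kvm process into the group (one pid per write)
  echo $(pgrep -o -f qemu-kvm) > /cgroup/kvm/tasks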


Any ideas?


Thanks,
Yu




