What's the status with this for full virt guests?

I am still seeing systematic time drift in RHEL3 and RHEL4 guests,
which I've been digging into for the past few days. In the course of that
I have been launching guests with boosted priority (both nice -20 and
realtime priority (RR 1)) on a nearly 100% idle host.
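
For reference, a sketch of what such a launch can look like. The qemu
binary name, image path, and guest options below are placeholders, not
taken from this thread; the commands are echoed rather than executed so
the sketch runs without an actual guest image.

```shell
# Placeholder binary and image; adjust for the real guest.
QEMU=qemu-kvm
IMG=/var/lib/images/rhel4.img

# nice -20 variant (commands echoed, not run):
echo nice -n -20 "$QEMU" -hda "$IMG" -smp 4 -m 1024

# SCHED_RR priority 1 variant (chrt needs root for realtime classes):
echo chrt --rr 1 "$QEMU" -hda "$IMG" -smp 4 -m 1024
```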

One host is a PowerEdge 2950 running RHEL5.2 with kvm-70. With the
realtime priority boost I have routinely seen bogomips values in the
guest that do not make sense, e.g.:

ksyms.2:bogomips        : 4639.94
ksyms.2:bogomips        : 4653.05
ksyms.2:bogomips        : 4653.05
ksyms.2:bogomips        : 24.52

and

ksyms.3:bogomips        : 4639.94
ksyms.3:bogomips        : 4653.05
ksyms.3:bogomips        : 16.33
ksyms.3:bogomips        : 12.87
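
The absurdly low values are consistent with the vcpu being descheduled
mid-calibration: wall time keeps passing while the guest's delay loop
makes no progress, so loops-per-jiffy comes out far too small. A minimal
simulation of that effect (the HZ value, loop counts, and stolen-time
amount are illustrative assumptions, not measurements from these guests):

```python
import time

HZ = 100                 # tick rate assumed for this sketch
JIFFY = 1.0 / HZ         # calibration window, one "jiffy"

def calibrate(stolen_s=0.0):
    """Count busy-loop iterations completed in one jiffy.

    stolen_s simulates the vcpu being preempted mid-calibration:
    wall time passes while the "guest" makes no progress."""
    loops = 0
    deadline = time.perf_counter() + JIFFY
    preempted = False
    while time.perf_counter() < deadline:
        loops += 1
        if stolen_s and not preempted and loops == 1000:
            time.sleep(stolen_s)   # host runs something else
            preempted = True
    return loops

lpj = calibrate()                          # undisturbed run
lpj_preempted = calibrate(stolen_s=5 * JIFFY)
# the kernel reports bogomips as lpj * HZ / 500000
print("bogomips, quiet host:", lpj * HZ / 500000)
print("bogomips, preempted :", lpj_preempted * HZ / 500000)
```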


Also, if I launch qemu with the "-no-kvm-pit -tdf" options, the guest
panics with the message Marcelo posted at the start of the thread:

----

Calibrating delay loop... 4653.05 BogoMIPS
CPU: L2 cache: 2048K
Intel machine check reporting enabled on CPU#2.
CPU2: Intel QEMU Virtual CPU version 0.9.1 stepping 03
Booting processor 3/3 eip 2000
Initializing CPU#3
masked ExtINT on CPU#3
ESR value before enabling vector: 00000000
ESR value after enabling vector: 00000000
Calibrating delay loop... 19.60 BogoMIPS
CPU: L2 cache: 2048K
Intel machine check reporting enabled on CPU#3.
CPU3: Intel QEMU Virtual CPU version 0.9.1 stepping 03
Total of 4 processors activated (14031.20 BogoMIPS).
ENABLING IO-APIC IRQs

Setting 4 in the phys_id_present_map
...changing IO-APIC physical APIC ID to 4 ... ok.
..TIMER: vector=0x31 pin1=0 pin2=-1
..MP-BIOS bug: 8254 timer not connected to IO-APIC
...trying to set up timer (IRQ0) through the 8259A ...  failed.
...trying to set up timer as Virtual Wire IRQ... failed.
...trying to set up timer as ExtINT IRQ... failed :(.
Kernel panic: IO-APIC + timer doesn't work! pester [EMAIL PROTECTED]

----

I'm just looking for stable guest times. I'm not planning to keep the
boosted guest priority, just using it to ensure the guest is not
interrupted as I try to understand why the guest systematically drifts.

david


Glauber Costa wrote:
> Glauber Costa wrote:
>> On Mon, Jul 7, 2008 at 4:21 PM, Anthony Liguori <[EMAIL PROTECTED]>
>> wrote:
>>> Marcelo Tosatti wrote:
>>>> On Mon, Jul 07, 2008 at 03:27:16PM -0300, Glauber Costa wrote:
>>>>
>>>>>> I agree.  A paravirt solution solves the problem.
>>>>>>
>>>>> Please, look at the patch I've attached.
>>>>>
>>>>> It does  __delay with host help. This may have the nice effect of not
>>>>> busy waiting for long-enough delays, and may well.
>>>>>
>>>>> It is _completely_ PoC, just to show the idea. It's ugly, broken,
>>>>> obviously have to go through pv-ops, etc.
>>>>>
>>>>> Also, I intend to add a lpj field in the kvm clock memory area. We
>>>>> could do just this later, do both, etc.
>>>>>
>>>>> If we agree this is a viable solution, I'll start working on a patch
>>>>>
>>>> This stops interrupts from being processed during the delay. And also
>>>> there are cases like this recently introduced break:
>>>>
>>>>                /* Allow RT tasks to run */
>>>>                preempt_enable();
>>>>                rep_nop();
>>>>                preempt_disable();
>>>>
>>>> I think it would be better to just pass the lpj value via paravirt and
>>>> let the guest busy-loop as usual.
>>>>
>>> I agree.  VMI and Xen already pass a cpu_khz paravirt value.  Something
>>> similar would probably do the trick.
>>
>> yeah, there is a pv-op for this, so I won't have to mess with the
>> clock interface. I'll draft a patch for it, and send it.
>>
>>> It may be worthwhile having udelay() or spinlocks call into KVM if
>>> they've
>>> been spinning long enough but I think that's a separate discussion.
>>
>> I think it is, but I'd have to back it up with numbers. measurements
>> are on the way.
>>> Regards,
>>>
>>> Anthony Liguori
>>>
>>>
>>
>>
>>
> How about this? RFC only for now
> 