On Fri, Feb 22, 2008 at 06:16:16PM +0200, Avi Kivity wrote:
> > 2. The critical one. Under normal conditions, VCPU0 is migrated much more
> > frequently than the other VCPUs, and the patch adds an extra "delta"
> > (always negative if the host TSC is stable) to TSC_OFFSET on each
> > migration. So after the guest has been running for a while, VCPU0 becomes
> > much slower than the others (in my test, VCPU0 was migrated about twice
> > as often as the others, and easily ended up more than 100k cycles
> > behind). In the guest kernel the TSC clocksource state is global, so
> > "cycle_last" may be written with VCPU1's TSC value and then compared
> > against VCPU0's. Because VCPU0's TSC_OFFSET is smaller than VCPU1's, the
> > "cycle_last" taken from VCPU1 can be larger than the current TSC value
> > read on VCPU0 at the next tick. Then "u64 offset = clocksource_read() -
> > cycle_last" wraps around and causes the "infinite" loop. This also
> > explains why Marcelo's patch doesn't help - it only reduces the rate at
> > which the gap grows.
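
To make the failure mode concrete: because the subtraction is unsigned, a
"negative" delta wraps to a value close to 2^64, which the timer code then
tries to account for. A minimal standalone sketch of just the arithmetic
(the values are made up; this is not the 2.6.18 code):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t u64;

    int main(void)
    {
            /* cycle_last was last written on VCPU1, whose TSC runs ahead */
            u64 cycle_last = 1000200;   /* hypothetical VCPU1 reading */
            u64 now        = 1000000;   /* next tick, read on the slower VCPU0 */

            /* unsigned subtraction: the "negative" delta wraps to ~2^64 */
            u64 offset = now - cycle_last;

            printf("offset = %llu\n", (unsigned long long)offset);
            return 0;
    }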

Another source of problems in this area is that the TSC_OFFSET is
initialized to represent zero at different times for VCPU0 (at boot) and
the remaining ones (at APIC_DM_INIT).
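
One way to remove that particular skew would be to derive every vcpu's
initial offset from a single host TSC reference sampled once at VM creation,
instead of from whatever the host TSC happens to be when each vcpu is set
up. A rough sketch of the idea; the struct and field names below (ref_tsc,
tsc_offset) are hypothetical, not current kvm code:

    #include <stdint.h>
    typedef uint64_t u64;

    struct vm_sketch   { u64 ref_tsc;    };  /* host TSC sampled once at VM creation */
    struct vcpu_sketch { u64 tsc_offset; };

    static void vcpu_init_tsc_offset(struct vcpu_sketch *vcpu,
                                     const struct vm_sketch *vm)
    {
            /*
             * guest_tsc = host_tsc + TSC_OFFSET.  Using the same reference
             * for every vcpu, whether it comes up at boot or later at
             * APIC_DM_INIT, makes them all start from guest TSC == 0 at VM
             * creation time (assuming the host TSCs are synchronized).
             */
            vcpu->tsc_offset = (u64)0 - vm->ref_tsc;
    }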

> > The freezing didn't happen when using the userspace IOAPIC, simply
> > because the qemu APIC doesn't implement real LOWPRI (or round-robin)
> > delivery to choose a CPU. It picks VCPU0 every time if possible, so CPU1
> > in the guest never updates cycle_last. :(
> >
> > This freezing only occurred on RHEL5/5.1 PAE (kernel 2.6.18), because
> > they set IO-APIC IRQ0's dest_mask to 0x3 (with 2 vcpus) and dest_mode to
> > LOWEST_PRIORITY, so other vcpus had a chance to modify "cycle_last". In
> > contrast, RHEL5/5.1 32e sets IRQ0's dest_mode to FIXED, targeting CPU0,
> > so it doesn't have this problem; neither does RHEL4 (kernel 2.6.9).
> >
> > I don't know whether the patch is still needed now, since it was posted
> > long ago (I don't know which issue it solved). I'd like to post a revert
> > patch if necessary.
> >   
> 
> I believe the patch is still necessary, since we still need to guarantee 
> that a vcpu's tsc is monotonic.  I think there are three issues to be 
> addressed:
> 
> 1. The majority of Intel machines don't need the offset adjustment since 
> they already have a constant-rate tsc that is synchronized on all cpus.  
> I think this is indicated by X86_FEATURE_CONSTANT_TSC (though I'm not 
> 100% certain whether it means that the rate is the same on all cpus; 
> Thomas, can you clarify?)

The TSC might be marked unstable for other reasons (C3 state, large
machines with clustered APIC, cpufreq).
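
So any logic that skips the offset adjustment would need to check more than
the CPUID bit. Something along these lines, as a sketch: boot_cpu_has() /
X86_FEATURE_CONSTANT_TSC and check_tsc_unstable() are existing kernel
helpers, but the wrapper function itself is hypothetical:

    /* Sketch: can we trust the host TSC enough to skip the adjustment? */
    static bool host_tsc_trustworthy(void)
    {
            /* constant-rate TSC advertised by the CPU ... */
            if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
                    return false;

            /* ... and not marked unstable for other reasons
             * (C3, clustered APIC on large machines, cpufreq, ...) */
            if (check_tsc_unstable())
                    return false;

            return true;
    }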

> This will improve tsc quality for those machines, but we can't depend on 
> it, since some machines don't have constant tsc.  Further, I don't think 
> really large machines can have constant tsc since clock distribution 
> becomes difficult or impossible.

As discussed earlier, if the host kernel does not have a stable TSC, it
needs to enforce a state in which the guest OS will not trust the TSC. The
easiest way to do that is to fake a C3 state. However, QEMU does not
emulate the I/O-port-based C3 wait. This appears to be the reason for the
high CPU usage on idle with Windows guests, fixed by disabling C3
reporting in rombios (commit cb98751267c2d79f5674301ccac6c6b5c2e0c6b5 of
kvm-userspace).
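
For reference, faking C3 works because the guest's ACPI idle code marks the
TSC unstable as soon as it sees C-states deeper than C2, roughly like this
(paraphrased from memory from the Linux ACPI processor idle code, not an
exact quote):

    /* during C-state setup in the guest (paraphrased, not verbatim): */
    if (cx->type > ACPI_STATE_C2)
            /* the TSC may halt in C3, so stop trusting it as a clocksource */
            mark_tsc_unstable("TSC halts in idle");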

> 
> 2. We should implement round robin and lowest priority like qemu does.  
> Xen does the same thing:
> 
> > /* HACK: Route IRQ0 only to VCPU0 to prevent time jumps. */
> > #define IRQ0_SPECIAL_ROUTING 1
> in arch/x86/hvm/vioapic.c, at least for irq 0.
> 
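
The effect of that hack is just to pin lowest-priority delivery of the timer
interrupt to vcpu 0 instead of distributing it. Roughly, as a sketch of the
idea (not the actual Xen or qemu ioapic code; the constant and helper names
here are made up):

    /* Pick the destination vcpu for an ioapic interrupt (sketch only). */
    static int pick_dest_vcpu(int irq, int delivery_mode, unsigned long dest_mask)
    {
    #ifdef IRQ0_SPECIAL_ROUTING
            /* Route the timer interrupt to vcpu 0 only, so that a single
             * vcpu updates cycle_last and the guest never sees time jumps. */
            if (irq == 0 && delivery_mode == DELIVERY_MODE_LOWEST_PRIO)
                    return 0;
    #endif
            /* otherwise do real lowest-priority / round-robin selection
             * among the vcpus present in dest_mask */
            return lowest_priority_vcpu(dest_mask);  /* hypothetical helper */
    }
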
> 3. The extra migrations on vcpu 0 are likely due to its role servicing 
> I/O on behalf of the entire virtual machine.  We should move this extra 
> work to an independent thread.  I have done some work in this area.  It 
> is becoming more important as kvm becomes more scalable.
