Gregory Haskins wrote:
> On Tue, 2007-10-23 at 16:19 +0200, Avi Kivity wrote:
>> Jan Kiszka wrote:
>>> Avi,
>>>
>>> [somehow your mails do not get through to my private account, so I'm
>>> switching]
>>>
>>> Avi Kivity wrote:
>>>   
>>>> Jan Kiszka wrote:
>>>>     
>>>>> Clarification: I can't precisely tell what code is executed in VM mode,
>>>>> as I don't have qemu or that guest instrumented. I just see the kernel
>>>>> entering VM mode and leaving it again more than 300 us later. So I
>>>>> wonder why this is allowed while some external IRQ is pending.
>>>>>
>>>>>   
>>>>>       
>>>> How do you know an external interrupt is pending?
>>>>     
>>> It's the host timer IRQ, programmed to fire at a fixed interval (100 us
>>> here). The test case is a latency measurement tool like tglx's cyclictest
>>> or similar programs we use in Xenomai.
>>>
>>>   
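[ For context: a minimal sketch of what such a 100 us measurement boils
down to - a periodic SCHED_FIFO loop in the spirit of cyclictest, not the
actual tool (which adds histograms, multiple threads, proper error
handling etc.); file name and constants below are illustrative only: ]

/* lat.c - illustrative sketch, not the tool used for the numbers below */
#include <stdio.h>
#include <time.h>
#include <sched.h>
#include <sys/mman.h>

#define PERIOD_NS (100 * 1000)                  /* 100 us period */

static long long ts_ns(const struct timespec *t)
{
        return (long long)t->tv_sec * 1000000000LL + t->tv_nsec;
}

int main(void)
{
        struct sched_param sp = { .sched_priority = 99 };
        struct timespec next, now;
        long long lat, max_lat = 0;
        int i;

        mlockall(MCL_CURRENT | MCL_FUTURE);     /* avoid page faults */
        sched_setscheduler(0, SCHED_FIFO, &sp); /* real-time priority */

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (i = 0; i < 100000; i++) {
                /* advance the absolute deadline by one period */
                next.tv_nsec += PERIOD_NS;
                if (next.tv_nsec >= 1000000000) {
                        next.tv_nsec -= 1000000000;
                        next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                clock_gettime(CLOCK_MONOTONIC, &now);
                lat = ts_ns(&now) - ts_ns(&next);       /* wakeup delay */
                if (lat > max_lat)
                        max_lat = lat;
        }
        printf("max wakeup latency: %lld ns\n", max_lat);
        return 0;
}

[ Build with something like "gcc -O2 lat.c -o lat -lrt"; SCHED_FIFO
priority 99 needs root. ]
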
>>>> kvm programs the hardware to exit when an external interrupt arrives.
>>>>
>>>>     
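[ The control in question is the "external-interrupt exiting" bit in the
pin-based VM-execution controls. Roughly, quoting vmx.c from memory -
treat the exact names and surrounding code as a sketch rather than the
literal upstream source: ]

/* vmx.h: pin-based VM-execution control bits (see Intel SDM vol. 3) */
#define PIN_BASED_EXT_INTR_MASK 0x00000001      /* VM exit on external interrupt */
#define PIN_BASED_NMI_EXITING   0x00000008      /* VM exit on NMI */

/* vmx.c, setup_vmcs_config(): bits kvm always insists on ... */
        min = PIN_BASED_EXT_INTR_MASK | PIN_BASED_NMI_EXITING;

/* ... and the vcpu setup path writes the resulting value into the VMCS,
 * so a host interrupt arriving while the guest runs forces a VM exit
 * rather than being delivered inside the guest: */
        vmcs_write32(PIN_BASED_VM_EXEC_CONTROL, vmcs_config.pin_based_exec_ctrl);
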
>>> Here is a latency trace I just managed to capture on 2.6.23.1-rt1 with
>>> the latest kvm from git hacked in (kvm generally seems to work fine this way):
>>>
>>> ...
>>> qemu-sys-7543  0...1 13897us : vmcs_write16+0xb/0x20 (vmx_save_host_state+0x1a7/0x1c0)
>>> qemu-sys-7543  0...1 13897us : vmcs_writel+0xb/0x30 (vmcs_write16+0x1e/0x20)
>>> qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xa0/0x1c0)
>>> qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xb0/0x1c0)
>>> qemu-sys-7543  0...1 13898us : segment_base+0xc/0x70 (vmx_save_host_state+0xbf/0x1c0)
>>> qemu-sys-7543  0...1 13898us : vmcs_writel+0xb/0x30 (vmx_save_host_state+0xcf/0x1c0)
>>> qemu-sys-7543  0...1 13898us : load_msrs+0xb/0x40 (vmx_save_host_state+0xe7/0x1c0)
>>> qemu-sys-7543  0...1 13898us : kvm_load_guest_fpu+0x8/0x40 (kvm_vcpu_ioctl_run+0xbf/0x570)
>>> qemu-sys-7543  0D..1 13899us : vmx_vcpu_run+0xc/0x110 (kvm_vcpu_ioctl_run+0x120/0x570)
>>> qemu-sys-7543  0D..1 13899us!: vmcs_writel+0xb/0x30 (vmx_vcpu_run+0x22/0x110)
>>> qemu-sys-7543  0D..1 14344us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xc7/0x110)
>>> qemu-sys-7543  0D..1 14345us : vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>>> qemu-sys-7543  0D..1 14345us : vmcs_read32+0xb/0x20 (vmx_vcpu_run+0xf4/0x110)
>>> qemu-sys-7543  0D..1 14345us+: vmcs_readl+0x8/0x10 (vmcs_read32+0x16/0x20)
>>> qemu-sys-7543  0D..1 14349us : irq_enter+0xb/0x30 (do_IRQ+0x45/0xc0)
>>> qemu-sys-7543  0D.h1 14350us : do_IRQ+0x73/0xc0 (f8caae24     0     0)
>>> qemu-sys-7543  0D.h1 14351us : handle_level_irq+0xe/0x120 (do_IRQ+0x7d/0xc0)
>>> qemu-sys-7543  0D.h1 14351us : __spin_lock+0xc/0x30 (handle_level_irq+0x24/0x120)
>>> qemu-sys-7543  0D.h2 14352us : mask_and_ack_8259A+0x14/0x120 (handle_level_irq+0x37/0x120)
>>> qemu-sys-7543  0D.h2 14352us+: __spin_lock_irqsave+0x11/0x60 (mask_and_ack_8259A+0x2a/0x120)
>>> qemu-sys-7543  0D.h3 14357us : __spin_unlock_irqrestore+0xc/0x60 (mask_and_ack_8259A+0x7a/0x120)
>>> qemu-sys-7543  0D.h2 14358us : redirect_hardirq+0x8/0x70 (handle_level_irq+0x72/0x120)
>>> qemu-sys-7543  0D.h2 14358us : __spin_unlock+0xb/0x40 (handle_level_irq+0x8e/0x120)
>>> qemu-sys-7543  0D.h1 14358us : handle_IRQ_event+0xe/0x110 (handle_level_irq+0x9a/0x120)
>>> qemu-sys-7543  0D.h1 14359us : timer_interrupt+0xb/0x60 (handle_IRQ_event+0x67/0x110)
>>> qemu-sys-7543  0D.h1 14359us : hrtimer_interrupt+0xe/0x1f0 (timer_interrupt+0x20/0x60)
>>> ...
>>>
>>> One can see a 445 us latency between VM entry and VM exit in vmx_vcpu_run
>>> (13899 us to 14344 us in the trace) - and this while cyclictest runs at a
>>> period of 100 us!
>>>
>>> I got the same results over Adeos/I-pipe & Xenomai with the function
>>> tracer there, which also points at the period during which the CPU is in VM mode.
>>>
>>> Does anyone have any ideas? Greg, I put you on CC since you said you once
>>> saw "decent latencies" with your patches. Are there still magic bits
>>> missing from official kvm?
>>>   
>> No bits missing as far as I know.  It should just work.
> 
> That could very well be the case these days.  I know back when I was
> looking at it, KVM would not run on VMX + -rt without modification - it
> would crash or hang (this was around the time I was working on that
> smp_function_call stuff).  And without careful modification it would run
> very poorly, with high (300 us+) latencies revealed by cyclictest.
> 
> However, I was able to craft the vmx_vcpu_run path so that a VM could
> run side-by-side with cyclictest with sub 40us latencies.  In fact,
> normally it was sub 30us, but on an occasional run I would get a spike
> to ~37us.
> 
> Unfortunately I am deep into other non-KVM related -rt issues at the
> moment, so I can't work on it any further for a bit.

Do you have some patch fragments left over? At least /me would be
interested in studying them and maybe forward-porting them. Or could you
briefly explain the issue above and/or the general problem behind this delay?

Thanks,
Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
