On 26/08/2013 21:15, Brian Rak wrote:
> 
> Samples: 62M of event 'cycles', Event count (approx.): 642019289177
>  64.69%  [kernel]                    [k] _raw_spin_lock
>   2.59%  qemu-system-x86_64          [.] 0x00000000001e688d
>   1.90%  [kernel]                    [k] native_write_msr_safe
>   0.84%  [kvm]                       [k] vcpu_enter_guest
>   0.80%  [kernel]                    [k] __schedule
>   0.77%  [kvm_intel]                 [k] vmx_vcpu_run
>   0.68%  [kernel]                    [k] effective_load
>   0.65%  [kernel]                    [k] update_cfs_shares
>   0.62%  [kernel]                    [k] _raw_spin_lock_irq
>   0.61%  [kernel]                    [k] native_read_msr_safe
>   0.56%  [kernel]                    [k] enqueue_entity

Can you capture the call graphs, too (perf record -g)?
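A minimal capture might look like the following (a sketch, assuming perf is installed on the host and run as root; the 30-second window and system-wide `-a` scope are arbitrary choices, not something prescribed in this thread):

```shell
# Record call graphs system-wide for 30 seconds
perf record -g -a -- sleep 30

# Then expand the call chains, e.g. to see who is hammering _raw_spin_lock
perf report -g
```

The call graphs would show which code path is taking `_raw_spin_lock` so heavily, which the flat profile above cannot.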

> I've captured 20,000 lines of kvm trace output.  This can be found at
> https://gist.github.com/devicenull/fa8f49d4366060029ee4/raw/fb89720d34b43920be22e3e9a1d88962bf305da8/trace

The guest is doing quite a lot of exits per second, mostly to (a) access
the ACPI timer and (b) service NMIs.  In fact, every NMI also reads the
timer, causing an exit to QEMU.

So you may also have to debug this inside the guest, to see whether
these exits are expected or not.
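To get a quick breakdown of which exit reasons dominate, the trace output can be tallied with a short script. This is a sketch, not part of the original exchange: the line format assumed below (ftrace-style `kvm_exit: reason <NAME> ...`) matches common kernel trace output, but the exact fields vary by kernel version, so the regex may need adjusting for the linked gist.

```python
import re
from collections import Counter

def count_exit_reasons(trace_lines):
    """Tally kvm_exit reasons from ftrace-style kvm trace output.

    Assumes lines of the form (an assumption about the trace format):
      qemu-system-x86-1234 [001] 567.000001: kvm_exit: reason IO_INSTRUCTION rip 0x.. info ..
    """
    pat = re.compile(r"kvm_exit:\s+reason\s+(\S+)")
    counts = Counter()
    for line in trace_lines:
        m = pat.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Synthetic sample lines, for illustration only
sample = [
    "qemu-system-x86-1234 [001] 567.000001: kvm_exit: reason IO_INSTRUCTION rip 0x42 info 0 0",
    "qemu-system-x86-1234 [001] 567.000002: kvm_exit: reason EXCEPTION_NMI rip 0x42 info 0 0",
    "qemu-system-x86-1234 [001] 567.000003: kvm_exit: reason IO_INSTRUCTION rip 0x42 info 0 0",
]
print(count_exit_reasons(sample).most_common())
# → [('IO_INSTRUCTION', 2), ('EXCEPTION_NMI', 1)]
```

If the ACPI-timer and NMI exits really dominate the counts, that corroborates the reading of the trace above.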

Paolo
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
