On Fri, Apr 25, 2008 at 11:33:18AM -0600, David S. Ahern wrote:
> Most of the cycles (~80% of that 54k+) are spent in paging64_prefetch_page():
> 
>         for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
>                 gpa_t pte_gpa = gfn_to_gpa(sp->gfn);
>                 pte_gpa += (i+offset) * sizeof(pt_element_t);
> 
>                 r = kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &pt,
>                                           sizeof(pt_element_t));
>                 if (r || is_present_pte(pt))
>                         sp->spt[i] = shadow_trap_nonpresent_pte;
>                 else
>                         sp->spt[i] = shadow_notrap_nonpresent_pte;
>         }
> 
> This loop runs 512 times and takes a total of ~45k cycles, or ~88 cycles
> per iteration.
> 
> This function gets called >20,000 times/sec during some of the kscand loops.

Hi David,

Do you see the mmu_recycled counter increase?
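
Most of that per-iteration cost appears to come from issuing a separate
kvm_read_guest_atomic() call for each of the 512 entries. As a rough sketch
only (not the in-tree code: the 64-entry chunk size, the on-stack buffer, and
dropping the 32-bit quadrant offset are all assumptions for illustration), the
reads could be batched along these lines:

/*
 * Sketch: batch the guest PTE reads so the loop issues 8
 * kvm_read_guest_atomic() calls instead of 512.  Assumes 64-bit
 * paging (quadrant offset == 0); the chunk size and the on-stack
 * buffer are illustrative choices, not the actual implementation.
 */
#define PREFETCH_CHUNK 64

static void FNAME(prefetch_page)(struct kvm_vcpu *vcpu,
				 struct kvm_mmu_page *sp)
{
	pt_element_t pt[PREFETCH_CHUNK];
	gpa_t base_gpa = gfn_to_gpa(sp->gfn);
	int i, j, r;

	for (i = 0; i < PT64_ENT_PER_PAGE; i += PREFETCH_CHUNK) {
		/* One guest read covers PREFETCH_CHUNK consecutive PTEs. */
		r = kvm_read_guest_atomic(vcpu->kvm,
					  base_gpa + i * sizeof(pt_element_t),
					  pt, sizeof(pt));

		for (j = 0; j < PREFETCH_CHUNK; ++j) {
			if (r || is_present_pte(pt[j]))
				sp->spt[i + j] = shadow_trap_nonpresent_pte;
			else
				sp->spt[i + j] = shadow_notrap_nonpresent_pte;
		}
	}
}

Whether that actually helps depends on where the ~88 cycles go (the copy
itself vs. the per-call gfn-to-hva lookup), so the mmu_recycled numbers are
still the more interesting data point.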
