Ingo Molnar wrote:
> * Avi Kivity <[EMAIL PROTECTED]> wrote:
>
>> This is a request for testing of the mmu optimizations branch.
>>
>> Currently the shadow page tables are discarded every time the guest
>> performs a context switch.  The mmu branch allows shadow page tables
>> to be cached across context switches, greatly reducing the cpu
>> utilization on multi-process workloads.  It is now stable enough for
>> testing (though perhaps not for general use).
>
> i have tested it with a Fedora Core 6 guest (32-bit, nopae), under an
> FC6 host (32-bit Core 2 Duo, nopae, enough RAM), and it's working great!
>
> Here are some quick numbers.  Context-switch overhead with lmbench
> lat_ctx -s 0 [zero memory footprint]:
>
>    -------------------------------------------------
>     #tasks    native    kvm-r4204    kvm-r4232(mmu)
>    -------------------------------------------------
>         2:      2.02       180.91         9.19
>        20:      4.04       183.21        10.01
>        50:      4.30       185.95        11.27
>
> so here it's a /massive/, almost 20 times speedup!
Excellent.  10us is approximately the vmexit overhead on Intel (we
regularly see 100-120k exits/sec, i.e. ~8-10us per exit), so a context
switch costs exactly one exit.  Hard to beat without nested page tables.

> Context-switch overhead with -s 1000 (1MB memory footprint):
>
>    -------------------------------------------------
>     #tasks    native    kvm-r4204    kvm-r4232(mmu)
>    -------------------------------------------------
>         2:     150.5      1032.97       295.16
>        20:     216.6      1020.34       393.01
>        50:     218.1      1015.58      2335.99 [*]
>
> the speedup is nice here too.  Note the outlier at 50 tasks: it's
> consistently reproducible.  Could KVM be thrashing the pagetable cache
> due to some sort of internal limit?  It's not due to guest size

kvm now caches 256 page tables, so if every process uses 5 page tables,
plus some for the kernel, you'd get thrashing at 50 tasks.

I don't understand why we're slower than native with 2 processes.  Maybe
background work causes page tables to be evicted (see page replacement,
below).

I plan to add a tunable for the cache size, and autotuning later on.

The shadow page replacement algorithm could also use some work;
currently it's FIFO.  It could easily be made to mimic the Linux
active/inactive lists, approximating LRU by examining the accessed bits
on the parent page tables.

> The -mmu FC6 guest is visibly faster, so it's not just microbenchmarks
> that benefit from this change.  KVM got /massively/ faster in every
> aspect, kudos Avi!  (Note that r4204 already included the interactivity
> IRQ fixes, so the improvements are, i think, purely due to pagetable
> caching speedups.)
>
> on a related note, i also got:
>
>    vmwrite error: reg 6802 value cfd3c4a4 (err 17408)

This is already fixed on the trunk (which now has mmu merged).
> and:
>
>    kvm: unhandled wrmsr: 0xc1
>    inject_general_protection: rip 0xc011f7f3
>    kvm: unhandled wrmsr: 0x186
>    inject_general_protection: rip 0xc011f7f3
>    kvm: unhandled wrmsr: 0xc1
>    inject_general_protection: rip 0xc011f7f3
>    kvm: unhandled wrmsr: 0x186
>    inject_general_protection: rip 0xc011f7f3
>
> unfortunately 0xc011f7f3 is in native_write_msr(), which isn't very
> helpful.  (i have CONFIG_PARAVIRT enabled in the -rt guest and host
> kernels.)  But the MSR values suggest that this is the NMI watchdog
> thing again, trying to program MSR_ARCH_PERFMON_EVENTSEL0 and
> MSR_ARCH_PERFMON_PERFCTR0, but this time Linux recovered due to more
> robust MSR handling.  The guest disabled the NMI watchdog with:
>
>    Testing NMI watchdog ... CPU#0: NMI appears to be stuck (0->0)!
>
> the FC6 installer hang that i saw with earlier MMU-branch snapshots is
> fixed.

Good.  Handling the counter well would have been very difficult,
especially if attempting to support cross migration.

-- 
error compiling committee.c: too many arguments to function

_______________________________________________
kvm-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/kvm-devel
