David S. Ahern wrote:
I've been instrumenting the guest kernel as well. It's the scanning of
the active lists that triggers a lot of calls to paging64_prefetch_page(),
which, as you know, correlates with the number of direct pages on the
list. Earlier in this thread I traced the kvm cycles to
paging64_prefetch_page(). See

http://www.mail-archive.com/[EMAIL PROTECTED]/msg16332.html

In the guest I started capturing scans (the kscand() loop) that took
longer than a jiffy. Here's an example for one trip through the active
lists, both anonymous and cache:

active_anon_scan: HighMem, age 4, count[age] 41863 -> 30194, direct 36234, dj 225


HZ=512, so dj 225 is about half a second.

41K pages in 0.5s -> 80K pages/sec. Considering we have _at_least_ two emulations per page, this is almost reasonable.

active_anon_scan: HighMem, age 3, count[age] 1772 -> 1450, direct 1249, dj 3

active_anon_scan: HighMem, age 0, count[age] 104078 -> 101685, direct 84829, dj 848

Here we scanned ~104K pages in 848 jiffies -- about 1.7 seconds at HZ=512, so roughly 60K pages/sec. Not too good.

I'll pull down the git branch and give it a spin.

I've rebased it again to include the prefetch_page optimization.

--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html