The idea of shifting/sharing mmu cache memory between guests is great.
You do need to take care of proper inter-VM locking (use the global
kvm_lock for that, not the per-VM kvm->lock).
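Something like this is what I have in mind for the locking
(mmu_shared_pool, mmu_block and friends are made-up names for
illustration, they are not in your patch):

#include <linux/list.h>
#include <linux/spinlock.h>

/* Illustration only: any pool that is shared between VMs has to be
 * protected by the global kvm_lock; kvm->lock only covers one VM.
 * The spinlock below stands in for the existing global kvm_lock. */
static DEFINE_SPINLOCK(kvm_lock);

struct mmu_block {                      /* one 1MB entry, invented name */
        struct list_head link;
};

static LIST_HEAD(mmu_shared_pool);      /* 1MB blocks loanable between VMs */

static struct mmu_block *mmu_shared_take_block(void)
{
        struct mmu_block *block = NULL;

        spin_lock(&kvm_lock);           /* global lock, not kvm->lock */
        if (!list_empty(&mmu_shared_pool)) {
                block = list_entry(mmu_shared_pool.next,
                                   struct mmu_block, link);
                list_del(&block->link);
        }
        spin_unlock(&kvm_lock);
        return block;
}

Giving a block back to the pool would of course take the same lock.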

Combining the idea with an LRU replacement algorithm and a rebalance
timer for the mmu cache could be a winning combination.
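For the rebalance timer I picture something along these lines (the
names and the policy below are invented; a real policy would also keep
the total number of blocks constant instead of letting quotas grow
without bound):

#include <linux/workqueue.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/jiffies.h>

/* One record per VM; "pressure" would be bumped from mmu_free_some_pages()
 * so the timer can see which VMs keep running out of mmu pages. */
struct vm_mmu_cache {
        struct list_head vm_link;       /* on the global vm_caches list */
        unsigned long quota_pages;      /* mmu pages this VM may keep */
        unsigned long pressure;         /* evictions since the last tick */
};

static DEFINE_SPINLOCK(cache_lock);     /* stands in for the global kvm_lock */
static LIST_HEAD(vm_caches);
static struct delayed_work rebalance_work;

static void mmu_cache_rebalance(struct work_struct *work)
{
        struct vm_mmu_cache *c;

        spin_lock(&cache_lock);
        list_for_each_entry(c, &vm_caches, vm_link) {
                if (c->pressure == 0 && c->quota_pages > 1024)
                        c->quota_pages -= 256;  /* idle VM donates a 1MB block */
                else if (c->pressure)
                        c->quota_pages += 256;  /* busy VM gets one more block */
                c->pressure = 0;
        }
        spin_unlock(&cache_lock);

        schedule_delayed_work(&rebalance_work, 5 * HZ);
}

static void mmu_cache_rebalance_init(void)
{
        INIT_DELAYED_WORK(&rebalance_work, mmu_cache_rebalance);
        schedule_delayed_work(&rebalance_work, 5 * HZ);
}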
-- Dor.

Btw: Later on, the static parameters should be set as a function of the
host/guest memory size and usage.
Another nice option would be to accept a requested cache size as a
parameter at VM creation time.
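
E.g. something like this (the 2% default and the names are invented,
loosely based on the 1%/2% figures in the benchmark below; today
KVM_CREATE_VM takes no such argument):

#define MMU_PAGES_MIN           1024    /* today's fixed 4MB buffer */
#define MMU_CACHE_PERCENT       2       /* invented default: 2% of guest memory */

/* Pick the mmu cache size from the guest memory size, unless userspace
 * asked for an explicit size when the VM was created. */
static unsigned long mmu_cache_pages(unsigned long guest_pages,
                                     unsigned long requested_pages)
{
        unsigned long n;

        if (requested_pages)
                return requested_pages;

        n = guest_pages * MMU_CACHE_PERCENT / 100;
        return n > MMU_PAGES_MIN ? n : MMU_PAGES_MIN;
}

For a 512MB guest (131072 pages) that default comes out to about 2621
mmu pages, roughly 10MB.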

>I forgot the most important thing: this is a request for comments,
>nothing more than that (as you can see it doesn't have a cleanup
>function yet).
>so all I am trying to get here are your ideas on what to do with it
>now: add one of the options below? both of them? neither?
>
>
>anyway, thanks! :)
>
>> On Sat, 2007-08-18 at 22:51 +0300, Izik Eidus wrote:
>> > this patch makes kvm dynamically allocate memory for its mmu pages
>> > buffer.
>> >
>> > until now kvm allocated just 1024 pages (4MB), no matter what the
>> > guest ram size was.
>> >
>> > because the mmu pages buffer was very small, a lot of pages that
>> > still held "correct" information about the guest ptes had to be
>> > released.
>> >
>> > what I did here is the first step toward one or both of the options
>> > below:
>> >
>> > 1) adding support to kvm for increasing and decreasing its mmu pages
>> > buffer at runtime, based on how many times the mmu_free_some_pages
>> > function is called.
>> >
>> > 2) adding support to kvm for sharing the mmu buffers among all
>> > running VMs; in this case an idle VM would give some of its mmu
>> > buffer to a "hard-working" VM.
>> >
>> > I wrote this patch with these 2 options in mind, and therefore I
>> > used lists rather than an array, and made each entry of the list 1MB
>> > (holding a list of 256 pages).
>> > it is now very easy and inexpensive to delete/add/move or do
>> > anything we want with such a 1MB block.
>> >
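
(For reference, I picture each 1MB list entry roughly like this -- the
struct and field names are my guess, not taken from the patch:

#include <linux/list.h>

struct mmu_page_block {
        struct list_head link;          /* chains the 1MB entries together */
        void *page_virt[256];           /* 256 x 4KB pages = 1MB per entry */
};

so moving a whole megabyte between lists is just a
list_del()/list_add().)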
>> > an ugly "benchmark" I ran showed that when the guest used 1% of the
>> > 512MB VM for its mmu buffer and compiled the linux kernel with -j 8,
>> > it had 21,100,000 fixed page_fault exits and the build took 8:10
>> > (min:sec).
>> >
>> > when the same guest with the same amount of ram used 2% of the 512MB
>> > VM for its mmu buffer, it compiled the linux kernel with -j 8 in
>> > 7:48 and had just 17,500,000 fixed page_fault exits.
>> >
>> > (the more ram the guest has, the bigger the speedup over the code
>> > without this patch should be)
>> >
>> > (this benchmark was really ugly, I didn't use a ram drive or
>> > anything like that for the compile..)
>> >
>> > oh, I must add that I added a function to free the lists and all
>> > the pages allocated for the mmu pages, but I didn't write a single
>> > line in it yet because I want to ask avi something first, so don't
>> > blame me for stealing your ram :)
>> >
>> > anyway enjoy.
>> >