On 10/27/15 10:11 AM, Markus Armbruster wrote:
[...]
Eduardo, I did try this approach. It takes a two-line change in exec.c:
commenting out the unlink, and making sure MAP_SHARED is used when
-mem-path and -mem-prealloc are given. It works beautifully, and
libvmi accesses are fast. However, the VM is slowed down to a crawl,
obviously, because each RAM access by the VM triggers a page fault on
the mmap'ed file. I don't think having a crawling VM is desirable, so
this approach goes out the door.
Uh, I don't understand why "each RAM access by the VM triggers a page
fault".  Can you show us the patch you used?
Sorry, my explanation was too brief. Every time the guest flips a byte in
physical RAM, I think that triggers a page write to the mmap'ed file. My
understanding is that, with MAP_SHARED, each write to RAM triggers a
file write, hence the slowness. These are the simple changes I made to
test it, as a proof of concept.
Ah, that actually makes sense.  Thanks!
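
[Editor's note: the original patch is not reproduced in this excerpt. The
sketch below is an illustrative stand-alone C program, not QEMU code; it only
mimics the legacy -mem-path allocation as described in the thread, with the
unlink() commented out and MAP_SHARED forced. File names and sizes are made
up for illustration.]

/*
 * Illustrative sketch only, not the poster's exec.c patch.  Create a
 * backing file for "guest RAM", keep it visible in the filesystem, and
 * map it MAP_SHARED so another process (e.g. an introspection tool)
 * can read the memory through the file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void *alloc_backed_ram(const char *path_template, size_t size)
{
    char *filename = strdup(path_template);
    int fd = mkstemp(filename);
    if (fd < 0) {
        perror("mkstemp");
        exit(1);
    }

    /* Stock code would unlink(filename) here, so the backing file
     * disappears from the directory; the proof of concept skips that
     * so external tools can open it. */
    /* unlink(filename); */

    if (ftruncate(fd, size) < 0) {
        perror("ftruncate");
        exit(1);
    }

    /* MAP_SHARED instead of MAP_PRIVATE: guest writes become visible
     * through the file, which is what makes external introspection
     * possible -- and also what makes every dirtied page eligible for
     * write-back when the file lives on a disk-backed filesystem. */
    void *area = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (area == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }

    printf("guest RAM backed by %s\n", filename);
    free(filename);
    return area;
}

int main(void)
{
    size_t ram_size = 64 << 20;                 /* 64 MiB stand-in */
    void *ram = alloc_backed_ram("/tmp/qemu_back_mem.XXXXXX", ram_size);
    memset(ram, 0, ram_size);                   /* stand-in for guest writes */
    return 0;
}
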

[...]
However, when the guest RAM mmap'ed file resides on a RAM disk on the host,
the guest OS responsiveness is more than acceptable. Perhaps this is a viable
approach. It might require only a minimal set of changes to the QEMU source
and maybe one extra command-line argument.
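
[Editor's note: a rough sketch of the kind of host setup meant above,
assuming a tmpfs mount as the RAM-backed filesystem; the mount point, sizes
and disk image name are made up for illustration.]

  # Put the -mem-path backing file on a RAM-backed filesystem so that
  # MAP_SHARED write-back never touches a physical disk.
  mount -t tmpfs -o size=4200m tmpfs /mnt/guest-ram
  qemu-system-x86_64 -m 4096 \
      -mem-path /mnt/guest-ram -mem-prealloc \
      -hda guest.img
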