Re: Live memory allocation?
Javier Guerra schrieb:
> On Mon, Mar 30, 2009 at 10:15 AM, Tomasz Chmielewski man...@wpkg.org wrote:
>> Still, if there is free memory on the host, why not use it for cache?
>
> because it's best used on the guest;

That is correct, but not realistic from an administrative point of view.

Let's say you have several KVM hosts, each with 16 GB RAM. Guests can come and go, so you give them only as much memory as they need (more or less).

In other words, you don't normally create the first guest with all 16 GB RAM assigned. When a second guest is created two hours later, you don't stop guest 1 just to restart both guests with 8 GB RAM each a while later. And so on, stopping and starting a whole bunch of guests until each of them has 512 MB RAM.

No, not all guests support ballooning. But for those which do, I guess the easiest way to implement it would be to write a user-space daemon.

> so, not caching already-cached data, it's free to cache other more
> important things, or to keep more of the VMs' memory in RAM.

Correct - if the host knew what the guest already cached, the host could use RAM for other things.

Anyway, there are still more pressing issues than that ;)

--
Tomasz Chmielewski
http://wpkg.org
--
To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Live memory allocation?
On Saturday 28 March 2009 11:17:42 am you wrote:
> KVM devs have a patch called KSM (short for Kernel Shared Memory, I
> think) that helps Windows guests a good bit. See the original
> announcement [1] for some numbers. I spoke to one of the devs recently
> and they said they are going to resubmit it soon.

I remember the discussion about KSM. First, the kernel developers were not very happy with the approach, and second, there were some patent implications with VMware. Have these issues been resolved?

Don't get me wrong. I'm not trying to stop KSM; I'm just wondering if I can get my hopes up again. I thought KSM was a great idea and I'd love to get my hands on it.

--
Alberto Treviño
BYU Testing Center
Brigham Young University
Re: Live memory allocation?
Avi Kivity schrieb:
> (...)
>> Perhaps KSM would help you? Alternately, a heuristic that scanned for
>> (and collapsed) fully zeroed pages when a page is faulted in for the
>> first time could catch these.
>
> ksm will indeed collapse these pages. Lighter-weight alternatives
> exist -- ballooning (needs a Windows driver), or, like you mention, a
> simple scanner that looks for zero pages and drops them. That could be
> implemented within qemu (with some simple kernel support for dropping
> zero pages atomically, say madvise(MADV_DROP_IFZERO)).

From the KSM description I conclude that it allows dynamically sharing identical memory pages between one or more processes.

What about cache/buffer sharing between the host kernel and running processes? If I'm not mistaken, right now memory is wasted by caching the same data in both the host and guest kernels.

For example, let's say we have a host with 2 GB RAM and it runs a 1 GB guest. If we read a ~900 MB file_1 (block device) on the guest, then:

- the guest's kernel will cache file_1
- the host's kernel will cache the same area of file_1 (block device)

Now, if we want to read a ~900 MB file_2 (or lots of files of that size), the cache for file_1 will be emptied on both guest and host as we read file_2.

The ideal situation would be if host and guest caches could be shared, to a degree (and have both file_1 and file_2 in memory - it doesn't matter whether in the guest or the host).

--
Tomasz Chmielewski
http://wpkg.org
Re: Live memory allocation?
Tomasz Chmielewski wrote:
> What about cache/buffer sharing between the host kernel and running
> processes? If I'm not mistaken, right now memory is wasted by caching
> the same data in both the host and guest kernels.
>
> (...)
>
> The ideal situation would be if host and guest caches could be shared,
> to a degree (and have both file_1 and file_2 in memory - it doesn't
> matter whether in the guest or the host).

Double caching is indeed a bad idea. That's why you have cache=off (though it isn't recommended with qcow2).

--
error compiling committee.c: too many arguments to function
Re: Live memory allocation?
Avi Kivity schrieb:
> Tomasz Chmielewski wrote:
>> What about cache/buffer sharing between the host kernel and running
>> processes? If I'm not mistaken, right now memory is wasted by caching
>> the same data in both the host and guest kernels.
>>
>> (...)
>
> Double caching is indeed a bad idea. That's why you have cache=off
> (though it isn't recommended with qcow2).

The cache= option is about the write cache, right? Here, I'm talking about the read cache. Or does cache=none disable the read cache as well?

--
Tomasz Chmielewski
http://wpkg.org
Re: Live memory allocation?
Tomasz Chmielewski wrote:
>> Double caching is indeed a bad idea. That's why you have cache=off
>> (though it isn't recommended with qcow2).
>
> The cache= option is about the write cache, right? Here, I'm talking
> about the read cache. Or does cache=none disable the read cache as well?

cache=writethrough disables the write cache
cache=none disables host caching completely

--
error compiling committee.c: too many arguments to function
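For reference, the cache modes discussed above are set per drive on the qemu command line; a sketch (the image path and binary name are placeholders for your setup):

```shell
# Write cache disabled; the host page cache is still used for reads:
qemu-system-x86_64 -drive file=/path/to/guest.img,cache=writethrough

# Host page cache bypassed entirely (O_DIRECT), which avoids the
# double caching of guest data described earlier in the thread:
qemu-system-x86_64 -drive file=/path/to/guest.img,cache=none
```

With cache=none, only the guest's own page cache holds the data, so guest RAM sizing matters more.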
Re: Live memory allocation?
Avi Kivity schrieb:
> Tomasz Chmielewski wrote:
>> The cache= option is about the write cache, right? Here, I'm talking
>> about the read cache. Or does cache=none disable the read cache as well?
>
> cache=writethrough disables the write cache
> cache=none disables host caching completely

Still, if there is free memory on the host, why not use it for cache?

--
Tomasz Chmielewski
http://wpkg.org
Re: Live memory allocation?
On Mon, Mar 30, 2009 at 10:15 AM, Tomasz Chmielewski man...@wpkg.org wrote:
> Still, if there is free memory on the host, why not use it for cache?

because it's best used on the guest, which will cache anyway. so, not caching already-cached data, it's free to cache other more important things, or to keep more of the VMs' memory in RAM.

--
Javier
Re: Live memory allocation?
On Monday 30 March 2009 08:23:44 Alberto Treviño wrote:
> On Saturday 28 March 2009 11:17:42 am you wrote:
>> KVM devs have a patch called KSM (short for Kernel Shared Memory, I
>> think) that helps Windows guests a good bit. See the original
>> announcement [1] for some numbers. I spoke to one of the devs recently
>> and they said they are going to resubmit it soon.
>
> I remember the discussion about KSM. First, the kernel developers were
> not very happy with the approach, and second, there were some patent
> implications with VMware.

Some (one?) of the kernel devs didn't like it, then admitted that he hadn't even read the patch. And as Alan Cox pointed out, if there were a patent problem, it should be handled by lawyers. There was also prior art (even in Linux) from quite some time ago. So I think we are safe for now.

--Brian Jackson

> Have these issues been resolved? Don't get me wrong. I'm not trying to
> stop KSM; I'm just wondering if I can get my hopes up again. I thought
> KSM was a great idea and I'd love to get my hands on it.
Re: Live memory allocation?
Nolan wrote:
> Windows does zero all memory at boot, and also runs an idle-priority
> thread in the background to zero memory as it is freed.
>
> (...)
>
> Perhaps KSM would help you? Alternately, a heuristic that scanned for
> (and collapsed) fully zeroed pages when a page is faulted in for the
> first time could catch these.

ksm will indeed collapse these pages. Lighter-weight alternatives exist -- ballooning (needs a Windows driver), or, like you mention, a simple scanner that looks for zero pages and drops them. That could be implemented within qemu (with some simple kernel support for dropping zero pages atomically, say madvise(MADV_DROP_IFZERO)).

--
error compiling committee.c: too many arguments to function
Re: Live memory allocation?
On Thursday 26 March 2009 08:11:02 am Tomasz Chmielewski wrote:
> Like, two guests, each with 2 GB of memory allocated, only use 1 GB of
> the host's memory (as long as they don't have many
> programs/buffers/cache)?
>
> So yes, it's also supported by KVM.

The problem I've seen with this feature is that Windows guests end up taking all of their available memory once they are up and running. For example, booting Windows XP in KVM-82 shows a steady increase in memory. Then, about the time the login box is about to appear, memory usage jumps to the maximum allowed to the VM (512 MB in this case).

I remember reading somewhere that Windows tries to initialize all memory during boot, causing KVM to allocate all of it. VMware, however (and I don't know about VirtualBox), knows about this and works around it, making sure memory isn't all allocated during the Windows boot process.

Would there be a way to work around the Windows memory allocation issue in KVM as well?
Re: Live memory allocation?
On Saturday 28 March 2009 08:38:33 Alberto Treviño wrote:
> On Thursday 26 March 2009 08:11:02 am Tomasz Chmielewski wrote:
>> Like, two guests, each with 2 GB of memory allocated, only use 1 GB of
>> the host's memory (as long as they don't have many
>> programs/buffers/cache)?
>>
>> So yes, it's also supported by KVM.
>
> The problem I've seen with this feature is that Windows guests end up
> taking all of their available memory once they are up and running.
>
> (...)
>
> Would there be a way to work around the Windows memory allocation issue
> in KVM as well?

KVM devs have a patch called KSM (short for Kernel Shared Memory, I think) that helps Windows guests a good bit. See the original announcement [1] for some numbers. I spoke to one of the devs recently and they said they are going to resubmit it soon.

[1] http://marc.info/?l=kvm&m=122688851003046&w=2
Re: Live memory allocation?
Alberto Treviño <alberto at byu.edu> writes:
> The problem I've seen with this feature is that Windows guests end up
> taking all of their available memory once they are up and running. For
> example, booting Windows XP in KVM-82 shows a steady increase in
> memory. Then, about the time the login box is about to appear, memory
> usage jumps to the maximum allowed to the VM (512 MB in this case).
>
> I remember reading somewhere that Windows tries to initialize all
> memory during boot, causing KVM to allocate all of it. VMware, however
> (and I don't know about VirtualBox), knows about this and works around
> it, making sure memory isn't all allocated during the Windows boot
> process.

Windows does zero all memory at boot, and also runs an idle-priority thread in the background to zero memory as it is freed. This way it is far less likely to need to zero a page to satisfy a memory allocation request. Whether this is still a win now that people care about power consumption is an open question.

I suspect the difference in behavior between KVM and VMware is related to VMware's page sharing. All those zeroed pages can be collapsed into one COW zero page. I wouldn't be surprised to learn that VMware has heuristics in the page-sharing code specifically for Windows guests.

Perhaps KSM would help you? Alternately, a heuristic that scanned for (and collapsed) fully zeroed pages when a page is faulted in for the first time could catch these.
Re: Live memory allocation?
Evert schrieb:
> Hi all,
>
> According to Wikipedia (
> http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines )
> both VirtualBox and VMware Server support something called "live
> memory allocation". Does KVM support this as well?

What does this term mean exactly? Is it the same as the ballooning used by KVM?

--
Tomasz Chmielewski
http://wpkg.org
Re: Live memory allocation?
Tomasz Chmielewski wrote:
> Evert schrieb:
>> Hi all,
>>
>> According to Wikipedia (
>> http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines )
>> both VirtualBox and VMware Server support something called "live
>> memory allocation". Does KVM support this as well?
>
> What does this term mean exactly? Is it the same as the ballooning
> used by KVM?

I guess it's referring to memory allocation on first access to the memory areas, meaning the allocation is only made when the memory is really going to be used. (But this is just a guess.)
Re: Live memory allocation?
Izik Eidus schrieb:
> Tomasz Chmielewski wrote:
>> Evert schrieb:
>>> According to Wikipedia (
>>> http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines )
>>> both VirtualBox and VMware Server support something called "live
>>> memory allocation". Does KVM support this as well?
>>
>> What does this term mean exactly? Is it the same as the ballooning
>> used by KVM?
>
> I guess it's referring to memory allocation on first access to the
> memory areas, meaning the allocation is only made when the memory is
> really going to be used.

Like, two guests, each with 2 GB of memory allocated, only use 1 GB of the host's memory (as long as they don't have many programs/buffers/cache)?

So yes, it's also supported by KVM.

--
Tomasz Chmielewski
http://wpkg.org
Re: Live memory allocation?
Tomasz Chmielewski wrote:
> Izik Eidus schrieb:
>> I guess it's referring to memory allocation on first access to the
>> memory areas, meaning the allocation is only made when the memory is
>> really going to be used.
>
> Like, two guests, each with 2 GB of memory allocated, only use 1 GB of
> the host's memory (as long as they don't have many
> programs/buffers/cache)?
>
> So yes, it's also supported by KVM.

I have amended http://en.wikipedia.org/wiki/Comparison_of_platform_virtual_machines based on this thread :-)

Greetings,
Evert