Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On 10/07/2010 07:23 PM, Gleb Natapov wrote:
> On Thu, Oct 07, 2010 at 06:20:53PM +0200, Avi Kivity wrote:
> > On 10/07/2010 06:03 PM, Gleb Natapov wrote:
> > > > Isn't SET_USER_MEMORY_REGION so slow that calling it 2^32 times
> > > > isn't really feasible? Assuming it takes 1ms, it would take 49 days.
> > > We may fail the ioctl when the max value is reached. The question is
> > > how many slot changes we can expect from a real guest during its
> > > lifetime.
> > A normal guest has a 30 Hz timer for reading the vga framebuffer, and
> > multiple slots. Let's assume a 100 Hz frequency; that gives 490 days
> > until things stop working.
> And reading the vga framebuffer needs slot changes because of dirty map
> tracking?

Yes.

-- 
error compiling committee.c: too many arguments to function
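The arithmetic behind both figures, spelled out as a standalone C snippet
(illustrative only -- the 1ms ioctl cost and the 100 Hz update rate are the
assumptions made in the thread, not measurements):

    /* How long until a u32 generation counter wraps? */
    #include <stdio.h>

    int main(void)
    {
    	const double wraps = 4294967296.0;  /* 2^32 increments */

    	/* One SET_USER_MEMORY_REGION per millisecond: */
    	printf("%.1f days\n", wraps / 1000.0 / 86400.0);  /* ~49.7 days */

    	/* ~100 slot updates per second (dirty-log cycling): */
    	printf("%.1f days\n", wraps / 100.0 / 86400.0);   /* ~497 days */

    	return 0;
    }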
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On 10/06/2010 10:08 PM, Gleb Natapov wrote:
> > Malicious userspace can cause an entry to be cached, call the
> > SET_USER_MEMORY_REGION ioctl 2^32 times so the generation number wraps
> > and matches again, and mark_page_dirty_in_slot will be called with a
> > pointer to freed memory.
> Hmm. To zap all cached entries on overflow we need to track them. If we
> track them, we can zap them on each slot update and drop the generation
> entirely.

To track them you need locking.

Isn't SET_USER_MEMORY_REGION so slow that calling it 2^32 times isn't
really feasible?

In any case, we can use a u64 generation count.

-- 
I have a truly marvellous patch that fixes the bug which this signature is
too narrow to contain.
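A minimal sketch of what the u64 suggestion amounts to, based on the two
structures added by the patch (an assumed shape, not a posted follow-up):

    struct kvm_memslots {
    	int nmemslots;
    	u64 generation;	/* was u32; at one update/ms this wraps in ~585 million years */
    	struct kvm_memory_slot memslots[KVM_MEMORY_SLOTS +
    					KVM_PRIVATE_MEM_SLOTS];
    };

    struct gfn_to_hva_cache {
    	u64 generation;	/* compared against kvm_memslots.generation */
    	gpa_t gpa;
    	unsigned long hva;
    	struct kvm_memory_slot *memslot;
    };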
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On 10/04/2010 05:56 PM, Gleb Natapov wrote:
> Keep track of memslot changes by keeping a generation number in the
> memslots structure. Provide a kvm_write_guest_cached() function that
> skips the gfn_to_hva() translation if memslots have not changed since
> the previous invocation.

btw, this patch (and patch 5, and perhaps more) can be applied
independently. If you like, you can submit them before the patch set is
complete to reduce your queue length.

-- 
I have a truly marvellous patch that fixes the bug which this signature is
too narrow to contain.
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On Thu, Oct 07, 2010 at 12:00:13PM +0200, Avi Kivity wrote:
> On 10/06/2010 10:08 PM, Gleb Natapov wrote:
> > > Malicious userspace can cause an entry to be cached, call the
> > > SET_USER_MEMORY_REGION ioctl 2^32 times so the generation number
> > > wraps and matches again, and mark_page_dirty_in_slot will be called
> > > with a pointer to freed memory.
> > Hmm. To zap all cached entries on overflow we need to track them. If
> > we track them, we can zap them on each slot update and drop the
> > generation entirely.
> To track them you need locking.
>
> Isn't SET_USER_MEMORY_REGION so slow that calling it 2^32 times isn't
> really feasible?

Assuming it takes 1ms, it would take 49 days.

> In any case, we can use a u64 generation count.

Agree.
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On Thu, Oct 07, 2010 at 12:42:48PM -0300, Marcelo Tosatti wrote:
> On Thu, Oct 07, 2010 at 12:00:13PM +0200, Avi Kivity wrote:
> > On 10/06/2010 10:08 PM, Gleb Natapov wrote:
> > > > Malicious userspace can cause an entry to be cached, call the
> > > > SET_USER_MEMORY_REGION ioctl 2^32 times so the generation number
> > > > wraps and matches again, and mark_page_dirty_in_slot will be
> > > > called with a pointer to freed memory.
> > > Hmm. To zap all cached entries on overflow we need to track them.
> > > If we track them, we can zap them on each slot update and drop the
> > > generation entirely.
> > To track them you need locking.
> >
> > Isn't SET_USER_MEMORY_REGION so slow that calling it 2^32 times isn't
> > really feasible?
> Assuming it takes 1ms, it would take 49 days.

We may fail the ioctl when the max value is reached. The question is how
many slot changes we can expect from a real guest during its lifetime.

> > In any case, we can use a u64 generation count.
> Agree.

Yes, 64 bit ought to be enough for anybody.

-- 
	Gleb.
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On 10/07/2010 06:03 PM, Gleb Natapov wrote:
> > Isn't SET_USER_MEMORY_REGION so slow that calling it 2^32 times isn't
> > really feasible? Assuming it takes 1ms, it would take 49 days.
> We may fail the ioctl when the max value is reached. The question is how
> many slot changes we can expect from a real guest during its lifetime.

A normal guest has a 30 Hz timer for reading the vga framebuffer, and
multiple slots. Let's assume a 100 Hz frequency; that gives 490 days until
things stop working.

-- 
I have a truly marvellous patch that fixes the bug which this signature is
too narrow to contain.
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On Thu, Oct 07, 2010 at 06:20:53PM +0200, Avi Kivity wrote:
> On 10/07/2010 06:03 PM, Gleb Natapov wrote:
> > > Isn't SET_USER_MEMORY_REGION so slow that calling it 2^32 times
> > > isn't really feasible? Assuming it takes 1ms, it would take 49 days.
> > We may fail the ioctl when the max value is reached. The question is
> > how many slot changes we can expect from a real guest during its
> > lifetime.
> A normal guest has a 30 Hz timer for reading the vga framebuffer, and
> multiple slots. Let's assume a 100 Hz frequency; that gives 490 days
> until things stop working.

And reading the vga framebuffer needs slot changes because of dirty map
tracking?

-- 
	Gleb.
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On Tue, Oct 05, 2010 at 01:57:38PM -0300, Marcelo Tosatti wrote:
> On Mon, Oct 04, 2010 at 05:56:26PM +0200, Gleb Natapov wrote:
> > Keep track of memslot changes by keeping a generation number in the
> > memslots structure. Provide a kvm_write_guest_cached() function that
> > skips the gfn_to_hva() translation if memslots have not changed since
> > the previous invocation.
> >
> > Signed-off-by: Gleb Natapov <g...@redhat.com>
> >
> > [...]
> >
> > +int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
> > +			      gpa_t gpa)
> > +{
> > +	struct kvm_memslots *slots = kvm_memslots(kvm);
> > +	int offset = offset_in_page(gpa);
> > +	gfn_t gfn = gpa >> PAGE_SHIFT;
> > +
> > +	ghc->gpa = gpa;
> > +	ghc->generation = slots->generation;
> > +	ghc->memslot = gfn_to_memslot(kvm, gfn);
> > +	ghc->hva = gfn_to_hva(kvm, gfn);
> > +	if (!kvm_is_error_hva(ghc->hva))
> > +		ghc->hva += offset;
> > +	else
> > +		return -EFAULT;
> > +
> > +	return 0;
> > +}
> Should use a unique kvm_memslots structure for the cache entry, since it
> can change in between (use gfn_to_hva_memslot, etc on slots pointer).

I do not understand what you mean here. The kvm_memslots structure itself
is not cached; only the various translations that use it are cached, and a
translation result is never used if kvm_memslots has changed.

> Also should zap any cached entries on overflow, otherwise malicious
> userspace could make use of stale slots:

There is only one cached entry at any given time. A user who wants to
write into guest memory often defines a gfn_to_hva_cache variable
somewhere, initializes it with kvm_gfn_to_hva_cache_init(), and then calls
kvm_write_guest_cached() on it. If there were no slot changes in between,
the cached translation is used; otherwise the cache is recalculated.

> > +void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
> > +{
> > +	struct kvm_memory_slot *memslot;
> > +
> > +	memslot = gfn_to_memslot(kvm, gfn);
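The calling pattern Gleb describes, as a short sketch; struct
my_shared_page and its fields are illustrative placeholders, not code from
the series:

    struct my_shared_page {		/* hypothetical user of the cache */
    	struct gfn_to_hva_cache ghc;
    	u64 payload;
    };

    /* Slow path, done once (or whenever the guest moves the area):
     * translates gpa and records memslot, hva and slots->generation. */
    static int my_shared_page_init(struct kvm *kvm, struct my_shared_page *p,
    			       gpa_t gpa)
    {
    	return kvm_gfn_to_hva_cache_init(kvm, &p->ghc, gpa);
    }

    /* Hot path: as long as no SET_USER_MEMORY_REGION bumped the
     * generation, this skips gfn_to_memslot()/gfn_to_hva() entirely. */
    static int my_shared_page_update(struct kvm *kvm, struct my_shared_page *p)
    {
    	return kvm_write_guest_cached(kvm, &p->ghc, &p->payload,
    				      sizeof(p->payload));
    }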
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On Wed, Oct 06, 2010 at 01:14:17PM +0200, Gleb Natapov wrote:
> > > +int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
> > > +			      gpa_t gpa)
> > > +{
> > > +	struct kvm_memslots *slots = kvm_memslots(kvm);
> > > +	int offset = offset_in_page(gpa);
> > > +	gfn_t gfn = gpa >> PAGE_SHIFT;
> > > +
> > > +	ghc->gpa = gpa;
> > > +	ghc->generation = slots->generation;

kvm->memslots can change here.

> > > +	ghc->memslot = gfn_to_memslot(kvm, gfn);
> > > +	ghc->hva = gfn_to_hva(kvm, gfn);

And if so, gfn_to_memslot / gfn_to_hva will use the new memslots pointer.
Should dereference all values from one copy of the kvm->memslots pointer.

> > > +	if (!kvm_is_error_hva(ghc->hva))
> > > +		ghc->hva += offset;
> > > +	else
> > > +		return -EFAULT;
> > > +
> > > +	return 0;
> > > +}
> > Should use a unique kvm_memslots structure for the cache entry, since
> > it can change in between (use gfn_to_hva_memslot, etc on slots
> > pointer).
> I do not understand what you mean here. The kvm_memslots structure
> itself is not cached; only the various translations that use it are
> cached, and a translation result is never used if kvm_memslots has
> changed.
>
> > Also should zap any cached entries on overflow, otherwise malicious
> > userspace could make use of stale slots:
> There is only one cached entry at any given time. A user who wants to
> write into guest memory often defines a gfn_to_hva_cache variable
> somewhere, initializes it with kvm_gfn_to_hva_cache_init(), and then
> calls kvm_write_guest_cached() on it. If there were no slot changes in
> between, the cached translation is used; otherwise the cache is
> recalculated.

Malicious userspace can cause an entry to be cached, call the
SET_USER_MEMORY_REGION ioctl 2^32 times so the generation number wraps and
matches again, and mark_page_dirty_in_slot will then be called with a
pointer to freed memory.
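What Marcelo is asking for, as a sketch: read kvm->memslots once and derive
every cached field from that one snapshot, so a concurrent slot update
cannot be half-observed. This assumes helpers such as __gfn_to_memslot()
and gfn_to_hva_memslot() that operate on an explicit slots pointer; the
eventual fix may differ in detail:

    int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
    			      gpa_t gpa)
    {
    	struct kvm_memslots *slots = kvm_memslots(kvm);	/* single snapshot */
    	int offset = offset_in_page(gpa);
    	gfn_t gfn = gpa >> PAGE_SHIFT;

    	ghc->gpa = gpa;
    	ghc->generation = slots->generation;
    	/* Look up slot and hva against the same slots snapshot the
    	 * generation was read from -- never through kvm-> again. */
    	ghc->memslot = __gfn_to_memslot(slots, gfn);
    	if (!ghc->memslot)
    		return -EFAULT;
    	ghc->hva = gfn_to_hva_memslot(ghc->memslot, gfn);
    	if (kvm_is_error_hva(ghc->hva))
    		return -EFAULT;
    	ghc->hva += offset;

    	return 0;
    }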
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On Wed, Oct 06, 2010 at 11:38:47AM -0300, Marcelo Tosatti wrote:
> On Wed, Oct 06, 2010 at 01:14:17PM +0200, Gleb Natapov wrote:
> > > > +	ghc->gpa = gpa;
> > > > +	ghc->generation = slots->generation;
>
> kvm->memslots can change here.
>
> > > > +	ghc->memslot = gfn_to_memslot(kvm, gfn);
> > > > +	ghc->hva = gfn_to_hva(kvm, gfn);
>
> And if so, gfn_to_memslot / gfn_to_hva will use the new memslots
> pointer. Should dereference all values from one copy of the
> kvm->memslots pointer.

Ah, I see now. Thanks! Will fix.

> [...]
>
> Malicious userspace can cause an entry to be cached, call the
> SET_USER_MEMORY_REGION ioctl 2^32 times so the generation number wraps
> and matches again, and mark_page_dirty_in_slot will then be called with
> a pointer to freed memory.

Hmm. To zap all cached entries on overflow we need to track them. If we
track them, we can zap them on each slot update and drop the generation
entirely.

-- 
	Gleb.
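For contrast, the tracking variant being weighed here would look roughly
like the sketch below. It was never implemented in this thread; the list,
lock, and valid fields are hypothetical, and the spinlock is exactly the
locking cost Avi points to elsewhere in the thread:

    /* Hypothetical tracking variant: every live cache sits on a per-VM
     * list and is invalidated on each slot update. */
    struct gfn_to_hva_cache {
    	struct list_head link;	/* on kvm->hva_caches, hypothetical */
    	bool valid;		/* cleared instead of comparing generations */
    	gpa_t gpa;
    	unsigned long hva;
    	struct kvm_memory_slot *memslot;
    };

    /* Called from SET_USER_MEMORY_REGION; needs a lock because caches
     * are registered and used from other threads. */
    static void kvm_invalidate_hva_caches(struct kvm *kvm)
    {
    	struct gfn_to_hva_cache *ghc;

    	spin_lock(&kvm->hva_cache_lock);	/* hypothetical lock */
    	list_for_each_entry(ghc, &kvm->hva_caches, link)
    		ghc->valid = false;
    	spin_unlock(&kvm->hva_cache_lock);
    }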
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On Mon, Oct 04, 2010 at 05:56:26PM +0200, Gleb Natapov wrote:
> Keep track of memslot changes by keeping a generation number in the
> memslots structure. Provide a kvm_write_guest_cached() function that
> skips the gfn_to_hva() translation if memslots have not changed since
> the previous invocation.
>
> Signed-off-by: Gleb Natapov <g...@redhat.com>
>
> [...]
>
> +int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
> +			      gpa_t gpa)
> +{
> +	struct kvm_memslots *slots = kvm_memslots(kvm);
> +	int offset = offset_in_page(gpa);
> +	gfn_t gfn = gpa >> PAGE_SHIFT;
> +
> +	ghc->gpa = gpa;
> +	ghc->generation = slots->generation;
> +	ghc->memslot = gfn_to_memslot(kvm, gfn);
> +	ghc->hva = gfn_to_hva(kvm, gfn);
> +	if (!kvm_is_error_hva(ghc->hva))
> +		ghc->hva += offset;
> +	else
> +		return -EFAULT;
> +
> +	return 0;
> +}

Should use a unique kvm_memslots structure for the cache entry, since it
can change in between (use gfn_to_hva_memslot, etc on slots pointer).

Also should zap any cached entries on overflow, otherwise malicious
userspace could make use of stale slots:

> +void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
> +{
> +	struct kvm_memory_slot *memslot;
> +
> +	memslot = gfn_to_memslot(kvm, gfn);
> +	mark_page_dirty_in_slot(kvm, memslot, gfn);
> +}
> +
>  /*
>   * The vCPU has executed a HLT instruction with in-kernel mode enabled.
>   */
> --
> 1.7.1
[PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
Keep track of memslot changes by keeping a generation number in the
memslots structure. Provide a kvm_write_guest_cached() function that skips
the gfn_to_hva() translation if memslots have not changed since the
previous invocation.

Signed-off-by: Gleb Natapov <g...@redhat.com>
---
 include/linux/kvm_host.h  |    7 +++++
 include/linux/kvm_types.h |    7 +++++
 virt/kvm/kvm_main.c       |   57 +++++++++++++++++++++++++++++++++++++++---
 3 files changed, 67 insertions(+), 4 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a08614e..4dff9a1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -199,6 +199,7 @@ struct kvm_irq_routing_table {};
 
 struct kvm_memslots {
 	int nmemslots;
+	u32 generation;
 	struct kvm_memory_slot memslots[KVM_MEMORY_SLOTS +
 					KVM_PRIVATE_MEM_SLOTS];
 };
 
@@ -352,12 +353,18 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn, const void *data,
 			 int offset, int len);
 int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
 		    unsigned long len);
+int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+			   void *data, unsigned long len);
+int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+			      gpa_t gpa);
 int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len);
 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len);
 struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
 int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
 unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
+void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
+			     gfn_t gfn);
 
 void kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 7ac0d4e..ee6eb71 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -67,4 +67,11 @@ struct kvm_lapic_irq {
 	u32 dest_id;
 };
 
+struct gfn_to_hva_cache {
+	u32 generation;
+	gpa_t gpa;
+	unsigned long hva;
+	struct kvm_memory_slot *memslot;
+};
+
 #endif /* __KVM_TYPES_H__ */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index db58a1b..45ef50c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -687,6 +687,7 @@ skip_lpage:
 		memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
 		if (mem->slot >= slots->nmemslots)
 			slots->nmemslots = mem->slot + 1;
+		slots->generation++;
 		slots->memslots[mem->slot].flags |= KVM_MEMSLOT_INVALID;
 
 		old_memslots = kvm->memslots;
@@ -723,6 +724,7 @@ skip_lpage:
 	memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
 	if (mem->slot >= slots->nmemslots)
 		slots->nmemslots = mem->slot + 1;
+	slots->generation++;
 
 	/* actual memory is freed via old in kvm_free_physmem_slot below */
 	if (!npages) {
@@ -1247,6 +1249,47 @@ int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data,
 	return 0;
 }
 
+int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+			      gpa_t gpa)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	int offset = offset_in_page(gpa);
+	gfn_t gfn = gpa >> PAGE_SHIFT;
+
+	ghc->gpa = gpa;
+	ghc->generation = slots->generation;
+	ghc->memslot = gfn_to_memslot(kvm, gfn);
+	ghc->hva = gfn_to_hva(kvm, gfn);
+	if (!kvm_is_error_hva(ghc->hva))
+		ghc->hva += offset;
+	else
+		return -EFAULT;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_gfn_to_hva_cache_init);
+
+int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
+			   void *data, unsigned long len)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	int r;
+
+	if (slots->generation != ghc->generation)
+		kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa);
+
+	if (kvm_is_error_hva(ghc->hva))
+		return -EFAULT;
+
+	r = copy_to_user((void __user *)ghc->hva, data, len);
+	if (r)
+		return -EFAULT;
+	mark_page_dirty_in_slot(kvm, ghc->memslot, ghc->gpa >> PAGE_SHIFT);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_write_guest_cached);
+
 int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len)
 {
 	return kvm_write_guest_page(kvm, gfn, empty_zero_page, offset, len);
@@ -1272,11 +1315,9 @@ int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len)
 }
 EXPORT_SYMBOL_GPL(kvm_clear_guest);
 
-void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
+void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
+
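The archive cuts this diff off mid-hunk. From the wrapper form of
mark_page_dirty() visible in Marcelo's review quote, the factored-out
helper plausibly reads as follows; this is a reconstruction under that
assumption, not the verbatim remainder of the patch:

    void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
    			     gfn_t gfn)
    {
    	if (memslot && memslot->dirty_bitmap) {
    		unsigned long rel_gfn = gfn - memslot->base_gfn;

    		/* Same bitmap update mark_page_dirty() performed before
    		 * the split; the slot is now passed in by the caller. */
    		generic___set_bit(rel_gfn, memslot->dirty_bitmap);
    	}
    }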
Re: [PATCH v6 04/12] Add memory slot versioning and use it to provide fast guest write interface
On 10/04/2010 11:56 AM, Gleb Natapov wrote:
> Keep track of memslot changes by keeping a generation number in the
> memslots structure. Provide a kvm_write_guest_cached() function that
> skips the gfn_to_hva() translation if memslots have not changed since
> the previous invocation.
>
> Signed-off-by: Gleb Natapov <g...@redhat.com>

Acked-by: Rik van Riel <r...@redhat.com>

-- 
All rights reversed