Re: [PATCH v2 00/10] kvm: memory slot cleanups, fix, and increase
On Mon, Dec 10, 2012 at 10:32:39AM -0700, Alex Williamson wrote:
> v2: Update 02/10 to not check userspace_addr when a slot is removed.
>     Yoshikawa-san withdrew his objection to increasing slot_bitmap prior
>     to his series to remove slot_bitmap.
>
> This series does away with any kind of complicated resizing of the
> slot array and simply does a one-time increase.  I do compact struct
> kvm_memory_slot a bit to take better advantage of the space we are
> using.  This reduces each slot from 64 bytes (x86_64) to 56 bytes.
> By enforcing the API around valid operations for an unused slot and
> fields that can be modified at runtime, I found and was able to fix a
> bug in iommu mapping for slots.  The renames enabled me to find the
> previously posted bug fix for catching slot overlaps.
>
> As mentioned in the series, the primary motivation for increasing
> memory slots is assigned devices.  With this, I've been able to
> assign 30 devices to a single VM and could have gone further, but
> ran out of SR-IOV VFs.  Typical devices use anywhere from 2-4 slots
> and max out at 8 slots.  125 user slots (plus 3 private slots) allows
> us to support between 28 and 56 typical devices per VM.
>
> Tested on x86_64; compiled on ia64, powerpc, and s390.
>
> Thanks,
> Alex

Applied, thanks.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH v2 00/10] kvm: memory slot cleanups, fix, and increase
On Mon, Dec 10, 2012 at 10:32:39AM -0700, Alex Williamson wrote:
> v2: Update 02/10 to not check userspace_addr when a slot is removed.
>     Yoshikawa-san withdrew his objection to increasing slot_bitmap prior
>     to his series to remove slot_bitmap.
>
> This series does away with any kind of complicated resizing of the
> slot array and simply does a one-time increase.  I do compact struct
> kvm_memory_slot a bit to take better advantage of the space we are
> using.  This reduces each slot from 64 bytes (x86_64) to 56 bytes.
> By enforcing the API around valid operations for an unused slot and
> fields that can be modified at runtime, I found and was able to fix a
> bug in iommu mapping for slots.  The renames enabled me to find the
> previously posted bug fix for catching slot overlaps.
>
> As mentioned in the series, the primary motivation for increasing
> memory slots is assigned devices.  With this, I've been able to
> assign 30 devices to a single VM and could have gone further, but
> ran out of SR-IOV VFs.  Typical devices use anywhere from 2-4 slots
> and max out at 8 slots.  125 user slots (plus 3 private slots) allows
> us to support between 28 and 56 typical devices per VM.
>
> Tested on x86_64; compiled on ia64, powerpc, and s390.
>
> Thanks,
> Alex

Reviewed-by: Gleb Natapov <g...@redhat.com>

> ---
>
> Alex Williamson (10):
>       kvm: Restrict non-existing slot state transitions
>       kvm: Check userspace_addr when modifying a memory slot
>       kvm: Fix iommu map/unmap to handle memory slot moves
>       kvm: Minor memory slot optimization
>       kvm: Rename KVM_MEMORY_SLOTS -> KVM_USER_MEM_SLOTS
>       kvm: Make KVM_PRIVATE_MEM_SLOTS optional
>       kvm: struct kvm_memory_slot.user_alloc -> bool
>       kvm: struct kvm_memory_slot.flags -> u32
>       kvm: struct kvm_memory_slot.id -> short
>       kvm: Increase user memory slots on x86 to 125
>
>  arch/ia64/include/asm/kvm_host.h    |    4 --
>  arch/ia64/kvm/kvm-ia64.c            |    8 ++--
>  arch/powerpc/include/asm/kvm_host.h |    6 +--
>  arch/powerpc/kvm/book3s_hv.c        |    2 -
>  arch/powerpc/kvm/powerpc.c          |    4 +-
>  arch/s390/include/asm/kvm_host.h    |    4 --
>  arch/s390/kvm/kvm-s390.c            |    4 +-
>  arch/x86/include/asm/kvm_host.h     |    8 ++--
>  arch/x86/include/asm/vmx.h          |    6 +--
>  arch/x86/kvm/vmx.c                  |    6 +--
>  arch/x86/kvm/x86.c                  |   10 ++---
>  include/linux/kvm_host.h            |   24 +++-
>  virt/kvm/kvm_main.c                 |   72 +++
>  13 files changed, 90 insertions(+), 68 deletions(-)

--
	Gleb.
Re: [PATCH v2 00/10] kvm: memory slot cleanups, fix, and increase
On Mon, Dec 10, 2012 at 10:32:39AM -0700, Alex Williamson wrote:
> v2: Update 02/10 to not check userspace_addr when a slot is removed.
>     Yoshikawa-san withdrew his objection to increasing slot_bitmap prior
>     to his series to remove slot_bitmap.
>
> This series does away with any kind of complicated resizing of the
> slot array and simply does a one-time increase.  I do compact struct
> kvm_memory_slot a bit to take better advantage of the space we are
> using.  This reduces each slot from 64 bytes (x86_64) to 56 bytes.
> By enforcing the API around valid operations for an unused slot and
> fields that can be modified at runtime, I found and was able to fix a
> bug in iommu mapping for slots.  The renames enabled me to find the
> previously posted bug fix for catching slot overlaps.
>
> As mentioned in the series, the primary motivation for increasing
> memory slots is assigned devices.  With this, I've been able to
> assign 30 devices to a single VM and could have gone further, but
> ran out of SR-IOV VFs.  Typical devices use anywhere from 2-4 slots
> and max out at 8 slots.  125 user slots (plus 3 private slots) allows
> us to support between 28 and 56 typical devices per VM.
>
> Tested on x86_64; compiled on ia64, powerpc, and s390.
>
> Thanks,
> Alex

Reviewed-by: Marcelo Tosatti <mtosa...@redhat.com>
[PATCH v2 00/10] kvm: memory slot cleanups, fix, and increase
v2: Update 02/10 to not check userspace_addr when a slot is removed.
    Yoshikawa-san withdrew his objection to increasing slot_bitmap prior
    to his series to remove slot_bitmap.

This series does away with any kind of complicated resizing of the
slot array and simply does a one-time increase.  I do compact struct
kvm_memory_slot a bit to take better advantage of the space we are
using.  This reduces each slot from 64 bytes (x86_64) to 56 bytes.
By enforcing the API around valid operations for an unused slot and
fields that can be modified at runtime, I found and was able to fix a
bug in iommu mapping for slots.  The renames enabled me to find the
previously posted bug fix for catching slot overlaps.

As mentioned in the series, the primary motivation for increasing
memory slots is assigned devices.  With this, I've been able to
assign 30 devices to a single VM and could have gone further, but
ran out of SR-IOV VFs.  Typical devices use anywhere from 2-4 slots
and max out at 8 slots.  125 user slots (plus 3 private slots) allows
us to support between 28 and 56 typical devices per VM.

Tested on x86_64; compiled on ia64, powerpc, and s390.

Thanks,
Alex

---

Alex Williamson (10):
      kvm: Restrict non-existing slot state transitions
      kvm: Check userspace_addr when modifying a memory slot
      kvm: Fix iommu map/unmap to handle memory slot moves
      kvm: Minor memory slot optimization
      kvm: Rename KVM_MEMORY_SLOTS -> KVM_USER_MEM_SLOTS
      kvm: Make KVM_PRIVATE_MEM_SLOTS optional
      kvm: struct kvm_memory_slot.user_alloc -> bool
      kvm: struct kvm_memory_slot.flags -> u32
      kvm: struct kvm_memory_slot.id -> short
      kvm: Increase user memory slots on x86 to 125

 arch/ia64/include/asm/kvm_host.h    |    4 --
 arch/ia64/kvm/kvm-ia64.c            |    8 ++--
 arch/powerpc/include/asm/kvm_host.h |    6 +--
 arch/powerpc/kvm/book3s_hv.c        |    2 -
 arch/powerpc/kvm/powerpc.c          |    4 +-
 arch/s390/include/asm/kvm_host.h    |    4 --
 arch/s390/kvm/kvm-s390.c            |    4 +-
 arch/x86/include/asm/kvm_host.h     |    8 ++--
 arch/x86/include/asm/vmx.h          |    6 +--
 arch/x86/kvm/vmx.c                  |    6 +--
 arch/x86/kvm/x86.c                  |   10 ++---
 include/linux/kvm_host.h            |   24 +++-
 virt/kvm/kvm_main.c                 |   72 +++
 13 files changed, 90 insertions(+), 68 deletions(-)