Re: Seeking a KVM benchmark

2014-11-17 Thread Wanpeng Li
Hi Paolo, On 11/11/14, 1:28 AM, Paolo Bonzini wrote: On 10/11/2014 15:23, Avi Kivity wrote: It's not surprising [1]. Since the meaning of some PTE bits changes [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same

Re: Seeking a KVM benchmark

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 12:17, Wanpeng Li wrote: It's not surprising [1]. Since the meaning of some PTE bits changes [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPID, which isn't the case. Is a TLB

Re: Seeking a KVM benchmark

2014-11-17 Thread Wanpeng Li
Hi Paolo, On 11/17/14, 7:18 PM, Paolo Bonzini wrote: On 17/11/2014 12:17, Wanpeng Li wrote: It's not surprising [1]. Since the meaning of some PTE bits changes [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same

Re: Seeking a KVM benchmark

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 13:00, Wanpeng Li wrote: Sorry, maybe I didn't state my question clearly. As Avi mentioned above, "In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPID"; since there is only one VPID if the guest is UP, my question is whether there needs to be a

Re: Seeking a KVM benchmark

2014-11-17 Thread Wanpeng Li
Hi Paolo, On 11/17/14, 8:04 PM, Paolo Bonzini wrote: On 17/11/2014 13:00, Wanpeng Li wrote: Sorry, maybe I didn't state my question clearly. As Avi mentioned above, "In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPID"; since there is only one VPID

Re: Seeking a KVM benchmark

2014-11-17 Thread Paolo Bonzini
On 17/11/2014 13:14, Wanpeng Li wrote: Sorry, maybe I didn't state my question clearly. As Avi mentioned above, "In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPID"; since there is only one VPID if the guest is UP, my question is whether there needs

Re: Seeking a KVM benchmark

2014-11-12 Thread Paolo Bonzini
On 10/11/2014 18:38, Gleb Natapov wrote: On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote: On 10/11/2014 15:23, Avi Kivity wrote: It's not surprising [1]. Since the meaning of some PTE bits changes [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush

Re: Seeking a KVM benchmark

2014-11-12 Thread Gleb Natapov
On Wed, Nov 12, 2014 at 12:33:32PM +0100, Paolo Bonzini wrote: On 10/11/2014 18:38, Gleb Natapov wrote: On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote: On 10/11/2014 15:23, Avi Kivity wrote: It's not surprising [1]. Since the meaning of some PTE bits changes [2], the

Re: Seeking a KVM benchmark

2014-11-12 Thread Paolo Bonzini
On 12/11/2014 16:22, Gleb Natapov wrote: Nehalem results (cycles, five runs; "urn" = user return notifier):
    userspace exit, urn                       17560 17726 17628 17572 17417
    lightweight exit, urn                      3316  3342  3342  3319  3328
    userspace exit, LOAD_EFER, guest!=host    12200 11772 12130 12164 12327
    lightweight

Re: Seeking a KVM benchmark

2014-11-12 Thread Gleb Natapov
On Wed, Nov 12, 2014 at 04:26:29PM +0100, Paolo Bonzini wrote: On 12/11/2014 16:22, Gleb Natapov wrote: Nehalem results:
    userspace exit, urn                       17560 17726 17628 17572 17417
    lightweight exit, urn                      3316  3342  3342  3319  3328
    userspace

Re: Seeking a KVM benchmark

2014-11-12 Thread Paolo Bonzini
On 12/11/2014 16:32, Gleb Natapov wrote:
    userspace exit, urn                       17560 17726 17628 17572 17417
    lightweight exit, urn                      3316  3342  3342  3319  3328
    userspace exit, LOAD_EFER, guest!=host    12200 11772 12130 12164 12327
    lightweight exit,

Re: Seeking a KVM benchmark

2014-11-12 Thread Andy Lutomirski
On Wed, Nov 12, 2014 at 7:51 AM, Paolo Bonzini pbonz...@redhat.com wrote: On 12/11/2014 16:32, Gleb Natapov wrote:
    userspace exit, urn                       17560 17726 17628 17572 17417
    lightweight exit, urn                      3316  3342  3342  3319  3328
    userspace exit, LOAD_EFER,

Re: Seeking a KVM benchmark

2014-11-12 Thread Paolo Bonzini
Assuming you're running both of my patches (LOAD_EFER regardless of NX, but skip LOAD_EFER if guest == host), then some of the speedup may simply be less code running. I haven't figured out exactly when vmx_save_host_state runs, but my patches avoid a call to kvm_set_shared_msr, which is worth
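
A minimal sketch of the optimization Paolo describes, with hypothetical stand-in state (VM_ENTRY_LOAD_IA32_EFER and VM_EXIT_LOAD_IA32_EFER are the real VMX control names, but this is an illustration of the idea, not the actual kvm patch):

    #include <stdint.h>

    /* Stand-ins for VMCS state; in kvm these are vmcs_write32/64 calls. */
    #define VM_ENTRY_LOAD_IA32_EFER (1u << 15)
    #define VM_EXIT_LOAD_IA32_EFER  (1u << 21)
    static uint32_t entry_ctls, exit_ctls;
    static uint64_t vmcs_guest_efer, vmcs_host_efer;

    /* Use the VMCS EFER-load controls whenever guest and host EFER
     * differ (not only when NX differs), and skip EFER switching
     * entirely when they match, so neither the atomic MSR switch nor
     * kvm_set_shared_msr() has anything to do. */
    static void setup_efer_switch(uint64_t guest_efer, uint64_t host_efer)
    {
        if (guest_efer == host_efer) {
            entry_ctls &= ~VM_ENTRY_LOAD_IA32_EFER;
            exit_ctls  &= ~VM_EXIT_LOAD_IA32_EFER;
        } else {
            entry_ctls |= VM_ENTRY_LOAD_IA32_EFER;
            exit_ctls  |= VM_EXIT_LOAD_IA32_EFER;
            vmcs_guest_efer = guest_efer;
            vmcs_host_efer  = host_efer;
        }
    }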

Re: Seeking a KVM benchmark

2014-11-11 Thread Paolo Bonzini
On 10/11/2014 13:15, Paolo Bonzini wrote: On 10/11/2014 11:45, Gleb Natapov wrote: I also tried making the other shared MSRs the same between guest and host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier has nothing to do. That saves about 400-500 cycles on

Re: Seeking a KVM benchmark

2014-11-10 Thread Paolo Bonzini
On 09/11/2014 17:36, Andy Lutomirski wrote: The purpose of the vmexit test is to show us various overheads, so why not measure EFER switch overhead by having two tests, one with equal EFER and another with different EFER, instead of hiding it. I'll try this. We might need three tests, though: NX
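
The truncated proposal appears to split the measurement by NX, presumably because kvm's choice of EFER-switching mechanism historically depended on whether the NX bit differed between guest and host. A hypothetical test matrix along those lines (the names and the exact split are my guess, not the thread's final design; the EFER bit positions are the architectural ones):

    #include <stdint.h>

    #define EFER_SCE (1ull << 0)   /* syscall enable */
    #define EFER_NX  (1ull << 11)  /* no-execute enable */

    struct efer_test {
        const char *name;
        uint64_t guest_xor;  /* EFER bits flipped relative to the host */
    };

    static const struct efer_test tests[] = {
        { "equal EFER",             0 },
        { "EFER differs in NX",     EFER_NX },
        { "EFER differs, NX equal", EFER_SCE },
    };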

Re: Seeking a KVM benchmark

2014-11-10 Thread Gleb Natapov
On Mon, Nov 10, 2014 at 11:03:35AM +0100, Paolo Bonzini wrote: On 09/11/2014 17:36, Andy Lutomirski wrote: The purpose of the vmexit test is to show us various overheads, so why not measure EFER switch overhead by having two tests, one with equal EFER and another with different EFER, instead of

Re: Seeking a KVM benchmark

2014-11-10 Thread Paolo Bonzini
On 10/11/2014 11:45, Gleb Natapov wrote: I also tried making the other shared MSRs the same between guest and host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier has nothing to do. That saves about 400-500 cycles on inl_from_qemu. I do want to dig out my old Core 2
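
An illustrative model of the shared-MSR machinery under discussion (structure and names are simplified stand-ins, not kvm's actual code): guest values for the syscall MSRs are written lazily, and the user return notifier restores host values only for slots that were actually changed. When guest and host values match, no slot is dirtied and the notifier does no wrmsr at all, which is where the saved cycles come from.

    #include <stdbool.h>
    #include <stdint.h>

    struct shared_msr {
        uint32_t index;        /* e.g. MSR_STAR, MSR_LSTAR, MSR_CSTAR */
        uint64_t host_value;
        bool dirty;            /* currently holds a guest value */
    };

    /* Stand-in for the real wrmsr instruction/helper. */
    static void wrmsr_stub(uint32_t index, uint64_t value)
    {
        (void)index; (void)value;
    }

    static void set_guest_msr(struct shared_msr *m, uint64_t guest_value)
    {
        if (guest_value == m->host_value)
            return;             /* nothing to switch, nothing to undo */
        wrmsr_stub(m->index, guest_value);
        m->dirty = true;
    }

    /* Runs when the host kernel returns to userspace. */
    static void on_user_return(struct shared_msr *msrs, int n)
    {
        for (int i = 0; i < n; i++) {
            if (msrs[i].dirty) {
                wrmsr_stub(msrs[i].index, msrs[i].host_value);
                msrs[i].dirty = false;
            }
        }
    }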

Re: Seeking a KVM benchmark

2014-11-10 Thread Avi Kivity
On 11/10/2014 02:15 PM, Paolo Bonzini wrote: On 10/11/2014 11:45, Gleb Natapov wrote: I also tried making the other shared MSRs the same between guest and host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier has nothing to do. That saves about 400-500 cycles on

Re: Seeking a KVM benchmark

2014-11-10 Thread Paolo Bonzini
On 10/11/2014 15:23, Avi Kivity wrote: It's not surprising [1]. Since the meaning of some PTE bits changes [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations of the same VPID, which isn't the case. [1] after the fact [2]
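
Avi's flush rule fits in a few lines; this is a hedged sketch with invented names, not kvm code. TLB entries are tagged with a VPID, and EFER (through NX and the paging mode) changes how PTEs are interpreted, so a flush is only required when the same VPID is re-entered with a different EFER; without VPIDs, every VMX transition flushes anyway.

    #include <stdbool.h>
    #include <stdint.h>

    struct vpid_ctx {
        uint16_t vpid;        /* TLB tag assigned to this vcpu */
        uint64_t last_efer;   /* EFER at the previous VM entry */
    };

    static bool needs_flush_on_entry(struct vpid_ctx *c, uint64_t efer)
    {
        bool flush = (c->last_efer != efer);  /* PTE semantics changed */
        c->last_efer = efer;
        return flush;
    }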

Re: Seeking a KVM benchmark

2014-11-10 Thread Gleb Natapov
On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote: On 10/11/2014 15:23, Avi Kivity wrote: It's not surprising [1]. Since the meaning of some PTE bits changes [2], the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush if EFER changed between two invocations

Re: Seeking a KVM benchmark

2014-11-10 Thread Andy Lutomirski
On Mon, Nov 10, 2014 at 2:45 AM, Gleb Natapov g...@kernel.org wrote: On Mon, Nov 10, 2014 at 11:03:35AM +0100, Paolo Bonzini wrote: On 09/11/2014 17:36, Andy Lutomirski wrote: The purpose of the vmexit test is to show us various overheads, so why not measure EFER switch overhead by having two

Re: Seeking a KVM benchmark

2014-11-09 Thread Gleb Natapov
On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote: On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski l...@amacapital.net wrote: On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote: On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote: On Thu, Nov 6, 2014 at

Re: Seeking a KVM benchmark

2014-11-09 Thread Andy Lutomirski
On Sun, Nov 9, 2014 at 12:52 AM, Gleb Natapov g...@kernel.org wrote: On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote: On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski l...@amacapital.net wrote: On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote: On Fri, Nov 07, 2014

Re: Seeking a KVM benchmark

2014-11-08 Thread Gleb Natapov
On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote: On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote: On 07/11/2014 07:27, Andy Lutomirski wrote: Is there an easy benchmark that's sensitive to the time it takes to round-trip from userspace to guest

Re: Seeking a KVM benchmark

2014-11-08 Thread Andy Lutomirski
On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote: On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote: On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote: On 07/11/2014 07:27, Andy Lutomirski wrote: Is there an easy benchmark that's

Re: Seeking a KVM benchmark

2014-11-08 Thread Andy Lutomirski
On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski l...@amacapital.net wrote: On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote: On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote: On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote: On

Re: Seeking a KVM benchmark

2014-11-07 Thread Andy Lutomirski
On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote: On 07/11/2014 07:27, Andy Lutomirski wrote: Is there an easy benchmark that's sensitive to the time it takes to round-trip from userspace to guest and back to userspace? I think I may have a big speedup. The

Re: Seeking a KVM benchmark

2014-11-07 Thread Andy Lutomirski
On Fri, Nov 7, 2014 at 9:59 AM, Andy Lutomirski l...@amacapital.net wrote: On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote: On 07/11/2014 07:27, Andy Lutomirski wrote: Is there an easy benchmark that's sensitive to the time it takes to round-trip from userspace to

Re: Seeking a KVM benchmark

2014-11-06 Thread Paolo Bonzini
On 07/11/2014 07:27, Andy Lutomirski wrote: Is there an easy benchmark that's sensitive to the time it takes to round-trip from userspace to guest and back to userspace? I think I may have a big speedup. The simplest is vmexit.flat from
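
vmexit.flat is one of the kvm-unit-tests guest binaries. Its measurement idiom, very roughly, is to time a loop of exit-causing instructions with the TSC; a hedged sketch (simplified, not the actual test code, and the port number is arbitrary):

    #include <stdint.h>

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* An OUT to a port that QEMU emulates forces a full exit to
     * userspace; instructions like CPUID exit only to the kernel and
     * correspond to the "lightweight exit" numbers quoted earlier in
     * the thread. */
    static inline void outl(uint16_t port, uint32_t val)
    {
        __asm__ __volatile__("outl %0, %1" : : "a"(val), "Nd"(port));
    }

    static uint64_t cycles_per_userspace_exit(unsigned int iterations)
    {
        uint64_t t0 = rdtsc();
        for (unsigned int i = 0; i < iterations; i++)
            outl(0xf4, 0);
        return (rdtsc() - t0) / iterations;
    }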