Hi Paolo,
On 11/11/14, 1:28 AM, Paolo Bonzini wrote:
On 10/11/2014 15:23, Avi Kivity wrote:
It's not surprising [1]. Since the meaning of some PTE bits changes [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
if EFER changed between two invocations of the same VPID, which isn't the case.
On 17/11/2014 12:17, Wanpeng Li wrote:
It's not surprising [1]. Since the meaning of some PTE bits changes [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
if EFER changed between two invocations of the same VPID, which isn't
the case.
Is there a need for a TLB
Hi Paolo,
On 11/17/14, 7:18 PM, Paolo Bonzini wrote:
On 17/11/2014 12:17, Wanpeng Li wrote:
It's not surprising [1]. Since the meaning of some PTE bits changes [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
if EFER changed between two invocations of the same VPID, which isn't the case.
On 17/11/2014 13:00, Wanpeng Li wrote:
Sorry, maybe I didn't state my question clearly. As Avi mentioned above,
in VMX we have VPIDs, so we only need to flush if EFER changed between
two invocations of the same VPID. There is only one VPID if the
guest is UP, so my question is whether there needs to be a
Hi Paolo,
On 11/17/14, 8:04 PM, Paolo Bonzini wrote:
On 17/11/2014 13:00, Wanpeng Li wrote:
Sorry, maybe I didn't state my question clearly. As Avi mentioned above,
in VMX we have VPIDs, so we only need to flush if EFER changed between
two invocations of the same VPID. There is only one VPID
On 17/11/2014 13:14, Wanpeng Li wrote:
Sorry, maybe I didn't state my question clearly. As Avi mentioned above,
in VMX we have VPIDs, so we only need to flush if EFER changed between
two invocations of the same VPID. There is only one VPID if the
guest is UP, so my question is whether there needs
On 10/11/2014 18:38, Gleb Natapov wrote:
On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote:
On 10/11/2014 15:23, Avi Kivity wrote:
It's not surprising [1]. Since the meaning of some PTE bits changes [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
On Wed, Nov 12, 2014 at 12:33:32PM +0100, Paolo Bonzini wrote:
On 10/11/2014 18:38, Gleb Natapov wrote:
On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote:
On 10/11/2014 15:23, Avi Kivity wrote:
It's not surprising [1]. Since the meaning of some PTE bits changes [2],
the
On 12/11/2014 16:22, Gleb Natapov wrote:
Nehalem results:
userspace exit, urn 17560 17726 17628 17572 17417
lightweight exit, urn 3316 3342 3342 3319 3328
userspace exit, LOAD_EFER, guest!=host 12200 11772 12130 12164 12327
lightweight
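Reading these rows at face value (cycle counts, five samples per configuration, with "urn" presumably short for the user return notifier path), the averages come out roughly to:

  userspace exit, urn:                     (17560+17726+17628+17572+17417)/5 ≈ 17581 cycles
  lightweight exit, urn:                   (3316+3342+3342+3319+3328)/5      ≈  3329 cycles
  userspace exit, LOAD_EFER, guest!=host:  (12200+11772+12130+12164+12327)/5 ≈ 12119 cycles

so on this Nehalem box a full userspace exit looks roughly 5400 cycles cheaper with the LOAD_EFER controls than with the user return notifier path, even in the guest != host case.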
On Wed, Nov 12, 2014 at 04:26:29PM +0100, Paolo Bonzini wrote:
On 12/11/2014 16:22, Gleb Natapov wrote:
Nehalem results:
userspace exit, urn 17560 17726 17628 17572 17417
lightweight exit, urn 3316 3342 3342 3319 3328
userspace
On 12/11/2014 16:32, Gleb Natapov wrote:
userspace exit, urn 17560 17726 17628 17572 17417
lightweight exit, urn 3316 3342 3342 3319 3328
userspace exit, LOAD_EFER, guest!=host 12200 11772 12130 12164 12327
lightweight exit,
On Wed, Nov 12, 2014 at 7:51 AM, Paolo Bonzini pbonz...@redhat.com wrote:
On 12/11/2014 16:32, Gleb Natapov wrote:
userspace exit, urn 17560 17726 17628 17572 17417
lightweight exit, urn 3316 3342 3342 3319 3328
userspace exit, LOAD_EFER,
Assuming you're running both of my patches (LOAD_EFER regardless of
nx, but skip LOAD_EFER if guest == host), then some of the speedup may
be just less code running. I haven't figured out exactly when
vmx_save_host_state runs, but my patches avoid a call to
kvm_set_shared_msr, which is worth
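For clarity, here is a minimal sketch in C of the policy being described (hypothetical names and structure, not Andy's actual patches): switch EFER through the VMCS load controls whenever guest and host EFER differ, regardless of whether NX is the bit that differs, and do nothing at all when they are equal, so that neither the VMCS controls nor a kvm_set_shared_msr-style path has any work to do.

/*
 * Hypothetical sketch (not the actual patches) of the EFER-switching policy
 * described above.  If guest and host EFER are equal there is nothing to
 * switch and nothing for the user return notifier to restore; otherwise
 * prefer the atomic VMCS controls (VM_ENTRY_LOAD_IA32_EFER /
 * VM_EXIT_LOAD_IA32_EFER) when the CPU supports them, and only fall back
 * to a shared-MSR style wrmsr switch when it does not.
 */
#include <stdbool.h>
#include <stdint.h>

enum efer_switch_method {
    EFER_SWITCH_NONE,        /* guest == host: skip LOAD_EFER and the notifier */
    EFER_SWITCH_VMCS,        /* use the VMCS entry/exit EFER load controls */
    EFER_SWITCH_SHARED_MSR,  /* wrmsr path, host value restored lazily */
};

static enum efer_switch_method
pick_efer_switch(uint64_t guest_efer, uint64_t host_efer, bool has_load_efer_controls)
{
    if (guest_efer == host_efer)
        return EFER_SWITCH_NONE;

    /* Use the controls whenever available, not only when NX differs. */
    if (has_load_efer_controls)
        return EFER_SWITCH_VMCS;

    return EFER_SWITCH_SHARED_MSR;
}

In the benchmark rows earlier in the thread, "urn" presumably corresponds to the fallback path and "LOAD_EFER, guest!=host" to the VMCS path.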
On 10/11/2014 13:15, Paolo Bonzini wrote:
On 10/11/2014 11:45, Gleb Natapov wrote:
I also tried making the other shared MSRs the same between guest and
host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier
has nothing to do. That saves about 400-500 cycles on inl_from_qemu.
On 09/11/2014 17:36, Andy Lutomirski wrote:
The purpose of the vmexit test is to show us various overheads, so why not
measure EFER switch overhead by having two tests, one with equal EFER and
another with different EFER, instead of hiding it.
I'll try this. We might need three tests, though: NX
On Mon, Nov 10, 2014 at 11:03:35AM +0100, Paolo Bonzini wrote:
On 09/11/2014 17:36, Andy Lutomirski wrote:
The purpose of the vmexit test is to show us various overheads, so why not
measure EFER switch overhead by having two tests, one with equal EFER and
another with different EFER, instead of
On 10/11/2014 11:45, Gleb Natapov wrote:
I also tried making the other shared MSRs the same between guest and
host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier
has nothing to do. That saves about 400-500 cycles on inl_from_qemu. I
do want to dig out my old Core 2
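For readers following along, a rough sketch of the shared-MSR / user return notifier mechanism being discussed (invented names and stub functions, not the code in arch/x86/kvm/x86.c): guest values of MSRs such as STAR, LSTAR, CSTAR and SYSCALL_MASK are written only if they differ from what is currently loaded, and the host values are restored lazily, the next time the thread returns to host userspace.

/*
 * Rough sketch of the shared-MSR / user return notifier idea (invented
 * names and stubs, not the kernel implementation).
 */
#include <stdbool.h>
#include <stdint.h>

#define NR_SHARED_MSRS 4            /* e.g. STAR, LSTAR, CSTAR, SYSCALL_MASK */

struct shared_msr {
    uint32_t index;                 /* MSR number */
    uint64_t host_value;            /* value the host expects on return to userspace */
    uint64_t curr_value;            /* value currently loaded in the hardware MSR */
};

static struct shared_msr shared_msrs[NR_SHARED_MSRS];
static bool notifier_registered;

/* Stand-ins for the real wrmsr and notifier registration. */
static void wrmsr64(uint32_t index, uint64_t value) { (void)index; (void)value; }
static void register_user_return_notifier(void (*fn)(void)) { (void)fn; }

/* Runs just before the thread returns to host userspace: put host values back. */
static void restore_shared_msrs(void)
{
    for (int i = 0; i < NR_SHARED_MSRS; i++) {
        struct shared_msr *m = &shared_msrs[i];

        if (m->curr_value != m->host_value) {
            wrmsr64(m->index, m->host_value);
            m->curr_value = m->host_value;
        }
    }
    notifier_registered = false;
}

/* Called on the VM entry path: only touch MSRs whose guest value differs. */
static void set_shared_msr(int slot, uint64_t guest_value)
{
    struct shared_msr *m = &shared_msrs[slot];

    /* If the guest value equals what is already loaded (e.g. the host
     * value), no wrmsr is done and the notifier stays idle. */
    if (m->curr_value == guest_value)
        return;

    wrmsr64(m->index, guest_value);
    m->curr_value = guest_value;

    if (!notifier_registered) {
        register_user_return_notifier(restore_shared_msrs);
        notifier_registered = true;
    }
}

With the change described above (guest and host values of those four MSRs made identical), the early-return path is always taken and the restore loop never writes anything, which is where the quoted 400-500 cycle saving on inl_from_qemu would come from.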
On 11/10/2014 02:15 PM, Paolo Bonzini wrote:
On 10/11/2014 11:45, Gleb Natapov wrote:
I also tried making the other shared MSRs the same between guest and
host (STAR, LSTAR, CSTAR, SYSCALL_MASK), so that the user return notifier
has nothing to do. That saves about 400-500 cycles on inl_from_qemu.
On 10/11/2014 15:23, Avi Kivity wrote:
It's not surprising [1]. Since the meaning of some PTE bits changes [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
if EFER changed between two invocations of the same VPID, which isn't the
case.
[1] after the fact
[2]
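As a concrete reading of the rule quoted above, here is a minimal sketch in C (invented helper name and simplified signature, not KVM code) of when a flush would actually be required; EFER.NX is the usual example of a bit that changes how PTEs are interpreted, and the VPID tag is what lets the flush be skipped when it does not match.

/*
 * Sketch of the flush rule described above (invented helper, not KVM code).
 * TLB entries are tagged with the VPID they were created under; if a bit of
 * EFER that changes how PTEs are interpreted (EFER.NX being the classic one)
 * differs between two runs of the same VPID, stale translations could be
 * reused, so the TLB has to be flushed.  With a different VPID, or with
 * EFER unchanged, no flush is needed.
 */
#include <stdbool.h>
#include <stdint.h>

#define EFER_NX (1ULL << 11)    /* "NX enable": gives PTE bit 63 its no-execute meaning */

static bool need_tlb_flush_on_reentry(uint16_t prev_vpid, uint16_t next_vpid,
                                      uint64_t prev_efer, uint64_t next_efer)
{
    if (prev_vpid != next_vpid)
        return false;           /* a fresh tag can never hit the stale entries */

    return ((prev_efer ^ next_efer) & EFER_NX) != 0;
}

In the case discussed here EFER does not change between two invocations of the same VPID, so the check returns false and no flush is needed, which is the point being made.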
On Mon, Nov 10, 2014 at 06:28:25PM +0100, Paolo Bonzini wrote:
On 10/11/2014 15:23, Avi Kivity wrote:
It's not surprising [1]. Since the meaning of some PTE bits changes [2],
the TLB has to be flushed. In VMX we have VPIDs, so we only need to flush
if EFER changed between two invocations of the same VPID, which isn't the case.
On Mon, Nov 10, 2014 at 2:45 AM, Gleb Natapov g...@kernel.org wrote:
On Mon, Nov 10, 2014 at 11:03:35AM +0100, Paolo Bonzini wrote:
On 09/11/2014 17:36, Andy Lutomirski wrote:
The purpose of the vmexit test is to show us various overheads, so why not
measure EFER switch overhead by having two
On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote:
On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski l...@amacapital.net wrote:
On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote:
On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
On Thu, Nov 6, 2014 at
On Sun, Nov 9, 2014 at 12:52 AM, Gleb Natapov g...@kernel.org wrote:
On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote:
On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski l...@amacapital.net wrote:
On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote:
On Fri, Nov 07, 2014
On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 07/11/2014 07:27, Andy Lutomirski wrote:
Is there an easy benchmark that's sensitive to the time it takes to
round-trip from userspace to guest
On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote:
On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 07/11/2014 07:27, Andy Lutomirski wrote:
Is there an easy benchmark that's
On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski l...@amacapital.net wrote:
On Nov 8, 2014 4:01 AM, Gleb Natapov g...@kernel.org wrote:
On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On
On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 07/11/2014 07:27, Andy Lutomirski wrote:
Is there an easy benchmark that's sensitive to the time it takes to
round-trip from userspace to guest and back to userspace? I think I
may have a big speedup.
The simplest is vmexit.flat from kvm-unit-tests.
On Fri, Nov 7, 2014 at 9:59 AM, Andy Lutomirski l...@amacapital.net wrote:
On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 07/11/2014 07:27, Andy Lutomirski wrote:
Is there an easy benchmark that's sensitive to the time it takes to
round-trip from userspace to
On 07/11/2014 07:27, Andy Lutomirski wrote:
Is there an easy benchmark that's sensitive to the time it takes to
round-trip from userspace to guest and back to userspace? I think I
may have a big speedup.
The simplest is vmexit.flat from kvm-unit-tests.
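vmexit.flat lives in the kvm-unit-tests tree; its inl_from_qemu case, mentioned earlier in the thread, is essentially a timed loop around a port read that has to be emulated by QEMU in host userspace. A rough, simplified sketch of that kind of measurement (conceptual guest-side C, with an assumed port number; not the actual test code):

/*
 * Each inl() from a port emulated by QEMU forces: VM exit -> KVM ->
 * return to host userspace -> QEMU emulates the read -> back into KVM ->
 * VM entry.  The average TSC delta per iteration approximates the
 * round-trip cost the thread is trying to reduce.
 */
#include <stdint.h>
#include <stdio.h>

/* Read the TSC with a crude serializing fence in front. */
static inline uint64_t rdtsc_serialized(void)
{
    uint32_t lo, hi;
    __asm__ volatile("lfence; rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* 32-bit port read; inside a guest this traps out to the emulator. */
static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

int main(void)
{
    const int iterations = 100000;
    const uint16_t port = 0xb100;   /* assumed: a port handled by QEMU, not by KVM */
    uint64_t start, end;

    start = rdtsc_serialized();
    for (int i = 0; i < iterations; i++)
        inl(port);
    end = rdtsc_serialized();

    printf("%llu cycles per userspace round trip\n",
           (unsigned long long)((end - start) / iterations));
    return 0;
}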