Re: [PATCH 2/5] x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader

2015-12-11 Thread Andy Lutomirski
On Fri, Dec 11, 2015 at 12:42 AM, Paolo Bonzini  wrote:
>
>
> On 11/12/2015 08:52, Ingo Molnar wrote:
>>
>> * Paolo Bonzini  wrote:

>>>
>>> Reviewed-by: Paolo Bonzini 
>>
>> Thanks. I've added your Reviewed-by to the 1/5 patch as well - to be able to
>> put the whole series into the tip:x86/entry tree. Let me know if you'd like
>> it to be done differently.
>
> The 1/5 patch is entirely in KVM and is not necessary for the rest of
> the series to work.  I would like it to be separate, because Marcelo has
> not yet chimed in to say why it was necessary.
>
> Can you just apply patches 2-5?

Yes, please.  I don't grok the clock update mechanism in the KVM host
well enough to be sure that patch 1 is actually correct.  All I know
is that it works better on my laptop with the patch than without the
patch and that it seems at least conceptually correct.

In any event, patch 1 is a host patch and 2-5 are guest patches, and they
only interact to the extent that it's hard for me to test 2-5 in a guest
without patch 1 on the host: without patch 1, my laptop's host kernel tends
to disable stable kvmclock, which disables the entire mechanism in the guest.

--Andy
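
A quick way to sanity-check this from inside a guest is a small userspace
probe along the lines of the sketch below (illustrative only: the clocksource
sysfs path is the standard kernel interface, but the iteration count and the
rough ns/call figure are assumptions, not how the changelog numbers were
measured).  If the host does not advertise a stable kvmclock, the guest falls
back to another clocksource and the fast vdso path never runs.

/*
 * Rough guest-side probe (sketch, not part of the series): print the active
 * clocksource and estimate the per-call cost of clock_gettime().  Expect
 * "kvm-clock" and a vdso-speed result only when the host exposes a stable
 * kvmclock.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
	char cs[64] = "";
	FILE *f = fopen("/sys/devices/system/clocksource/clocksource0/current_clocksource", "r");

	if (f && fgets(cs, sizeof(cs), f))
		printf("current clocksource: %s", cs);	/* e.g. "kvm-clock" */
	if (f)
		fclose(f);

	const long iters = 10 * 1000 * 1000;		/* arbitrary */
	struct timespec start, end, tmp;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (long i = 0; i < iters; i++)
		clock_gettime(CLOCK_MONOTONIC, &tmp);
	clock_gettime(CLOCK_MONOTONIC, &end);

	int64_t ns = (int64_t)(end.tv_sec - start.tv_sec) * 1000000000 +
		     (end.tv_nsec - start.tv_nsec);
	printf("clock_gettime(): ~%.1f ns/call\n", (double)ns / iters);
	return 0;
}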


Re: [PATCH 2/5] x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader

2015-12-11 Thread Paolo Bonzini


On 11/12/2015 08:52, Ingo Molnar wrote:
> 
> * Paolo Bonzini  wrote:
> 
>>
>>
>> On 10/12/2015 00:12, Andy Lutomirski wrote:
>>> From: Andy Lutomirski 
>>>
>>> The pvclock vdso code was too abstracted to understand easily and
>>> excessively paranoid.  Simplify it for a huge speedup.
>>>
>>> This opens the door for additional simplifications, as the vdso no
>>> longer accesses the pvti for any vcpu other than vcpu 0.
>>>
>>> Before, vclock_gettime using kvm-clock took about 45ns on my machine.
>>> With this change, it takes 29ns, which is almost as fast as the pure TSC
>>> implementation.
>>>
>>> Signed-off-by: Andy Lutomirski 
>>> ---
>>>  arch/x86/entry/vdso/vclock_gettime.c | 81 
>>> 
>>>  1 file changed, 46 insertions(+), 35 deletions(-)
>>>
>>> diff --git a/arch/x86/entry/vdso/vclock_gettime.c 
>>> b/arch/x86/entry/vdso/vclock_gettime.c
>>> index ca94fa649251..c325ba1bdddf 100644
>>> --- a/arch/x86/entry/vdso/vclock_gettime.c
>>> +++ b/arch/x86/entry/vdso/vclock_gettime.c
>>> @@ -78,47 +78,58 @@ static notrace const struct pvclock_vsyscall_time_info 
>>> *get_pvti(int cpu)
>>>  
>>>  static notrace cycle_t vread_pvclock(int *mode)
>>>  {
>>> -   const struct pvclock_vsyscall_time_info *pvti;
>>> +   const struct pvclock_vcpu_time_info *pvti = &get_pvti(0)->pvti;
>>> cycle_t ret;
>>> -   u64 last;
>>> -   u32 version;
>>> -   u8 flags;
>>> -   unsigned cpu, cpu1;
>>> -
>>> +   u64 tsc, pvti_tsc;
>>> +   u64 last, delta, pvti_system_time;
>>> +   u32 version, pvti_tsc_to_system_mul, pvti_tsc_shift;
>>>  
>>> /*
>>> -* Note: hypervisor must guarantee that:
>>> -* 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
>>> -* 2. that per-CPU pvclock time info is updated if the
>>> -*underlying CPU changes.
>>> -* 3. that version is increased whenever underlying CPU
>>> -*changes.
>>> +* Note: The kernel and hypervisor must guarantee that cpu ID
>>> +* number maps 1:1 to per-CPU pvclock time info.
>>> +*
>>> +* Because the hypervisor is entirely unaware of guest userspace
>>> +* preemption, it cannot guarantee that per-CPU pvclock time
>>> +* info is updated if the underlying CPU changes or that that
>>> +* version is increased whenever underlying CPU changes.
>>>  *
>>> +* On KVM, we are guaranteed that pvti updates for any vCPU are
>>> +* atomic as seen by *all* vCPUs.  This is an even stronger
>>> +* guarantee than we get with a normal seqlock.
>>> +*
>>> +* On Xen, we don't appear to have that guarantee, but Xen still
>>> +* supplies a valid seqlock using the version field.
>>> +
>>> +* We only do pvclock vdso timing at all if
>>> +* PVCLOCK_TSC_STABLE_BIT is set, and we interpret that bit to
>>> +* mean that all vCPUs have matching pvti and that the TSC is
>>> +* synced, so we can just look at vCPU 0's pvti.
>>>  */
>>> -   do {
>>> -   cpu = __getcpu() & VGETCPU_CPU_MASK;
>>> -   /* TODO: We can put vcpu id into higher bits of pvti.version.
>>> -* This will save a couple of cycles by getting rid of
>>> -* __getcpu() calls (Gleb).
>>> -*/
>>> -
>>> -   pvti = get_pvti(cpu);
>>> -
>>> -   version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
>>> -
>>> -   /*
>>> -* Test we're still on the cpu as well as the version.
>>> -* We could have been migrated just after the first
>>> -* vgetcpu but before fetching the version, so we
>>> -* wouldn't notice a version change.
>>> -*/
>>> -   cpu1 = __getcpu() & VGETCPU_CPU_MASK;
>>> -   } while (unlikely(cpu != cpu1 ||
>>> - (pvti->pvti.version & 1) ||
>>> - pvti->pvti.version != version));
>>> -
>>> -   if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
>>> +
>>> +   if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
>>> *mode = VCLOCK_NONE;
>>> +   return 0;
>>> +   }
>>> +
>>> +   do {
>>> +   version = pvti->version;
>>> +
>>> +   /* This is also a read barrier, so we'll read version first. */
>>> +   tsc = rdtsc_ordered();
>>> +
>>> +   pvti_tsc_to_system_mul = pvti->tsc_to_system_mul;
>>> +   pvti_tsc_shift = pvti->tsc_shift;
>>> +   pvti_system_time = pvti->system_time;
>>> +   pvti_tsc = pvti->tsc_timestamp;
>>> +
>>> +   /* Make sure that the version double-check is last. */
>>> +   smp_rmb();
>>> +   } while (unlikely((version & 1) || version != pvti->version));
>>> +
>>> +   delta = tsc - pvti_tsc;
>>> +   ret = pvti_system_time +
>>> +   pvclock_scale_delta(delta, pvti_tsc_to_system_mul,
>>> +   pvti_tsc_shift);
>>>  
>>> /* refer to tsc.c read_tsc() comment for rationale */
>>> last = gtod->cycle_last;
>>>
>>
>> Reviewed-by: Paolo Bonzini 
> 
> Thanks. I've added your Reviewed-by to the 1/5 patch as well - to be able to
> put the whole series into the tip:x86/entry tree. Let me know if you'd like
> it to be done differently.

The 1/5 patch is entirely in KVM and is not necessary for the rest of
the series to work.  I would like it to be separate, because Marcelo has
not yet chimed in to say why it was necessary.

Can you just apply patches 2-5?

Re: [PATCH 2/5] x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader

2015-12-10 Thread Paolo Bonzini


On 10/12/2015 00:12, Andy Lutomirski wrote:
> From: Andy Lutomirski 
> 
> The pvclock vdso code was too abstracted to understand easily and
> excessively paranoid.  Simplify it for a huge speedup.
> 
> This opens the door for additional simplifications, as the vdso no
> longer accesses the pvti for any vcpu other than vcpu 0.
> 
> Before, vclock_gettime using kvm-clock took about 45ns on my machine.
> With this change, it takes 29ns, which is almost as fast as the pure TSC
> implementation.
> 
> Signed-off-by: Andy Lutomirski 
> ---
>  arch/x86/entry/vdso/vclock_gettime.c | 81 
> 
>  1 file changed, 46 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/x86/entry/vdso/vclock_gettime.c 
> b/arch/x86/entry/vdso/vclock_gettime.c
> index ca94fa649251..c325ba1bdddf 100644
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -78,47 +78,58 @@ static notrace const struct pvclock_vsyscall_time_info 
> *get_pvti(int cpu)
>  
>  static notrace cycle_t vread_pvclock(int *mode)
>  {
> - const struct pvclock_vsyscall_time_info *pvti;
> + const struct pvclock_vcpu_time_info *pvti = &get_pvti(0)->pvti;
>   cycle_t ret;
> - u64 last;
> - u32 version;
> - u8 flags;
> - unsigned cpu, cpu1;
> -
> + u64 tsc, pvti_tsc;
> + u64 last, delta, pvti_system_time;
> + u32 version, pvti_tsc_to_system_mul, pvti_tsc_shift;
>  
>   /*
> -  * Note: hypervisor must guarantee that:
> -  * 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
> -  * 2. that per-CPU pvclock time info is updated if the
> -  *underlying CPU changes.
> -  * 3. that version is increased whenever underlying CPU
> -  *changes.
> +  * Note: The kernel and hypervisor must guarantee that cpu ID
> +  * number maps 1:1 to per-CPU pvclock time info.
> +  *
> +  * Because the hypervisor is entirely unaware of guest userspace
> +  * preemption, it cannot guarantee that per-CPU pvclock time
> +  * info is updated if the underlying CPU changes or that that
> +  * version is increased whenever underlying CPU changes.
>*
> +  * On KVM, we are guaranteed that pvti updates for any vCPU are
> +  * atomic as seen by *all* vCPUs.  This is an even stronger
> +  * guarantee than we get with a normal seqlock.
> +  *
> +  * On Xen, we don't appear to have that guarantee, but Xen still
> +  * supplies a valid seqlock using the version field.
> +
> +  * We only do pvclock vdso timing at all if
> +  * PVCLOCK_TSC_STABLE_BIT is set, and we interpret that bit to
> +  * mean that all vCPUs have matching pvti and that the TSC is
> +  * synced, so we can just look at vCPU 0's pvti.
>*/
> - do {
> - cpu = __getcpu() & VGETCPU_CPU_MASK;
> - /* TODO: We can put vcpu id into higher bits of pvti.version.
> -  * This will save a couple of cycles by getting rid of
> -  * __getcpu() calls (Gleb).
> -  */
> -
> - pvti = get_pvti(cpu);
> -
> - version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
> -
> - /*
> -  * Test we're still on the cpu as well as the version.
> -  * We could have been migrated just after the first
> -  * vgetcpu but before fetching the version, so we
> -  * wouldn't notice a version change.
> -  */
> - cpu1 = __getcpu() & VGETCPU_CPU_MASK;
> - } while (unlikely(cpu != cpu1 ||
> -   (pvti->pvti.version & 1) ||
> -   pvti->pvti.version != version));
> -
> - if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
> +
> + if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
>   *mode = VCLOCK_NONE;
> + return 0;
> + }
> +
> + do {
> + version = pvti->version;
> +
> + /* This is also a read barrier, so we'll read version first. */
> + tsc = rdtsc_ordered();
> +
> + pvti_tsc_to_system_mul = pvti->tsc_to_system_mul;
> + pvti_tsc_shift = pvti->tsc_shift;
> + pvti_system_time = pvti->system_time;
> + pvti_tsc = pvti->tsc_timestamp;
> +
> + /* Make sure that the version double-check is last. */
> + smp_rmb();
> + } while (unlikely((version & 1) || version != pvti->version));
> +
> + delta = tsc - pvti_tsc;
> + ret = pvti_system_time +
> + pvclock_scale_delta(delta, pvti_tsc_to_system_mul,
> + pvti_tsc_shift);
>  
>   /* refer to tsc.c read_tsc() comment for rationale */
>   last = gtod->cycle_last;
> 

Reviewed-by: Paolo Bonzini 

Re: [PATCH 2/5] x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader

2015-12-10 Thread Ingo Molnar

* Paolo Bonzini  wrote:

> 
> 
> On 10/12/2015 00:12, Andy Lutomirski wrote:
> > From: Andy Lutomirski 
> > 
> > The pvclock vdso code was too abstracted to understand easily and
> > excessively paranoid.  Simplify it for a huge speedup.
> > 
> > This opens the door for additional simplifications, as the vdso no
> > longer accesses the pvti for any vcpu other than vcpu 0.
> > 
> > Before, vclock_gettime using kvm-clock took about 45ns on my machine.
> > With this change, it takes 29ns, which is almost as fast as the pure TSC
> > implementation.
> > 
> > Signed-off-by: Andy Lutomirski 
> > ---
> >  arch/x86/entry/vdso/vclock_gettime.c | 81 
> > 
> >  1 file changed, 46 insertions(+), 35 deletions(-)
> > 
> > diff --git a/arch/x86/entry/vdso/vclock_gettime.c 
> > b/arch/x86/entry/vdso/vclock_gettime.c
> > index ca94fa649251..c325ba1bdddf 100644
> > --- a/arch/x86/entry/vdso/vclock_gettime.c
> > +++ b/arch/x86/entry/vdso/vclock_gettime.c
> > @@ -78,47 +78,58 @@ static notrace const struct pvclock_vsyscall_time_info 
> > *get_pvti(int cpu)
> >  
> >  static notrace cycle_t vread_pvclock(int *mode)
> >  {
> > -   const struct pvclock_vsyscall_time_info *pvti;
> > +   const struct pvclock_vcpu_time_info *pvti = &get_pvti(0)->pvti;
> > cycle_t ret;
> > -   u64 last;
> > -   u32 version;
> > -   u8 flags;
> > -   unsigned cpu, cpu1;
> > -
> > +   u64 tsc, pvti_tsc;
> > +   u64 last, delta, pvti_system_time;
> > +   u32 version, pvti_tsc_to_system_mul, pvti_tsc_shift;
> >  
> > /*
> > -* Note: hypervisor must guarantee that:
> > -* 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
> > -* 2. that per-CPU pvclock time info is updated if the
> > -*underlying CPU changes.
> > -* 3. that version is increased whenever underlying CPU
> > -*changes.
> > +* Note: The kernel and hypervisor must guarantee that cpu ID
> > +* number maps 1:1 to per-CPU pvclock time info.
> > +*
> > +* Because the hypervisor is entirely unaware of guest userspace
> > +* preemption, it cannot guarantee that per-CPU pvclock time
> > +* info is updated if the underlying CPU changes or that that
> > +* version is increased whenever underlying CPU changes.
> >  *
> > +* On KVM, we are guaranteed that pvti updates for any vCPU are
> > +* atomic as seen by *all* vCPUs.  This is an even stronger
> > +* guarantee than we get with a normal seqlock.
> > +*
> > +* On Xen, we don't appear to have that guarantee, but Xen still
> > +* supplies a valid seqlock using the version field.
> > +
> > +* We only do pvclock vdso timing at all if
> > +* PVCLOCK_TSC_STABLE_BIT is set, and we interpret that bit to
> > +* mean that all vCPUs have matching pvti and that the TSC is
> > +* synced, so we can just look at vCPU 0's pvti.
> >  */
> > -   do {
> > -   cpu = __getcpu() & VGETCPU_CPU_MASK;
> > -   /* TODO: We can put vcpu id into higher bits of pvti.version.
> > -* This will save a couple of cycles by getting rid of
> > -* __getcpu() calls (Gleb).
> > -*/
> > -
> > -   pvti = get_pvti(cpu);
> > -
> > -   version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
> > -
> > -   /*
> > -* Test we're still on the cpu as well as the version.
> > -* We could have been migrated just after the first
> > -* vgetcpu but before fetching the version, so we
> > -* wouldn't notice a version change.
> > -*/
> > -   cpu1 = __getcpu() & VGETCPU_CPU_MASK;
> > -   } while (unlikely(cpu != cpu1 ||
> > - (pvti->pvti.version & 1) ||
> > - pvti->pvti.version != version));
> > -
> > -   if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
> > +
> > +   if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
> > *mode = VCLOCK_NONE;
> > +   return 0;
> > +   }
> > +
> > +   do {
> > +   version = pvti->version;
> > +
> > +   /* This is also a read barrier, so we'll read version first. */
> > +   tsc = rdtsc_ordered();
> > +
> > +   pvti_tsc_to_system_mul = pvti->tsc_to_system_mul;
> > +   pvti_tsc_shift = pvti->tsc_shift;
> > +   pvti_system_time = pvti->system_time;
> > +   pvti_tsc = pvti->tsc_timestamp;
> > +
> > +   /* Make sure that the version double-check is last. */
> > +   smp_rmb();
> > +   } while (unlikely((version & 1) || version != pvti->version));
> > +
> > +   delta = tsc - pvti_tsc;
> > +   ret = pvti_system_time +
> > +   pvclock_scale_delta(delta, pvti_tsc_to_system_mul,
> > +   pvti_tsc_shift);
> >  
> > /* refer to tsc.c read_tsc() comment for rationale */
> > last = gtod->cycle_last;
> > 
> 
> Reviewed-by: Paolo Bonzini 

Thanks. I've added your Reviewed-by to the 1/5 patch as well - to be able to
put the whole series into the tip:x86/entry tree. Let me know if you'd like
it to be done differently.

[PATCH 2/5] x86, vdso, pvclock: Simplify and speed up the vdso pvclock reader

2015-12-09 Thread Andy Lutomirski
From: Andy Lutomirski 

The pvclock vdso code was too abstracted to understand easily and
excessively paranoid.  Simplify it for a huge speedup.

This opens the door for additional simplifications, as the vdso no
longer accesses the pvti for any vcpu other than vcpu 0.

Before, vclock_gettime using kvm-clock took about 45ns on my machine.
With this change, it takes 29ns, which is almost as fast as the pure TSC
implementation.

Signed-off-by: Andy Lutomirski 
---
 arch/x86/entry/vdso/vclock_gettime.c | 81 
 1 file changed, 46 insertions(+), 35 deletions(-)

diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
index ca94fa649251..c325ba1bdddf 100644
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -78,47 +78,58 @@ static notrace const struct pvclock_vsyscall_time_info *get_pvti(int cpu)
 
 static notrace cycle_t vread_pvclock(int *mode)
 {
-   const struct pvclock_vsyscall_time_info *pvti;
+   const struct pvclock_vcpu_time_info *pvti = &get_pvti(0)->pvti;
cycle_t ret;
-   u64 last;
-   u32 version;
-   u8 flags;
-   unsigned cpu, cpu1;
-
+   u64 tsc, pvti_tsc;
+   u64 last, delta, pvti_system_time;
+   u32 version, pvti_tsc_to_system_mul, pvti_tsc_shift;
 
/*
-* Note: hypervisor must guarantee that:
-* 1. cpu ID number maps 1:1 to per-CPU pvclock time info.
-* 2. that per-CPU pvclock time info is updated if the
-*underlying CPU changes.
-* 3. that version is increased whenever underlying CPU
-*changes.
+* Note: The kernel and hypervisor must guarantee that cpu ID
+* number maps 1:1 to per-CPU pvclock time info.
+*
+* Because the hypervisor is entirely unaware of guest userspace
+* preemption, it cannot guarantee that per-CPU pvclock time
+* info is updated if the underlying CPU changes or that that
+* version is increased whenever underlying CPU changes.
 *
+* On KVM, we are guaranteed that pvti updates for any vCPU are
+* atomic as seen by *all* vCPUs.  This is an even stronger
+* guarantee than we get with a normal seqlock.
+*
+* On Xen, we don't appear to have that guarantee, but Xen still
+* supplies a valid seqlock using the version field.
+
+* We only do pvclock vdso timing at all if
+* PVCLOCK_TSC_STABLE_BIT is set, and we interpret that bit to
+* mean that all vCPUs have matching pvti and that the TSC is
+* synced, so we can just look at vCPU 0's pvti.
 */
-   do {
-   cpu = __getcpu() & VGETCPU_CPU_MASK;
-   /* TODO: We can put vcpu id into higher bits of pvti.version.
-* This will save a couple of cycles by getting rid of
-* __getcpu() calls (Gleb).
-*/
-
-   pvti = get_pvti(cpu);
-
-   version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags);
-
-   /*
-* Test we're still on the cpu as well as the version.
-* We could have been migrated just after the first
-* vgetcpu but before fetching the version, so we
-* wouldn't notice a version change.
-*/
-   cpu1 = __getcpu() & VGETCPU_CPU_MASK;
-   } while (unlikely(cpu != cpu1 ||
- (pvti->pvti.version & 1) ||
- pvti->pvti.version != version));
-
-   if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT)))
+
+   if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
*mode = VCLOCK_NONE;
+   return 0;
+   }
+
+   do {
+   version = pvti->version;
+
+   /* This is also a read barrier, so we'll read version first. */
+   tsc = rdtsc_ordered();
+
+   pvti_tsc_to_system_mul = pvti->tsc_to_system_mul;
+   pvti_tsc_shift = pvti->tsc_shift;
+   pvti_system_time = pvti->system_time;
+   pvti_tsc = pvti->tsc_timestamp;
+
+   /* Make sure that the version double-check is last. */
+   smp_rmb();
+   } while (unlikely((version & 1) || version != pvti->version));
+
+   delta = tsc - pvti_tsc;
+   ret = pvti_system_time +
+   pvclock_scale_delta(delta, pvti_tsc_to_system_mul,
+   pvti_tsc_shift);
 
/* refer to tsc.c read_tsc() comment for rationale */
last = gtod->cycle_last;
-- 
2.5.0
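
For readers following the new loop: the version field implements the usual
pvclock seqcount protocol - an odd value means the hypervisor is mid-update,
and the snapshot is only trusted if the version is even and unchanged after
all the data reads.  Below is a standalone sketch of that discipline
(illustrative only: the struct and helper names are made up, and a
compiler/CPU acquire fence stands in for the ordering the kernel gets from
rdtsc_ordered() and smp_rmb()).

/*
 * Standalone sketch of the version-check discipline used by the new
 * vread_pvclock().  Not kernel code: names are invented, and
 * __atomic_thread_fence() stands in for rdtsc_ordered()/smp_rmb().
 */
#include <stdint.h>

struct pvti_sketch {
	uint32_t version;		/* odd => hypervisor update in progress */
	uint64_t tsc_timestamp;
	uint64_t system_time;
	uint32_t tsc_to_system_mul;
	int8_t	 tsc_shift;
};

struct pvti_snapshot {
	uint64_t tsc_timestamp;
	uint64_t system_time;
	uint32_t tsc_to_system_mul;
	int8_t	 tsc_shift;
};

static void snapshot_pvti(const volatile struct pvti_sketch *p,
			  struct pvti_snapshot *out)
{
	uint32_t ver;

	do {
		ver = p->version;
		__atomic_thread_fence(__ATOMIC_ACQUIRE);	/* version read first */

		out->tsc_to_system_mul = p->tsc_to_system_mul;
		out->tsc_shift	       = p->tsc_shift;
		out->system_time       = p->system_time;
		out->tsc_timestamp     = p->tsc_timestamp;

		__atomic_thread_fence(__ATOMIC_ACQUIRE);	/* re-check version last */
	} while ((ver & 1) || ver != p->version);
}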

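
The other piece the hunk relies on but does not show is
pvclock_scale_delta() (arch/x86/include/asm/pvclock.h), which converts the
raw TSC delta into nanoseconds with a per-vCPU 32.32 fixed-point multiplier.
A simplified model of the math follows (a sketch only; the in-kernel helper
gets the same result without 128-bit arithmetic so it also builds on 32-bit).

/*
 * Simplified model of pvclock_scale_delta(): shift the TSC delta by
 * tsc_shift, multiply by the 32.32 fixed-point tsc_to_system_mul, and keep
 * the upper 64 bits of the 96-bit product.
 */
#include <stdint.h>

static uint64_t scale_delta_sketch(uint64_t delta, uint32_t mul_frac, int8_t shift)
{
	if (shift < 0)
		delta >>= -shift;
	else
		delta <<= shift;

	return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}

/*
 * Worked example (hypothetical 2 GHz TSC): the host would publish roughly
 * tsc_shift = 0 and tsc_to_system_mul = 2^31 (0.5 ns per cycle), so a delta
 * of 2000 cycles scales to (2000 * 2^31) >> 32 = 1000 ns.
 */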