Re: [patch 09/11] x86/vdso: Simplify the invalid vclock case

2018-09-17 Thread Andy Lutomirski
On Fri, Sep 14, 2018 at 5:50 AM, Thomas Gleixner  wrote:
> The code flow for the vclocks is convoluted as it requires the vclocks
> which can be invalidated separately from the vsyscall_gtod_data sequence to
> store the fact in a separate variable. That's inefficient.
>

>  notrace static int do_hres(clockid_t clk, struct timespec *ts)
>  {
> struct vgtod_ts *base = &gtod->basetime[clk];
> unsigned int seq;
> -   int mode;
> -   u64 ns;
> +   u64 cycles, ns;
>
> do {
> seq = gtod_read_begin(gtod);
> -   mode = gtod->vclock_mode;
> ts->tv_sec = base->sec;
> ns = base->nsec;
> -   ns += vgetsns(&mode);
> +   cycles = vgetcyc(gtod->vclock_mode);
> +   if (unlikely((s64)cycles < 0))
> +   return vdso_fallback_gettime(clk, ts);

I was contemplating this, and I would suggest one of two optimizations:

1. Have all the helpers return a struct containing a u64 and a bool,
and use that bool.  The compiler should do okay (a rough sketch follows
below, after the second option).

2. Be sneaky.  Later in the series, you do:

if (unlikely((s64)cycles < 0))
return vdso_fallback_gettime(clk, ts);
-   ns += (cycles - gtod->cycle_last) * gtod->mult;
+   if (cycles > last)
+   ns += (cycles - last) * gtod->mult;

How about:

if (unlikely((s64)cycles <= last)) {
  if (cycles < 0) [or cycles == -1 or whatever]
return vdso_fallback_gettime;
} else {
  ns += (cycles - last) * gtod->mult;
}

which should actually make this whole mess be essentially free.
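
(For illustration, option 1 could look roughly like the sketch below. The
struct and helper shapes here are made up for the example, not taken from
the series, and only the TSC case is shown.)

/* Hypothetical shape for option 1: each vclock reader hands back the
 * cycle count plus a validity flag, so do_hres() can branch on the bool
 * instead of overloading the sign bit of the u64. */
struct vclock_read {
	u64	cycles;
	bool	valid;
};

notrace static inline struct vclock_read vgetcyc(int mode)
{
	struct vclock_read r = { .valid = true };

	if (mode == VCLOCK_TSC)
		r.cycles = (u64)rdtsc_ordered();
	else
		r.valid = false;	/* other vclock modes omitted in this sketch */
	return r;
}

/* and at the call site, inside the seqcount loop:
 *
 *	struct vclock_read r = vgetcyc(gtod->vclock_mode);
 *	if (unlikely(!r.valid))
 *		return vdso_fallback_gettime(clk, ts);
 *	ns += (r.cycles - last) * gtod->mult;
 */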

Also, I'm not entirely convinced that this "last" thing is needed at
all.  John, what's the scenario under which we need it?

--Andy


[patch V2 11/11] x86/vdso: Add CLOCK_TAI support

2018-09-17 Thread Thomas Gleixner
With the storage array in place it's now trivial to support CLOCK_TAI in
the vdso. Extend the base time storage array and add the update code.

Signed-off-by: Thomas Gleixner 
---

V2: Remove the masking trick

 arch/x86/entry/vsyscall/vsyscall_gtod.c |    4 ++++
 arch/x86/include/asm/vgtod.h            |    4 ++--
 2 files changed, 6 insertions(+), 2 deletions(-)

--- a/arch/x86/entry/vsyscall/vsyscall_gtod.c
+++ b/arch/x86/entry/vsyscall/vsyscall_gtod.c
@@ -51,6 +51,10 @@ void update_vsyscall(struct timekeeper *
base->sec = tk->xtime_sec;
base->nsec = tk->tkr_mono.xtime_nsec;
 
+   base = &vdata->basetime[CLOCK_TAI];
+   base->sec = tk->xtime_sec + (s64)tk->tai_offset;
+   base->nsec = tk->tkr_mono.xtime_nsec;
+
base = &vdata->basetime[CLOCK_MONOTONIC];
base->sec = tk->xtime_sec + tk->wall_to_monotonic.tv_sec;
nsec = tk->tkr_mono.xtime_nsec;
--- a/arch/x86/include/asm/vgtod.h
+++ b/arch/x86/include/asm/vgtod.h
@@ -18,8 +18,8 @@ struct vgtod_ts {
u64 nsec;
 };
 
-#define VGTOD_BASES   (CLOCK_MONOTONIC_COARSE + 1)
-#define VGTOD_HRES    (BIT(CLOCK_REALTIME) | BIT(CLOCK_MONOTONIC))
+#define VGTOD_BASES   (CLOCK_TAI + 1)
+#define VGTOD_HRES    (BIT(CLOCK_REALTIME) | BIT(CLOCK_MONOTONIC) | BIT(CLOCK_TAI))
 #define VGTOD_COARSE  (BIT(CLOCK_REALTIME_COARSE) | BIT(CLOCK_MONOTONIC_COARSE))
 
 /*




[patch V2 10/11] x86/vdso: Move cycle_last handling into the caller

2018-09-17 Thread Thomas Gleixner
Dereferencing gtod->cycle_last all over the place and doing the cycles <
last comparison in the vclock read functions generates horrible code. Doing
it at the call site is much better and gains a few cycles both for TSC and
pvclock.

Caveat: This adds the comparison to the hyperv vclock as well, but I have
no way to test that.

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c |   39 ++-
 1 file changed, 7 insertions(+), 32 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -76,9 +76,8 @@ static notrace const struct pvclock_vsys
 static notrace u64 vread_pvclock(void)
 {
const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti;
-   u64 ret;
-   u64 last;
u32 version;
+   u64 ret;
 
/*
 * Note: The kernel and hypervisor must guarantee that cpu ID
@@ -111,13 +110,7 @@ static notrace u64 vread_pvclock(void)
ret = __pvclock_read_cycles(pvti, rdtsc_ordered());
} while (pvclock_read_retry(pvti, version));
 
-   /* refer to vread_tsc() comment for rationale */
-   last = gtod->cycle_last;
-
-   if (likely(ret >= last))
-   return ret;
-
-   return last;
+   return ret;
 }
 #endif
 #ifdef CONFIG_HYPERV_TSCPAGE
@@ -130,30 +123,10 @@ static notrace u64 vread_hvclock(void)
 }
 #endif
 
-notrace static u64 vread_tsc(void)
-{
-   u64 ret = (u64)rdtsc_ordered();
-   u64 last = gtod->cycle_last;
-
-   if (likely(ret >= last))
-   return ret;
-
-   /*
-* GCC likes to generate cmov here, but this branch is extremely
-* predictable (it's just a function of time and the likely is
-* very likely) and there's a data dependence, so force GCC
-* to generate a branch instead.  I don't barrier() because
-* we don't actually need a barrier, and if this function
-* ever gets inlined it will generate worse code.
-*/
-   asm volatile ("");
-   return last;
-}
-
 notrace static inline u64 vgetcyc(int mode)
 {
if (mode == VCLOCK_TSC)
-   return vread_tsc();
+   return (u64)rdtsc_ordered();
 #ifdef CONFIG_PARAVIRT_CLOCK
else if (mode == VCLOCK_PVCLOCK)
return vread_pvclock();
@@ -168,17 +141,19 @@ notrace static inline u64 vgetcyc(int mo
 notrace static int do_hres(clockid_t clk, struct timespec *ts)
 {
struct vgtod_ts *base = &gtod->basetime[clk];
+   u64 cycles, last, ns;
unsigned int seq;
-   u64 cycles, ns;
 
do {
seq = gtod_read_begin(gtod);
ts->tv_sec = base->sec;
ns = base->nsec;
+   last = gtod->cycle_last;
cycles = vgetcyc(gtod->vclock_mode);
if (unlikely((s64)cycles < 0))
return vdso_fallback_gettime(clk, ts);
-   ns += (cycles - gtod->cycle_last) * gtod->mult;
+   if (cycles > last)
+   ns += (cycles - last) * gtod->mult;
ns >>= gtod->shift;
} while (unlikely(gtod_read_retry(gtod, seq)));
 




[patch V2 08/11] x86/vdso: Replace the clockid switch case

2018-09-17 Thread Thomas Gleixner
Now that the time getter functions use the clockid as index into the
storage array for the base time access, the switch case can be replaced.

- Check for clockid >= MAX_CLOCKS and for negative clockid (CPU/FD) first
  and call the fallback function right away.

- After establishing that clockid is < MAX_CLOCKS, convert the clockid to a
  bitmask.

- Check for the supported high resolution and coarse functions by anding
  the bitmask of supported clocks and check whether a bit is set.

This completely avoids jump tables, reduces the number of conditionals and
makes the VDSO extensible for other clock ids.
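
For example, with the standard clockid values CLOCK_REALTIME == 0 and
CLOCK_MONOTONIC == 1, a CLOCK_MONOTONIC call yields msk = 1U << 1 = 0x02.
VGTOD_HRES is BIT(CLOCK_REALTIME) | BIT(CLOCK_MONOTONIC) = 0x03, so the AND
is non-zero and do_hres() is reached without any jump table or per-clock
comparison.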

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c |   38 ---
 1 file changed, 18 insertions(+), 20 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -239,29 +239,27 @@ notrace static void do_coarse(clockid_t
 
 notrace int __vdso_clock_gettime(clockid_t clock, struct timespec *ts)
 {
-   switch (clock) {
-   case CLOCK_REALTIME:
-   if (do_hres(CLOCK_REALTIME, ts) == VCLOCK_NONE)
-   goto fallback;
-   break;
-   case CLOCK_MONOTONIC:
-   if (do_hres(CLOCK_MONOTONIC, ts) == VCLOCK_NONE)
-   goto fallback;
-   break;
-   case CLOCK_REALTIME_COARSE:
-   do_coarse(CLOCK_REALTIME_COARSE, ts);
-   break;
-   case CLOCK_MONOTONIC_COARSE:
-   do_coarse(CLOCK_MONOTONIC_COARSE, ts);
-   break;
-   default:
-   goto fallback;
-   }
+   unsigned int msk;
+
+   /* Sort out negative (CPU/FD) and invalid clocks */
+   if (unlikely((unsigned int) clock >= MAX_CLOCKS))
+   return vdso_fallback_gettime(clock, ts);
 
-   return 0;
-fallback:
+   /*
+* Convert the clockid to a bitmask and use it to check which
+* clocks are handled in the VDSO directly.
+*/
+   msk = 1U << clock;
+   if (likely(msk & VGTOD_HRES)) {
+   if (do_hres(clock, ts) != VCLOCK_NONE)
+   return 0;
+   } else if (msk & VGTOD_COARSE) {
+   do_coarse(clock, ts);
+   return 0;
+   }
return vdso_fallback_gettime(clock, ts);
 }
+
 int clock_gettime(clockid_t, struct timespec *)
__attribute__((weak, alias("__vdso_clock_gettime")));
 




[patch V2 09/11] x86/vdso: Simplify the invalid vclock case

2018-09-17 Thread Thomas Gleixner
The code flow for the vclocks is convoluted: because the vclocks can be
invalidated separately from the vsyscall_gtod_data sequence, that fact has
to be stored in a separate variable. That's inefficient.

Restructure the code so the vclock readout returns cycles and the
conversion to nanoseconds is handled at the call site.

If the clock gets invalidated or the vclock is already VCLOCK_NONE, return
U64_MAX as the cycle value. That value is invalid for all clocks, and in
that case the sequence loop is left immediately by calling the fallback
function directly.

This allows removing the gettimeofday fallback, as the code now uses the
clock_gettime() fallback and does the nanoseconds to microseconds
conversion in the same way as it does when the vclock is functional. It
does not make a difference whether the division by 1000 happens in the
kernel fallback or in userspace.

Generates way better code and gains a few cycles back.
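
As an illustration of the resulting fallback handling, the gettimeofday()
path ends up roughly as sketched below (a simplified reconstruction, with
the timezone handling elided, not the literal patched code):

notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{
	if (likely(tv != NULL)) {
		struct timespec *ts = (struct timespec *)tv;

		/* do_hres() now invokes the clock_gettime() fallback
		 * itself when the vclock is invalid, so no separate
		 * gettimeofday fallback is needed. */
		do_hres(CLOCK_REALTIME, ts);
		/* ns -> us; identical whether the timespec came from the
		 * vclock path or from the kernel fallback. */
		tv->tv_usec /= 1000;
	}
	/* timezone handling unchanged, elided here */
	return 0;
}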

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c |   81 +--
 1 file changed, 21 insertions(+), 60 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -48,16 +48,6 @@ notrace static long vdso_fallback_gettim
return ret;
 }
 
-notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
-{
-   long ret;
-
-   asm("syscall" : "=a" (ret) :
-   "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
-   return ret;
-}
-
-
 #else
 
 notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
@@ -75,21 +65,6 @@ notrace static long vdso_fallback_gettim
return ret;
 }
 
-notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
-{
-   long ret;
-
-   asm(
-   "mov %%ebx, %%edx \n"
-   "mov %2, %%ebx \n"
-   "call __kernel_vsyscall \n"
-   "mov %%edx, %%ebx \n"
-   : "=a" (ret)
-   : "0" (__NR_gettimeofday), "g" (tv), "c" (tz)
-   : "memory", "edx");
-   return ret;
-}
-
 #endif
 
 #ifdef CONFIG_PARAVIRT_CLOCK
@@ -98,7 +73,7 @@ static notrace const struct pvclock_vsys
return (const struct pvclock_vsyscall_time_info *)&pvclock_page;
 }
 
-static notrace u64 vread_pvclock(int *mode)
+static notrace u64 vread_pvclock(void)
 {
const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti;
u64 ret;
@@ -130,10 +105,8 @@ static notrace u64 vread_pvclock(int *mo
do {
version = pvclock_read_begin(pvti);
 
-   if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
-   *mode = VCLOCK_NONE;
-   return 0;
-   }
+   if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT)))
+   return U64_MAX;
 
ret = __pvclock_read_cycles(pvti, rdtsc_ordered());
} while (pvclock_read_retry(pvti, version));
@@ -148,17 +121,12 @@ static notrace u64 vread_pvclock(int *mo
 }
 #endif
 #ifdef CONFIG_HYPERV_TSCPAGE
-static notrace u64 vread_hvclock(int *mode)
+static notrace u64 vread_hvclock(void)
 {
const struct ms_hyperv_tsc_page *tsc_pg =
(const struct ms_hyperv_tsc_page *)&hvclock_page;
-   u64 current_tick = hv_read_tsc_page(tsc_pg);
-
-   if (current_tick != U64_MAX)
-   return current_tick;
 
-   *mode = VCLOCK_NONE;
-   return 0;
+   return hv_read_tsc_page(tsc_pg);
 }
 #endif
 
@@ -182,47 +150,42 @@ notrace static u64 vread_tsc(void)
return last;
 }
 
-notrace static inline u64 vgetsns(int *mode)
+notrace static inline u64 vgetcyc(int mode)
 {
-   u64 v;
-   cycles_t cycles;
-
-   if (gtod->vclock_mode == VCLOCK_TSC)
-   cycles = vread_tsc();
+   if (mode == VCLOCK_TSC)
+   return vread_tsc();
 #ifdef CONFIG_PARAVIRT_CLOCK
-   else if (gtod->vclock_mode == VCLOCK_PVCLOCK)
-   cycles = vread_pvclock(mode);
+   else if (mode == VCLOCK_PVCLOCK)
+   return vread_pvclock();
 #endif
 #ifdef CONFIG_HYPERV_TSCPAGE
-   else if (gtod->vclock_mode == VCLOCK_HVCLOCK)
-   cycles = vread_hvclock(mode);
+   else if (mode == VCLOCK_HVCLOCK)
+   return vread_hvclock();
 #endif
-   else
-   return 0;
-   v = cycles - gtod->cycle_last;
-   return v * gtod->mult;
+   return U64_MAX;
 }
 
 notrace static int do_hres(clockid_t clk, struct timespec *ts)
 {
struct vgtod_ts *base = &gtod->basetime[clk];
unsigned int seq;
-   int mode;
-   u64 ns;
+   u64 cycles, ns;
 
do {
seq = gtod_read_begin(gtod);
-   mode = gtod->vclock_mode;
ts->tv_sec = base->sec;
ns = base->nsec;
-   ns += vgetsns(&mode);
+   cycles = vgetcyc(gtod->vclock_mode);
+   if (unlikely((s64)cycles < 0))
+   return vdso_fallback_gettime(clk, ts);

[patch V2 06/11] x86/vdso: Collapse high resolution functions

2018-09-17 Thread Thomas Gleixner
do_realtime() and do_monotonic() are now the same except for the storage
array index. Hand the index in as an argument and collapse the functions.

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c |   35 +++
 1 file changed, 7 insertions(+), 28 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -203,35 +203,12 @@ notrace static inline u64 vgetsns(int *m
return v * gtod->mult;
 }
 
-/* Code size doesn't matter (vdso is 4k anyway) and this is faster. */
-notrace static int __always_inline do_realtime(struct timespec *ts)
+notrace static int do_hres(clockid_t clk, struct timespec *ts)
 {
-   struct vgtod_ts *base = &gtod->basetime[CLOCK_REALTIME];
+   struct vgtod_ts *base = &gtod->basetime[clk];
unsigned int seq;
-   u64 ns;
int mode;
-
-   do {
-   seq = gtod_read_begin(gtod);
-   mode = gtod->vclock_mode;
-   ts->tv_sec = base->sec;
-   ns = base->nsec;
-   ns += vgetsns(&mode);
-   ns >>= gtod->shift;
-   } while (unlikely(gtod_read_retry(gtod, seq)));
-
-   ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
-   ts->tv_nsec = ns;
-
-   return mode;
-}
-
-notrace static int __always_inline do_monotonic(struct timespec *ts)
-{
-   struct vgtod_ts *base = &gtod->basetime[CLOCK_MONOTONIC];
-   unsigned int seq;
u64 ns;
-   int mode;
 
do {
seq = gtod_read_begin(gtod);
@@ -276,11 +253,11 @@ notrace int __vdso_clock_gettime(clockid
 {
switch (clock) {
case CLOCK_REALTIME:
-   if (do_realtime(ts) == VCLOCK_NONE)
+   if (do_hres(CLOCK_REALTIME, ts) == VCLOCK_NONE)
goto fallback;
break;
case CLOCK_MONOTONIC:
-   if (do_monotonic(ts) == VCLOCK_NONE)
+   if (do_hres(CLOCK_MONOTONIC, ts) == VCLOCK_NONE)
goto fallback;
break;
case CLOCK_REALTIME_COARSE:
@@ -303,7 +280,9 @@ int clock_gettime(clockid_t, struct time
 notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
 {
if (likely(tv != NULL)) {
-   if (unlikely(do_realtime((struct timespec *)tv) == VCLOCK_NONE))
+   struct timespec *ts = (struct timespec *) tv;
+
+   if (unlikely(do_hres(CLOCK_REALTIME, ts) == VCLOCK_NONE))
return vdso_fallback_gtod(tv, tz);
tv->tv_usec /= 1000;
}




[patch V2 05/11] x86/vdso: Introduce and use vgtod_ts

2018-09-17 Thread Thomas Gleixner
It's desirable to support more clocks in the VDSO, e.g. CLOCK_TAI. This
results either in indirect calls due to the larger switch case, which then
requires retpolines, or, when the compiler is forced to avoid jump tables,
in even more conditionals.

To avoid both variants, which are bad for performance, the high resolution
functions and the coarse grained functions will be collapsed into one for
each. That requires storing the clock specific base time in an array.

Introduce struct vgtod_ts for storage and convert the data store, the
update function and the individual clock functions over to use it.

The new storage no longer uses gtod_long_t for seconds, which depended on a
32 or 64 bit compile, because this needs to be the full 64bit value even
for 32bit when a Y2038 capable function is added. No point in keeping the
distinction alive in the internal representation.
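
For orientation, the new storage layout looks roughly like this (a sketch
reconstructed from the hunks in this series; the timezone and remaining
members are elided):

struct vgtod_ts {
	u64		sec;
	u64		nsec;
};

#define VGTOD_BASES	(CLOCK_MONOTONIC_COARSE + 1)

struct vsyscall_gtod_data {
	unsigned int	seq;

	int		vclock_mode;
	u64		cycle_last;
	u64		mask;
	u32		mult;
	u32		shift;

	struct vgtod_ts	basetime[VGTOD_BASES];

	/* timezone fields etc. elided */
};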

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c|   24 +--
 arch/x86/entry/vsyscall/vsyscall_gtod.c |   51 
 arch/x86/include/asm/vgtod.h|   36 --
 3 files changed, 61 insertions(+), 50 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -206,6 +206,7 @@ notrace static inline u64 vgetsns(int *m
 /* Code size doesn't matter (vdso is 4k anyway) and this is faster. */
 notrace static int __always_inline do_realtime(struct timespec *ts)
 {
+   struct vgtod_ts *base = &gtod->basetime[CLOCK_REALTIME];
unsigned int seq;
u64 ns;
int mode;
@@ -213,8 +214,8 @@ notrace static int __always_inline do_re
do {
seq = gtod_read_begin(gtod);
mode = gtod->vclock_mode;
-   ts->tv_sec = gtod->wall_time_sec;
-   ns = gtod->wall_time_snsec;
+   ts->tv_sec = base->sec;
+   ns = base->nsec;
ns += vgetsns(&mode);
ns >>= gtod->shift;
} while (unlikely(gtod_read_retry(gtod, seq)));
@@ -227,6 +228,7 @@ notrace static int __always_inline do_re
 
 notrace static int __always_inline do_monotonic(struct timespec *ts)
 {
+   struct vgtod_ts *base = &gtod->basetime[CLOCK_MONOTONIC];
unsigned int seq;
u64 ns;
int mode;
@@ -234,8 +236,8 @@ notrace static int __always_inline do_mo
do {
seq = gtod_read_begin(gtod);
mode = gtod->vclock_mode;
-   ts->tv_sec = gtod->monotonic_time_sec;
-   ns = gtod->monotonic_time_snsec;
+   ts->tv_sec = base->sec;
+   ns = base->nsec;
ns += vgetsns(&mode);
ns >>= gtod->shift;
} while (unlikely(gtod_read_retry(gtod, seq)));
@@ -248,21 +250,25 @@ notrace static int __always_inline do_mo
 
 notrace static void do_realtime_coarse(struct timespec *ts)
 {
+   struct vgtod_ts *base = &gtod->basetime[CLOCK_REALTIME_COARSE];
unsigned int seq;
+
do {
seq = gtod_read_begin(gtod);
-   ts->tv_sec = gtod->wall_time_coarse_sec;
-   ts->tv_nsec = gtod->wall_time_coarse_nsec;
+   ts->tv_sec = base->sec;
+   ts->tv_nsec = base->nsec;
} while (unlikely(gtod_read_retry(gtod, seq)));
 }
 
 notrace static void do_monotonic_coarse(struct timespec *ts)
 {
+   struct vgtod_ts *base = &gtod->basetime[CLOCK_MONOTONIC_COARSE];
unsigned int seq;
+
do {
seq = gtod_read_begin(gtod);
-   ts->tv_sec = gtod->monotonic_time_coarse_sec;
-   ts->tv_nsec = gtod->monotonic_time_coarse_nsec;
+   ts->tv_sec = base->sec;
+   ts->tv_nsec = base->nsec;
} while (unlikely(gtod_read_retry(gtod, seq)));
 }
 
@@ -318,7 +324,7 @@ int gettimeofday(struct timeval *, struc
 notrace time_t __vdso_time(time_t *t)
 {
/* This is atomic on x86 so we don't need any locks. */
-   time_t result = READ_ONCE(gtod->wall_time_sec);
+   time_t result = READ_ONCE(gtod->basetime[CLOCK_REALTIME].sec);
 
if (t)
*t = result;
--- a/arch/x86/entry/vsyscall/vsyscall_gtod.c
+++ b/arch/x86/entry/vsyscall/vsyscall_gtod.c
@@ -31,6 +31,8 @@ void update_vsyscall(struct timekeeper *
 {
int vclock_mode = tk->tkr_mono.clock->archdata.vclock_mode;
struct vsyscall_gtod_data *vdata = &vsyscall_gtod_data;
+   struct vgtod_ts *base;
+   u64 nsec;
 
/* Mark the new vclock used. */
BUILD_BUG_ON(VCLOCK_MAX >= 32);
@@ -45,34 +47,33 @@ void update_vsyscall(struct timekeeper *
vdata->mult = tk->tkr_mono.mult;
vdata->shift= tk->tkr_mono.shift;
 
-   vdata->wall_time_sec= tk->xtime_sec;
-   vdata->wall_time_snsec  = tk->tkr_mono.xtime_nsec;
-
-   vdata->monotonic_time_sec   = tk->xtime_sec
-   + tk->wall_to_monotonic.tv_sec;
-   vdata->monotonic_time_snsec   

[patch V2 07/11] x86/vdso: Collapse coarse functions

2018-09-17 Thread Thomas Gleixner
do_realtime_coarse() and do_monotonic_coarse() are now the same except for
the storage array index. Hand the index in as an argument and collapse the
functions.

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c |   20 
 1 file changed, 4 insertions(+), 16 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -225,21 +225,9 @@ notrace static int do_hres(clockid_t clk
return mode;
 }
 
-notrace static void do_realtime_coarse(struct timespec *ts)
+notrace static void do_coarse(clockid_t clk, struct timespec *ts)
 {
-   struct vgtod_ts *base = &gtod->basetime[CLOCK_REALTIME_COARSE];
-   unsigned int seq;
-
-   do {
-   seq = gtod_read_begin(gtod);
-   ts->tv_sec = base->sec;
-   ts->tv_nsec = base->nsec;
-   } while (unlikely(gtod_read_retry(gtod, seq)));
-}
-
-notrace static void do_monotonic_coarse(struct timespec *ts)
-{
-   struct vgtod_ts *base = &gtod->basetime[CLOCK_MONOTONIC_COARSE];
+   struct vgtod_ts *base = &gtod->basetime[clk];
unsigned int seq;
 
do {
@@ -261,10 +249,10 @@ notrace int __vdso_clock_gettime(clockid
goto fallback;
break;
case CLOCK_REALTIME_COARSE:
-   do_realtime_coarse(ts);
+   do_coarse(CLOCK_REALTIME_COARSE, ts);
break;
case CLOCK_MONOTONIC_COARSE:
-   do_monotonic_coarse(ts);
+   do_coarse(CLOCK_MONOTONIC_COARSE, ts);
break;
default:
goto fallback;




[patch V2 04/11] x86/vdso: Use unsigned int consistently for vsyscall_gtod_data::seq

2018-09-17 Thread Thomas Gleixner
The sequence count in vgtod_data is unsigned int, but the call sites use
unsigned long, which is a pointless exercise. Fix the call sites and
replace 'unsigned' with 'unsigned int' while at it.

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c |8 
 arch/x86/include/asm/vgtod.h |   10 +-
 2 files changed, 9 insertions(+), 9 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -206,7 +206,7 @@ notrace static inline u64 vgetsns(int *m
 /* Code size doesn't matter (vdso is 4k anyway) and this is faster. */
 notrace static int __always_inline do_realtime(struct timespec *ts)
 {
-   unsigned long seq;
+   unsigned int seq;
u64 ns;
int mode;
 
@@ -227,7 +227,7 @@ notrace static int __always_inline do_re
 
 notrace static int __always_inline do_monotonic(struct timespec *ts)
 {
-   unsigned long seq;
+   unsigned int seq;
u64 ns;
int mode;
 
@@ -248,7 +248,7 @@ notrace static int __always_inline do_mo
 
 notrace static void do_realtime_coarse(struct timespec *ts)
 {
-   unsigned long seq;
+   unsigned int seq;
do {
seq = gtod_read_begin(gtod);
ts->tv_sec = gtod->wall_time_coarse_sec;
@@ -258,7 +258,7 @@ notrace static void do_realtime_coarse(s
 
 notrace static void do_monotonic_coarse(struct timespec *ts)
 {
-   unsigned long seq;
+   unsigned int seq;
do {
seq = gtod_read_begin(gtod);
ts->tv_sec = gtod->monotonic_time_coarse_sec;
--- a/arch/x86/include/asm/vgtod.h
+++ b/arch/x86/include/asm/vgtod.h
@@ -15,9 +15,9 @@ typedef unsigned long gtod_long_t;
  * so be carefull by modifying this structure.
  */
 struct vsyscall_gtod_data {
-   unsigned seq;
+   unsigned int seq;
 
-   int vclock_mode;
+   int vclock_mode;
u64 cycle_last;
u64 mask;
u32 mult;
@@ -44,9 +44,9 @@ static inline bool vclock_was_used(int v
return READ_ONCE(vclocks_used) & (1 << vclock);
 }
 
-static inline unsigned gtod_read_begin(const struct vsyscall_gtod_data *s)
+static inline unsigned int gtod_read_begin(const struct vsyscall_gtod_data *s)
 {
-   unsigned ret;
+   unsigned int ret;
 
 repeat:
ret = READ_ONCE(s->seq);
@@ -59,7 +59,7 @@ static inline unsigned gtod_read_begin(c
 }
 
 static inline int gtod_read_retry(const struct vsyscall_gtod_data *s,
-   unsigned start)
+ unsigned int start)
 {
smp_rmb();
return unlikely(s->seq != start);




[patch V2 03/11] x86/vdso: Enforce 64bit clocksource

2018-09-17 Thread Thomas Gleixner
All VDSO clock sources are TSC based and use CLOCKSOURCE_MASK(64). There is
no point in masking with all FF. Get rid of it and enforce the mask in the
sanity checker.
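
Concretely, with a 64 bit mask of all ones the expression
(cycles - gtod->cycle_last) & gtod->mask is bit for bit identical to
cycles - gtod->cycle_last for a u64 delta, so the AND is pure overhead.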

Signed-off-by: Thomas Gleixner 
---
 arch/x86/entry/vdso/vclock_gettime.c |2 +-
 arch/x86/kernel/time.c   |6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -199,7 +199,7 @@ notrace static inline u64 vgetsns(int *m
 #endif
else
return 0;
-   v = (cycles - gtod->cycle_last) & gtod->mask;
+   v = cycles - gtod->cycle_last;
return v * gtod->mult;
 }
 
--- a/arch/x86/kernel/time.c
+++ b/arch/x86/kernel/time.c
@@ -120,4 +120,10 @@ void clocksource_arch_init(struct clocks
cs->name, cs->archdata.vclock_mode);
cs->archdata.vclock_mode = VCLOCK_NONE;
}
+
+   if (cs->mask != CLOCKSOURCE_MASK(64)) {
+   pr_warn("clocksource %s registered with invalid mask %016llx. Disabling vclock.\n",
+   cs->name, cs->mask);
+   cs->archdata.vclock_mode = VCLOCK_NONE;
+   }
 }




[patch V2 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support

2018-09-17 Thread Thomas Gleixner
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime()
implementation, which extended the clockid switch case and added yet
another slightly different copy of the same code.

Especially the extended switch case is problematic as the compiler tends to
generate a jump table, which then requires the use of retpolines. If jump
tables are disabled, it adds yet another conditional to the existing maze.

This series takes a different approach by consolidating the almost
identical functions into one implementation for the high resolution clocks
and one for the coarse grained clock ids, storing the base data for each
clock id in an array which is indexed by the clock id.

This completely eliminates the switch case and allows further
simplifications of the code base, which all together gain a few cycles of
performance in the end, or at least stay on par with today's code. The
resulting performance depends heavily on the micro-architecture and the
compiler.

Changes vs. V1:

  - Fix the VCLOCK_MAX sanity check
  - Remove the magic clock masking and extend the storage array

Thanks,

tglx

8<---
 arch/x86/Kconfig|1 
 arch/x86/entry/vdso/vclock_gettime.c|  199 
 arch/x86/entry/vsyscall/vsyscall_gtod.c |   55 
 arch/x86/include/asm/vgtod.h|   42 +++---
 arch/x86/kernel/time.c  |   22 +++
 include/linux/clocksource.h |5 
 kernel/time/Kconfig |4 
 kernel/time/clocksource.c   |2 
 8 files changed, 140 insertions(+), 190 deletions(-)





[patch V2 02/11] x86/time: Implement clocksource_arch_init()

2018-09-17 Thread Thomas Gleixner
Runtime validate the VCLOCK_MODE in clocksource::archdata and disable
VCLOCK if invalid, which disables the VDSO but keeps the system running.

Signed-off-by: Thomas Gleixner 
---

V2: Fix the VCLOCK_MAX check

 arch/x86/Kconfig   |1 +
 arch/x86/kernel/time.c |   16 
 2 files changed, 17 insertions(+)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -48,6 +48,7 @@ config X86
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ANON_INODES
select ARCH_CLOCKSOURCE_DATA
+   select ARCH_CLOCKSOURCE_INIT
select ARCH_DISCARD_MEMBLOCK
select ARCH_HAS_ACPI_TABLE_UPGRADE  if ACPI
select ARCH_HAS_DEBUG_VIRTUAL
--- a/arch/x86/kernel/time.c
+++ b/arch/x86/kernel/time.c
@@ -10,6 +10,7 @@
  *
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -105,3 +106,18 @@ void __init time_init(void)
 {
late_time_init = x86_late_time_init;
 }
+
+/*
+ * Sanity check the vdso related archdata content.
+ */
+void clocksource_arch_init(struct clocksource *cs)
+{
+   if (cs->archdata.vclock_mode == VCLOCK_NONE)
+   return;
+
+   if (cs->archdata.vclock_mode > VCLOCK_MAX) {
+   pr_warn("clocksource %s registered with invalid vclock_mode %d. Disabling vclock.\n",
+   cs->name, cs->archdata.vclock_mode);
+   cs->archdata.vclock_mode = VCLOCK_NONE;
+   }
+}




[patch V2 01/11] clocksource: Provide clocksource_arch_init()

2018-09-17 Thread Thomas Gleixner
Architectures have extra archdata in the clocksource, e.g. for VDSO
support. There are no sanity checks or general initialization hooks
available for this. Add support for that.

Signed-off-by: Thomas Gleixner 
---
 include/linux/clocksource.h |5 +
 kernel/time/Kconfig |4 
 kernel/time/clocksource.c   |2 ++
 3 files changed, 11 insertions(+)

--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -241,6 +241,11 @@ static inline void __clocksource_update_
__clocksource_update_freq_scale(cs, 1000, khz);
 }
 
+#ifdef CONFIG_ARCH_CLOCKSOURCE_INIT
+extern void clocksource_arch_init(struct clocksource *cs);
+#else
+static inline void clocksource_arch_init(struct clocksource *cs) { }
+#endif
 
 extern int timekeeping_notify(struct clocksource *clock);
 
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -12,6 +12,10 @@ config CLOCKSOURCE_WATCHDOG
 config ARCH_CLOCKSOURCE_DATA
bool
 
+# Architecture has extra clocksource init called from registration
+config ARCH_CLOCKSOURCE_INIT
+   bool
+
 # Clocksources require validation of the clocksource against the last
 # cycle update - x86/TSC misfeature
 config CLOCKSOURCE_VALIDATE_LAST_CYCLE
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -937,6 +937,8 @@ int __clocksource_register_scale(struct
 {
unsigned long flags;
 
+   clocksource_arch_init(cs);
+
/* Initialize mult/shift and max_idle_ns */
__clocksource_update_freq_scale(cs, scale, freq);
 




Re: [patch 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support

2018-09-17 Thread Thomas Gleixner
On Fri, 14 Sep 2018, Arnd Bergmann wrote:
> On Fri, Sep 14, 2018 at 2:52 PM Thomas Gleixner  wrote:
> A couple of architectures (s390, ia64, riscv, powerpc, arm64)
> implement the vdso as assembler code at the moment, so they
> won't be as easy to consolidate (other than outright replacing all
> the code).
> 
> The other five:
> arch/x86/entry/vdso/vclock_gettime.c
> arch/sparc/vdso/vclock_gettime.c
> arch/nds32/kernel/vdso/gettimeofday.c
> arch/mips/vdso/gettimeofday.c
> arch/arm/vdso/vgettimeofday.c
>
> are basically all minor variations of the same code base and could be
> consolidated to some degree.
> Any suggestions here? Should we plan to do that consolidation based on
> your new version, or just add clock_gettime64 in arm32 and x86-32, and then
> be done with it? The other ones will obviously still be fast for 32-bit time_t
> and will have a working non-vdso sys_clock_gettime64().

In principle consolidating all those implementations should be possible to
some extent and probably worthwhile. What's arch specific are the actual
accessors to the hardware clocks.
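
A rough sketch of what such a consolidated do_hres() could look like, with
only the counter accessor left to the architecture (the names
__arch_get_hw_counter, vdso_read_begin/retry and struct vdso_data are
hypothetical here; the body mirrors the x86 code in this series):

static __always_inline int do_hres(const struct vdso_data *vd,
				   clockid_t clk, struct timespec *ts)
{
	const struct vgtod_ts *base = &vd->basetime[clk];
	u64 cycles, last, ns;
	unsigned int seq;

	do {
		seq = vdso_read_begin(vd);
		ts->tv_sec = base->sec;
		ns = base->nsec;
		last = vd->cycle_last;
		/* the arch specific hook is the only per-arch piece */
		cycles = __arch_get_hw_counter(vd->vclock_mode);
		if ((s64)cycles < 0)
			return -1;	/* caller falls back to the syscall */
		if (cycles > last)
			ns += (cycles - last) * vd->mult;
		ns >>= vd->shift;
	} while (vdso_read_retry(vd, seq));

	ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
	ts->tv_nsec = ns;
	return 0;
}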

> I also wonder about clock_getres(): half the architectures seem to implement
> it in vdso, but notably arm32 and x86 don't, and I had not expected it to be
> performance critical given that the result is easily cached in user space.

getres() is not really performance critical, but adding it does not create
a huge problem either.

Thanks,

tglx
