> Date: Fri, 3 Jul 2020 12:42:58 -0500
> From: Scott Cheloha <[email protected]>
> 
> On Fri, Jul 03, 2020 at 02:34:20PM +0300, Paul Irofti wrote:
> > On 2020-07-03 00:40, Scott Cheloha wrote:
> > > On Fri, Jun 26, 2020 at 04:53:14PM +0300, Paul Irofti wrote:
> > > > On Wed, Jun 24, 2020 at 11:53:23AM +0200, Robert Nagy wrote:
> > > > > On 22/06/20 19:12 +0300, Paul Irofti wrote:
> > > > > > New iteration:
> > > > > > 
> > > > > >    - ps_timekeep should not coredump, pointed out by deraadt@
> > > > > >    - set ps_timekeep to 0 before user uvm_map for randomization
> > > > > >    - map timekeep before fixup. confirmed by naddy@ that it fixes NULL init
> > > > > >    - initialize va. clarified by kettenis@
> > > > > > 
> > > > > > How's the magical max skew value research going? Do we have a value yet?
> > > > > > 
> > > > > > Paul
> > > > > 
> > > > > I think we should pick 100 for now and then we can adjust it later if needed.
> > > > > 
> > > > > Of course this depends on kettenis' lfence diff so that amd ryzen tsc is sane.
> > > > 
> > > > I looked at dmesglog and the reported values are indeed small. 99 was
> > > > the highest on an Atom. I updated the diff to 100. I think we can adapt
> > > > this as we get more reports (if ever).
> > > > 
> > > > OK?
> > > 
> > > One thing...
> > > 
> > > > diff --git lib/libc/arch/amd64/gen/usertc.c lib/libc/arch/amd64/gen/usertc.c
> > > > new file mode 100644
> > > > index 00000000000..56016c8eca1
> > > > --- /dev/null
> > > > +++ lib/libc/arch/amd64/gen/usertc.c
> > > > @@ -0,0 +1,41 @@
> > > > +/*     $OpenBSD$ */
> > > > +/*
> > > > + * Copyright (c) 2020 Paul Irofti <[email protected]>
> > > > + *
> > > > + * Permission to use, copy, modify, and distribute this software for any
> > > > + * purpose with or without fee is hereby granted, provided that the above
> > > > + * copyright notice and this permission notice appear in all copies.
> > > > + *
> > > > + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
> > > > + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
> > > > + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
> > > > + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
> > > > + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
> > > > + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
> > > > + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
> > > > + */
> > > > +
> > > > +#include <sys/types.h>
> > > > +#include <sys/timetc.h>
> > > > +
> > > > +static inline uint64_t
> > > > +rdtsc(void)
> > > > +{
> > > > +       uint32_t hi, lo;
> > > > +       asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
> > > > +       return ((uint64_t)lo)|(((uint64_t)hi)<<32);
> > > > +}
> > > 
> > > We need to lfence this.
> > 
> > In userland too? Why?
> 
> I was under the impression kettenis@ had added an lfence to the kernel
> TSC's tc_get_timecount(), but I was mistaken.
> 
> We can deal with that separately.

I just committed a diff that adds the LFENCE everywhere we measure a
time interval to do some sort of calibration.  I think
that should get the skew down under 100 cycles on all reasonable
amd64 machines.

I did not add the LFENCE to tc_get_timecount() itself.  We probably
should, but in the past some networking folks complained about slow
timecounters affecting network performance, so I wanted confirmation
that this wouldn't cause problems.
