On Wed, 2008-01-09 at 18:29 -0500, Steven Rostedt wrote:
> plain text document attachment (rt-time-starvation-fix.patch)
> Handle accurate time even if there's a long delay between
> accumulated clock cycles.
>
> Signed-off-by: John Stultz <[EMAIL PROTECTED]>
> Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
> ---
>  arch/x86/kernel/vsyscall_64.c |    5 ++-
>  include/asm-x86/vgtod.h       |    2 -
>  include/linux/clocksource.h   |   58 ++++++++++++++++++++++++++++++++++++++++--
>  kernel/time/timekeeping.c     |   35 +++++++++++++------------
>  4 files changed, 80 insertions(+), 20 deletions(-)
>
> linux-2.6.21-rc5_cycles-accumulated_C7.patch
        ^^ An oldie but a goodie?
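For anyone who didn't follow the earlier thread: as I read the patch, cycle_accumulated holds cycles that have already been pulled out of the (now - cycle_last) window but not yet folded into xtime, so any reader that computes an offset from cycle_last has to add them back in. A rough userspace sketch of that calculation, reusing the patch's field names purely for illustration (not the in-kernel code):

	#include <stdint.h>

	typedef uint64_t cycle_t;

	/* Illustrative subset of the clocksource fields the patch touches. */
	struct gtod_sketch {
		cycle_t cycle_last;		/* counter value at the last accumulation */
		cycle_t cycle_accumulated;	/* cycles read out but not yet folded into xtime */
		cycle_t mask;			/* counter wrap mask */
		uint32_t mult, shift;		/* cycles -> nanoseconds scaling */
	};

	/*
	 * Nanoseconds since xtime was last updated.  Without the
	 * cycle_accumulated term, a reader like do_vgettimeofday() misses
	 * every cycle that was moved out of the (now - cycle_last) window
	 * but not yet accumulated, and lags the syscall path.
	 */
	static uint64_t offset_ns(const struct gtod_sketch *c, cycle_t now)
	{
		cycle_t cycle_delta = (now - c->cycle_last) & c->mask;

		cycle_delta += c->cycle_accumulated;
		return ((uint64_t)cycle_delta * c->mult) >> c->shift;
	}

That last addition is exactly the line the vsyscall paths below were missing.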
I was just reminded that in the time since 2.6.21-rc5, other arches besides
x86_64 have gained vgettimeofday implementations, and thus will need similar
update_vsyscall() tweaks as seen below to get the correct time.

Here's the fix for x86_64:

> Index: linux-compile-i386.git/arch/x86/kernel/vsyscall_64.c
> ===================================================================
> --- linux-compile-i386.git.orig/arch/x86/kernel/vsyscall_64.c	2008-01-09 14:10:20.000000000 -0500
> +++ linux-compile-i386.git/arch/x86/kernel/vsyscall_64.c	2008-01-09 14:17:53.000000000 -0500
> @@ -86,6 +86,7 @@ void update_vsyscall(struct timespec *wa
>  	vsyscall_gtod_data.clock.mask = clock->mask;
>  	vsyscall_gtod_data.clock.mult = clock->mult;
>  	vsyscall_gtod_data.clock.shift = clock->shift;
> +	vsyscall_gtod_data.clock.cycle_accumulated = clock->cycle_accumulated;
>  	vsyscall_gtod_data.wall_time_sec = wall_time->tv_sec;
>  	vsyscall_gtod_data.wall_time_nsec = wall_time->tv_nsec;
>  	vsyscall_gtod_data.wall_to_monotonic = wall_to_monotonic;
> @@ -121,7 +122,7 @@ static __always_inline long time_syscall
>  
>  static __always_inline void do_vgettimeofday(struct timeval * tv)
>  {
> -	cycle_t now, base, mask, cycle_delta;
> +	cycle_t now, base, accumulated, mask, cycle_delta;
>  	unsigned seq;
>  	unsigned long mult, shift, nsec;
>  	cycle_t (*vread)(void);
> @@ -135,6 +136,7 @@ static __always_inline void do_vgettimeo
>  	}
>  	now = vread();
>  	base = __vsyscall_gtod_data.clock.cycle_last;
> +	accumulated = __vsyscall_gtod_data.clock.cycle_accumulated;
>  	mask = __vsyscall_gtod_data.clock.mask;
>  	mult = __vsyscall_gtod_data.clock.mult;
>  	shift = __vsyscall_gtod_data.clock.shift;
> @@ -145,6 +147,7 @@ static __always_inline void do_vgettimeo
>  
>  	/* calculate interval: */
>  	cycle_delta = (now - base) & mask;
> +	cycle_delta += accumulated;
>  	/* convert to nsecs: */
>  	nsec += (cycle_delta * mult) >> shift;

Tony: ia64 also needs something like this, but I found the fsyscall asm bits
a little difficult to grasp. So I'll need some assistance on how to include
the accumulated cycles into the final calculation.

The following is a quick and dirty fix for powerpc so that it includes
cycle_accumulated in its calculation. It relies on the fact that the powerpc
clocksource is a 64-bit counter (we don't have to worry about multiple
overflows), so the subtraction should be safe.

Signed-off-by: John Stultz <[EMAIL PROTECTED]>

Index: 2.6.24-rc5/arch/powerpc/kernel/time.c
===================================================================
--- 2.6.24-rc5.orig/arch/powerpc/kernel/time.c	2008-01-09 15:17:32.000000000 -0800
+++ 2.6.24-rc5/arch/powerpc/kernel/time.c	2008-01-09 15:17:43.000000000 -0800
@@ -773,7 +773,7 @@ void update_vsyscall(struct timespec *wa
 	stamp_xsec = (u64) xtime.tv_nsec * XSEC_PER_SEC;
 	do_div(stamp_xsec, 1000000000);
 	stamp_xsec += (u64) xtime.tv_sec * XSEC_PER_SEC;
-	update_gtod(clock->cycle_last, stamp_xsec, t2x);
+	update_gtod(clock->cycle_last-clock->cycle_accumulated, stamp_xsec, t2x);
 }
 
 void update_vsyscall_tz(void)
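For the record, the reason the base tweak should be enough on powerpc: the
vdso side only ever computes (now - base), with no mask to worry about on a
full 64-bit counter, so handing update_gtod() a base of
cycle_last - cycle_accumulated makes the existing delta come out as
(now - cycle_last) + cycle_accumulated, i.e. the same thing the x86_64 patch
does explicitly, without touching the asm. A trivial userspace check of that
arithmetic (purely illustrative, made-up numbers):

	#include <assert.h>
	#include <stdint.h>

	typedef uint64_t cycle_t;

	int main(void)
	{
		cycle_t now = 1000900, cycle_last = 1000000, cycle_accumulated = 500;

		/* What the vdso effectively computes today: (now - base). */
		cycle_t old_delta = now - cycle_last;				/* 900: misses accumulated cycles */
		/* With the adjusted base from the patch above. */
		cycle_t new_delta = now - (cycle_last - cycle_accumulated);	/* 1400 */

		/* Same result as adding cycle_accumulated in, as x86_64 does. */
		assert(new_delta == old_delta + cycle_accumulated);
		return 0;
	}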