Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-12 Thread Ingo Molnar

* Nick Piggin <[EMAIL PROTECTED]> wrote:

> > the scariest bit is the adding of cpu_clock() to kernel/printk.c so 
> > late in the game, and the anti-recursion code i did there. Maybe 
> > because this only affects CONFIG_PRINTK_TIME we could try it even 
> > for v2.6.24.
> 
> Printk recursion I guess shouldn't happen, but if there is a bug 
> somewhere in eg. the scheduler locking, then it may trigger, right?

or we just crash somewhere. It's all about risk management - printk is 
crucial, and with more complex codepaths being touched in printk it 
might make sense to just add built-in recursion protection into printk 
via my patch.

> Probably pretty rare case, however it would be nice to be able to find 
> out where the recursion comes from? Can you put an instruction pointer 
> in the recursion message perhaps?

yeah, as i mentioned, if this were occurring in practice we can always 
save the stacktrace of the incident and output that. I opted for the 
simplest approach first. Thanks for your Reviewed-by, i've queued it up 
for 2.6.25.

Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-11 Thread Nick Piggin
On Saturday 08 December 2007 19:52, Ingo Molnar wrote:

> the scariest bit is the adding of cpu_clock() to kernel/printk.c so late
> in the game, and the anti-recursion code i did there. Maybe because this
> only affects CONFIG_PRINTK_TIME we could try it even for v2.6.24.

Printk recursion I guess shouldn't happen, but if there is a bug
somewhere in eg. the scheduler locking, then it may trigger, right?

Probably pretty rare case, however it would be nice to be able to
find out where the recursion comes from? Can you put an instruction
pointer in the recursion message perhaps?


> i've now completed a couple of hundred random bootups on x86 overnight,
> with the full stack of these patches applied, and no problems.
>
> Could you have a really critical look at the two patches below and give your
> Reviewed-by line(s) if you agree with having them in v2.6.24? I'd feel a
> lot better about this that way :-) I do have the feeling that it makes
> printk printout a lot more robust in general, independently of the
> cpu_clock() change - especially with more complex consoles like
> netconsole or fbcon.

Reviewed-by: Nick Piggin <[EMAIL PROTECTED]>

for both of them. However I don't feel good about getting this into 2.6.24
just for printk timestamps :P But if you can convince others, I won't
stand in your way ;)

BTW, shouldn't we still disable the TSC for this machine for 2.6.24?
Because even if you did get these patches in, sched_clock still
goes pretty wild if the TSC is not constant. cpu_clock can filter it
somewhat, but it would still lead to wrong values...

Nick

>
>   Ingo
>
> >
> Subject: printk: make printk more robust by not allowing recursion
> From: Ingo Molnar <[EMAIL PROTECTED]>
>
> make printk more robust by allowing recursion only if there's a crash
> going on. Also add recursion detection.
>
> I've tested it with an artificially injected printk recursion - instead
> of a lockup or spontaneous reboot or other crash, the output was a well
> controlled:
>
> [   41.057335] SysRq : <2>BUG: recent printk recursion!
> [   41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm Full
> kIll saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync
> showTasks Unmount shoW-blocked-tasks
>
> also do all this printk-debug logic with irqs disabled.
>
> Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
> ---
>  kernel/printk.c |   48 ++--
>  1 file changed, 38 insertions(+), 10 deletions(-)
>
> Index: linux/kernel/printk.c
> ===
> --- linux.orig/kernel/printk.c
> +++ linux/kernel/printk.c
> @@ -628,30 +628,57 @@ asmlinkage int printk(const char *fmt, ...)
>  /* cpu currently holding logbuf_lock */
>  static volatile unsigned int printk_cpu = UINT_MAX;
>
> +const char printk_recursion_bug_msg [] =
> + KERN_CRIT "BUG: recent printk recursion!\n";
> +static int printk_recursion_bug;
> +
>  asmlinkage int vprintk(const char *fmt, va_list args)
>  {
> + static int log_level_unknown = 1;
> + static char printk_buf[1024];
> +
>   unsigned long flags;
> - int printed_len;
> + int printed_len = 0;
> + int this_cpu;
>   char *p;
> - static char printk_buf[1024];
> - static int log_level_unknown = 1;
>
>   boot_delay_msec();
>
>   preempt_disable();
> - if (unlikely(oops_in_progress) && printk_cpu == smp_processor_id())
> - /* If a crash is occurring during printk() on this CPU,
> -  * make sure we can't deadlock */
> - zap_locks();
> -
>   /* This stops the holder of console_sem just where we want him */
>   raw_local_irq_save(flags);
> + this_cpu = smp_processor_id();
> +
> + /*
> +  * Ouch, printk recursed into itself!
> +  */
> + if (unlikely(printk_cpu == this_cpu)) {
> + /*
> +  * If a crash is occurring during printk() on this CPU,
> +  * then try to get the crash message out but make sure
> +  * we can't deadlock. Otherwise just return to avoid the
> +  * recursion and return - but flag the recursion so that
> +  * it can be printed at the next appropriate moment:
> +  */
> + if (!oops_in_progress) {
> + printk_recursion_bug = 1;
> + goto out_restore_irqs;
> + }
> + zap_locks();
> + }
> +
>   lockdep_off();
>   spin_lock(&logbuf_lock);
> - printk_cpu = smp_processor_id();
> + printk_cpu = this_cpu;
>
> + if (printk_recursion_bug) {
> + printk_recursion_bug = 0;
> + strcpy(printk_buf, printk_recursion_bug_msg);
> + printed_len = sizeof(printk_recursion_bug_msg);
> + }
>   /* Emit the output into the temporary buffer */
> - printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
> + printed_len += vscnprintf(printk_buf + printed_len,
> +   sizeof(printk_buf), fmt, args);


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Guillaume Chazarain
On Dec 8, 2007 9:52 AM, Ingo Molnar <[EMAIL PROTECTED]> wrote:

> the scariest bit isn't even the scaling i think - that is a fairly
> straightforward and clean PER_CPU-ization of the global scaling factor,
> and its hookup with cpufreq events. (and the credit for that goes to
> Guillaume Chazarain)

To be fair, the cpufreq hooks were already there, I just did a buggy percpu
conversion and added an offset that you removed ;-)

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Michael Buesch <[EMAIL PROTECTED]> wrote:

> > i cannot see how. You can verify msleep by running something like this:
> > 
> >   while :; do time usleep 111000; done
> > 
> > you should see a steady stream of:
> > 
> >  real0m0.113s
> >  real0m0.113s
> >  real0m0.113s
> > 
> > (on an idle system). If it fluctuates, with occasional longer delays, 
> > there's some timer problem present.
> 
> Does the sleeping and timing use different time references?

yes. But the real paranoid can do this from another box:

  while :; do time ssh testbox usleep 111000; done

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Michael Buesch
On Saturday 08 December 2007 16:33:27 Ingo Molnar wrote:
> 
> * Michael Buesch <[EMAIL PROTECTED]> wrote:
> 
> > On Saturday 08 December 2007 16:13:41 Ingo Molnar wrote:
> > > 
> > > * Mark Lord <[EMAIL PROTECTED]> wrote:
> > > 
> > > > Ingo Molnar wrote:
> > > >> ...
> > > >> thanks. I do get the impression that most of this can/should wait 
> > > >> until 
> > > >> 2.6.25. The patches look quite dangerous.
> > > > ..
> > > >
> > > > I confess to not really trying hard to understand everything in this 
> > > > thread, but the implication seems to be that this bug might affect 
> > > > udelay() and possibly jiffies ?
> > > 
> > > no, it cannot affect jiffies. (jiffies was a red herring all along)
> > > 
> > > udelay() cannot be affected either - sched_clock() has no effect on 
> > > udelay(). _But_, when there are TSC problems then TSC-based udelay() 
> > > suffers too, so the phenomena may _seem_ related.
> > 
> > What about msleep()? I suspect problems in b43 because of this issue. 
> > msleep() returning too early. Is that possible with this bug?
> 
> i cannot see how. You can verify msleep by running something like this:
> 
>   while :; do time usleep 111000; done
> 
> you should see a steady stream of:
> 
>  real  0m0.113s
>  real  0m0.113s
>  real  0m0.113s
> 
> (on an idle system). If it fluctuates, with occasional longer delays, 
> there's some timer problem present.

Does the sleeping and timing use different time references?
I mean, if it uses the same reference and that reference does fluctuate
you won't see it in the result.

But anyway, Stefano, can you test this?

-- 
Greetings Michael.


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Michael Buesch <[EMAIL PROTECTED]> wrote:

> On Saturday 08 December 2007 16:13:41 Ingo Molnar wrote:
> > 
> > * Mark Lord <[EMAIL PROTECTED]> wrote:
> > 
> > > Ingo Molnar wrote:
> > >> ...
> > >> thanks. I do get the impression that most of this can/should wait until 
> > >> 2.6.25. The patches look quite dangerous.
> > > ..
> > >
> > > I confess to not really trying hard to understand everything in this 
> > > thread, but the implication seems to be that this bug might affect 
> > > udelay() and possibly jiffies ?
> > 
> > no, it cannot affect jiffies. (jiffies was a red herring all along)
> > 
> > udelay() cannot be affected either - sched_clock() has no effect on 
> > udelay(). _But_, when there are TSC problems then TSC-based udelay() 
> > suffers too, so the phenomena may _seem_ related.
> 
> What about msleep()? I suspect problems in b43 because of this issue. 
> msleep() returning too early. Is that possible with this bug?

i cannot see how. You can verify msleep by running something like this:

  while :; do time usleep 111000; done

you should see a steady stream of:

 real  0m0.113s
 real  0m0.113s
 real  0m0.113s

(on an idle system). If it fluctuates, with occasional longer delays, 
there's some timer problem present.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Michael Buesch
On Saturday 08 December 2007 16:13:41 Ingo Molnar wrote:
> 
> * Mark Lord <[EMAIL PROTECTED]> wrote:
> 
> > Ingo Molnar wrote:
> >> ...
> >> thanks. I do get the impression that most of this can/should wait until 
> >> 2.6.25. The patches look quite dangerous.
> > ..
> >
> > I confess to not really trying hard to understand everything in this 
> > thread, but the implication seems to be that this bug might affect 
> > udelay() and possibly jiffies ?
> 
> no, it cannot affect jiffies. (jiffies was a red herring all along)
> 
> udelay() cannot be affected either - sched_clock() has no effect on 
> udelay(). _But_, when there are TSC problems then TSC-based udelay() 
> suffers too, so the phenomena may _seem_ related.

What about msleep()? I suspect problems in b43 because of this issue:
msleep() returning too early. Is that possible with this bug?

-- 
Greetings Michael.


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Mark Lord <[EMAIL PROTECTED]> wrote:

> Ingo Molnar wrote:
>> ...
>> thanks. I do get the impression that most of this can/should wait until 
>> 2.6.25. The patches look quite dangerous.
> ..
>
> I confess to not really trying hard to understand everything in this 
> thread, but the implication seems to be that this bug might affect 
> udelay() and possibly jiffies ?

no, it cannot affect jiffies. (jiffies was a red herring all along)

udelay() cannot be affected either - sched_clock() has no effect on 
udelay(). _But_, when there are TSC problems then TSC-based udelay() 
suffers too, so the phenomena may _seem_ related.

> If so, then fixing it has to be a *must* for 2.6.24, as otherwise 
> we'll get all sorts of once-in-a-while odd driver bugs.. like maybe these 
> two for starters:
>
> [Bug 9492] 2.6.24:  false double-clicks from USB mouse
> [Bug 9489] 2+ wake-ups/second in 2.6.24

iirc these high rate wakeups happened on 2.6.22 too.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Mark Lord

Ingo Molnar wrote:

...
thanks. I do get the impression that most of this can/should wait until 
2.6.25. The patches look quite dangerous.

..

I confess to not really trying hard to understand everything in this thread,
but the implication seems to be that this bug might affect udelay()
and possibly jiffies ?

If so, then fixing it has to be a *must* for 2.6.24, as otherwise we'll get
all sorts of once-in-a-while odd driver bugs.. like maybe these two for starters:

[Bug 9492] 2.6.24:  false double-clicks from USB mouse
[Bug 9489] 2+ wake-ups/second in 2.6.24

Neither of which happens often enough to explain or debug,
but either of which *could* be caused by some weird jiffies thing maybe.

???


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Nick Piggin <[EMAIL PROTECTED]> wrote:

> On Saturday 08 December 2007 11:50, Nick Piggin wrote:
> 
> > I guess your patch is fairly complex but it should work
> 
> I should also add that although complex, it should have a much smaller 
> TSC delta window in which the wrong scaling factor can get applied to 
> it (I guess it is about as good as you can possibly get). So I do like 
> it :)

ok :-)

the scariest bit isn't even the scaling i think - that is a fairly 
straightforward and clean PER_CPU-ization of the global scaling factor, 
and its hookup with cpufreq events. (and the credit for that goes to 
Guillaume Chazarain) We could even split it into two to make it even 
less scary and more bisectable.

the scariest bit is the adding of cpu_clock() to kernel/printk.c so late 
in the game, and the anti-recursion code i did there. Maybe because this 
only affects CONFIG_PRINTK_TIME we could try it even for v2.6.24.

i've now completed a couple of hundred random bootups on x86 overnight, 
with the full stack of these patches applied, and no problems.

Could you have a really critical look at the two patches below and give your 
Reviewed-by line(s) if you agree with having them in v2.6.24? I'd feel a 
lot better about this that way :-) I do have the feeling that it makes 
printk printout a lot more robust in general, independently of the 
cpu_clock() change - especially with more complex consoles like 
netconsole or fbcon.

Ingo

> 
Subject: printk: make printk more robust by not allowing recursion
From: Ingo Molnar <[EMAIL PROTECTED]>

make printk more robust by allowing recursion only if there's a crash
going on. Also add recursion detection.

I've tested it with an artificially injected printk recursion - instead
of a lockup or spontaneous reboot or other crash, the output was a well
controlled:

[   41.057335] SysRq : <2>BUG: recent printk recursion!
[   41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm Full kIll 
saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks 
Unmount shoW-blocked-tasks

also do all this printk-debug logic with irqs disabled.

Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
---
 kernel/printk.c |   48 ++--
 1 file changed, 38 insertions(+), 10 deletions(-)

Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -628,30 +628,57 @@ asmlinkage int printk(const char *fmt, ...)
 /* cpu currently holding logbuf_lock */
 static volatile unsigned int printk_cpu = UINT_MAX;
 
+const char printk_recursion_bug_msg [] =
+   KERN_CRIT "BUG: recent printk recursion!\n";
+static int printk_recursion_bug;
+
 asmlinkage int vprintk(const char *fmt, va_list args)
 {
+   static int log_level_unknown = 1;
+   static char printk_buf[1024];
+
unsigned long flags;
-   int printed_len;
+   int printed_len = 0;
+   int this_cpu;
char *p;
-   static char printk_buf[1024];
-   static int log_level_unknown = 1;
 
boot_delay_msec();
 
preempt_disable();
-   if (unlikely(oops_in_progress) && printk_cpu == smp_processor_id())
-   /* If a crash is occurring during printk() on this CPU,
-* make sure we can't deadlock */
-   zap_locks();
-
/* This stops the holder of console_sem just where we want him */
raw_local_irq_save(flags);
+   this_cpu = smp_processor_id();
+
+   /*
+* Ouch, printk recursed into itself!
+*/
+   if (unlikely(printk_cpu == this_cpu)) {
+   /*
+* If a crash is occurring during printk() on this CPU,
+* then try to get the crash message out but make sure
+* we can't deadlock. Otherwise just return to avoid the
+* recursion and return - but flag the recursion so that
+* it can be printed at the next appropriate moment:
+*/
+   if (!oops_in_progress) {
+   printk_recursion_bug = 1;
+   goto out_restore_irqs;
+   }
+   zap_locks();
+   }
+
lockdep_off();
	spin_lock(&logbuf_lock);
-   printk_cpu = smp_processor_id();
+   printk_cpu = this_cpu;
 
+   if (printk_recursion_bug) {
+   printk_recursion_bug = 0;
+   strcpy(printk_buf, printk_recursion_bug_msg);
+   printed_len = sizeof(printk_recursion_bug_msg);
+   }
/* Emit the output into the temporary buffer */
-   printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
+   printed_len += vscnprintf(printk_buf + printed_len,
+ sizeof(printk_buf), fmt, args);
 
/*
 * Copy the output into log_buf.  If the caller didn't provide
@@ -744,6 +771,7 @@ asmlinkage int vprintk(const char *fmt, va_list args)

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Nick Piggin [EMAIL PROTECTED] wrote:

 On Saturday 08 December 2007 11:50, Nick Piggin wrote:
 
  I guess your patch is fairly complex but it should work
 
 I should also add that although complex, it should have a much smaller 
 TSC delta window in which the wrong scaling factor can get applied to 
 it (I guess it is about as good as you can possibly get). So I do like 
 it :)

ok :-)

the scariest bit isnt even the scaling i think - that is a fairly 
straightforward and clean PER_CPU-ization of the global scaling factor, 
and its hookup with cpufreq events. (and the credit for that goes to 
Guillaume Chazarain) We could even split it into two to make it even 
less scary and more bisectable.

the scariest bit is the adding of cpu_clock() to kernel/printk.c so late 
in the game, and the anti-recursion code i did there. Maybe because this 
only affects CONFIG_PRINTK_TIME we could try it even for v2.6.24.

i've now completed a couple of hundred random bootups on x86 overnight, 
with the full stack of these patches applied, and no problems.

Could have a really critical look at the two patches below and give your 
Reviewed-by line(s) if you agree with having them in v2.6.24? I'd feel a 
lot better about this that way :-) I do have the feeling that it makes 
printk printout a lot more robust in general, independently of the 
cpu_clock() change - especially with more complex consoles like 
netconsole or fbcon.

Ingo

 
Subject: printk: make printk more robust by not allowing recursion
From: Ingo Molnar [EMAIL PROTECTED]

make printk more robust by allowing recursion only if there's a crash
going on. Also add recursion detection.

I've tested it with an artificially injected printk recursion - instead
of a lockup or spontaneous reboot or other crash, the output was a well
controlled:

[   41.057335] SysRq : 2BUG: recent printk recursion!
[   41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm Full kIll 
saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks 
Unmount shoW-blocked-tasks

also do all this printk-debug logic with irqs disabled.

Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
---
 kernel/printk.c |   48 ++--
 1 file changed, 38 insertions(+), 10 deletions(-)

Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -628,30 +628,57 @@ asmlinkage int printk(const char *fmt, .
 /* cpu currently holding logbuf_lock */
 static volatile unsigned int printk_cpu = UINT_MAX;
 
+const char printk_recursion_bug_msg [] =
+   KERN_CRIT BUG: recent printk recursion!\n;
+static int printk_recursion_bug;
+
 asmlinkage int vprintk(const char *fmt, va_list args)
 {
+   static int log_level_unknown = 1;
+   static char printk_buf[1024];
+
unsigned long flags;
-   int printed_len;
+   int printed_len = 0;
+   int this_cpu;
char *p;
-   static char printk_buf[1024];
-   static int log_level_unknown = 1;
 
boot_delay_msec();
 
preempt_disable();
-   if (unlikely(oops_in_progress)  printk_cpu == smp_processor_id())
-   /* If a crash is occurring during printk() on this CPU,
-* make sure we can't deadlock */
-   zap_locks();
-
/* This stops the holder of console_sem just where we want him */
raw_local_irq_save(flags);
+   this_cpu = smp_processor_id();
+
+   /*
+* Ouch, printk recursed into itself!
+*/
+   if (unlikely(printk_cpu == this_cpu)) {
+   /*
+* If a crash is occurring during printk() on this CPU,
+* then try to get the crash message out but make sure
+* we can't deadlock. Otherwise just return to avoid the
+* recursion and return - but flag the recursion so that
+* it can be printed at the next appropriate moment:
+*/
+   if (!oops_in_progress) {
+   printk_recursion_bug = 1;
+   goto out_restore_irqs;
+   }
+   zap_locks();
+   }
+
lockdep_off();
spin_lock(logbuf_lock);
-   printk_cpu = smp_processor_id();
+   printk_cpu = this_cpu;
 
+   if (printk_recursion_bug) {
+   printk_recursion_bug = 0;
+   strcpy(printk_buf, printk_recursion_bug_msg);
+   printed_len = sizeof(printk_recursion_bug_msg);
+   }
/* Emit the output into the temporary buffer */
-   printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
+   printed_len += vscnprintf(printk_buf + printed_len,
+ sizeof(printk_buf), fmt, args);
 
/*
 * Copy the output into log_buf.  If the caller didn't provide
@@ -744,6 +771,7 @@ asmlinkage int vprintk(const char *fmt, 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Michael Buesch
On Saturday 08 December 2007 16:33:27 Ingo Molnar wrote:
 
 * Michael Buesch [EMAIL PROTECTED] wrote:
 
  On Saturday 08 December 2007 16:13:41 Ingo Molnar wrote:
   
   * Mark Lord [EMAIL PROTECTED] wrote:
   
Ingo Molnar wrote:
...
thanks. I do get the impression that most of this can/should wait 
until 
2.6.25. The patches look quite dangerous.
..
   
I confess to not really trying hard to understand everything in this 
thread, but the implication seems to be that this bug might affect 
udelay() and possibly jiffies ?
   
   no, it cannot affect jiffies. (jiffies was a red herring all along)
   
   udelay() cannot be affected either - sched_clock() has no effect on 
   udelay(). _But_, when there are TSC problems then tsc based udelay() 
   suffers too so the phenomenons may _seem_ related.
  
  What about msleep()? I suspect problems in b43 because of this issue. 
  msleep() returning too early. Is that possible with this bug?
 
 i cannot see how. You can verify msleep by running something like this:
 
   while :; do time usleep 111000; done
 
 you should see a steady stream of:
 
  real0m0.113s
  real0m0.113s
  real0m0.113s
 
 (on an idle system). If it fluctuates, with occasional longer delays, 
 there's some timer problem present.

Does the sleeping and timing use different time references?
I mean, if it uses the same reference and that reference does fluctuate
you won't see it in the result.

But anyway, Stefano. Can you test this?

-- 
Greetings Michael.
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Mark Lord [EMAIL PROTECTED] wrote:

 Ingo Molnar wrote:
 ...
 thanks. I do get the impression that most of this can/should wait until 
 2.6.25. The patches look quite dangerous.
 ..

 I confess to not really trying hard to understand everything in this 
 thread, but the implication seems to be that this bug might affect 
 udelay() and possibly jiffies ?

no, it cannot affect jiffies. (jiffies was a red herring all along)

udelay() cannot be affected either - sched_clock() has no effect on 
udelay(). _But_, when there are TSC problems then tsc based udelay() 
suffers too, so the phenomena may _seem_ related.

 If so, then fixing it has to be a *must* for 2.6.24, as otherwise 
 we'll get all sorts of one-in-a-while odd driver bugs... like maybe these 
 two for starters:

 [Bug 9492] 2.6.24:  false double-clicks from USB mouse
 [Bug 9489] 2+ wake-ups/second in 2.6.24

iirc these high rate wakeups happened on 2.6.22 too.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Mark Lord

Ingo Molnar wrote:

...
thanks. I do get the impression that most of this can/should wait until 
2.6.25. The patches look quite dangerous.

..

I confess to not really trying hard to understand everything in this thread,
but the implication seems to be that this bug might affect udelay()
and possibly jiffies ?

If so, then fixing it has to be a *must* for 2.6.24, as otherwise we'll get
all sorts of one-in-a-while odd driver bugs... like maybe these two for starters:

[Bug 9492] 2.6.24:  false double-clicks from USB mouse
[Bug 9489] 2+ wake-ups/second in 2.6.24

Neither of which happens often enough to explain or debug,
but either of which *could* be caused by some weird jiffies thing maybe.

???


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Michael Buesch [EMAIL PROTECTED] wrote:

 On Saturday 08 December 2007 16:13:41 Ingo Molnar wrote:
  
  * Mark Lord [EMAIL PROTECTED] wrote:
  
   Ingo Molnar wrote:
   ...
   thanks. I do get the impression that most of this can/should wait until 
   2.6.25. The patches look quite dangerous.
   ..
  
   I confess to not really trying hard to understand everything in this 
   thread, but the implication seems to be that this bug might affect 
   udelay() and possibly jiffies ?
  
  no, it cannot affect jiffies. (jiffies was a red herring all along)
  
  udelay() cannot be affected either - sched_clock() has no effect on 
  udelay(). _But_, when there are TSC problems then tsc based udelay() 
  suffers too, so the phenomena may _seem_ related.
 
 What about msleep()? I suspect problems in b43 because of this issue. 
 msleep() returning too early. Is that possible with this bug?

i cannot see how. You can verify msleep by running something like this:

  while :; do time usleep 111000; done

you should see a steady stream of:

 real    0m0.113s
 real    0m0.113s
 real    0m0.113s

(on an idle system). If it fluctuates, with occasional longer delays, 
there's some timer problem present.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Michael Buesch
On Saturday 08 December 2007 16:13:41 Ingo Molnar wrote:
 
 * Mark Lord [EMAIL PROTECTED] wrote:
 
  Ingo Molnar wrote:
  ...
  thanks. I do get the impression that most of this can/should wait until 
  2.6.25. The patches look quite dangerous.
  ..
 
  I confess to not really trying hard to understand everything in this 
  thread, but the implication seems to be that this bug might affect 
  udelay() and possibly jiffies ?
 
 no, it cannot affect jiffies. (jiffies was a red herring all along)
 
 udelay() cannot be affected either - sched_clock() has no effect on 
 udelay(). _But_, when there are TSC problems then tsc based udelay() 
 suffers too, so the phenomena may _seem_ related.

What about msleep()? I suspect problems in b43 because of this issue.
msleep() returning too early. Is that possible with this bug?

-- 
Greetings Michael.


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Ingo Molnar

* Michael Buesch [EMAIL PROTECTED] wrote:

  i cannot see how. You can verify msleep by running something like this:
  
while :; do time usleep 111000; done
  
  you should see a steady stream of:
  
   real    0m0.113s
   real    0m0.113s
   real    0m0.113s
  
  (on an idle system). If it fluctuates, with occasional longer delays, 
  there's some timer problem present.
 
 Do the sleeping and the timing use different time references?

yes. But the real paranoid can do this from another box:

  while :; do time ssh testbox usleep 111000; done

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-08 Thread Guillaume Chazarain
On Dec 8, 2007 9:52 AM, Ingo Molnar [EMAIL PROTECTED] wrote:

 the scariest bit isn't even the scaling i think - that is a fairly
 straightforward and clean PER_CPU-ization of the global scaling factor,
 and its hookup with cpufreq events. (and the credit for that goes to
 Guillaume Chazarain)

To be fair, the cpufreq hooks were already there; I just did a buggy percpu
conversion and added an offset that you removed ;-)

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Saturday 08 December 2007 11:50, Nick Piggin wrote:

> I guess your patch is fairly complex but it should work

I should also add that although complex, it should have a
much smaller TSC delta window in which the wrong scaling
factor can get applied to it (I guess it is about as good
as you can possibly get). So I do like it :)


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Saturday 08 December 2007 03:48, Nick Piggin wrote:
> On Friday 07 December 2007 22:17, Ingo Molnar wrote:
> > * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > > ah, printk_clock() still uses sched_clock(), not jiffies. So it's
> > > > not the jiffies counter that goes back and forth, it's sched_clock()
> > > > - so this is a printk timestamps anomaly, not related to jiffies. I
> > > > thought we have fixed this bug in the printk code already:
> > > > sched_clock() is a 'raw' interface that should not be used directly
> > > > - the proper interface is cpu_clock(cpu).
> > >
> > > It's a single CPU box, so sched_clock() jumping would still be
> > > problematic, no?
> >
> > sched_clock() is an internal API - the non-jumping API to be used by
> > printk is cpu_clock().
>
> You know why sched_clock jumps when the TSC frequency changes, right?

Ah, hmm, I don't know why I wrote that :)

I guess your patch is fairly complex but it should work if the plan
is to convert all sched_clock users to use cpu_clock eg like lockdep
as well.

So it looks good to me, thanks for fixing this.


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Guillaume Chazarain <[EMAIL PROTECTED]> wrote:

> On Fri, 7 Dec 2007 15:54:18 +0100,
> Ingo Molnar <[EMAIL PROTECTED]> wrote:
> 
> > This is a version that 
> > is supposed to fix all known aspects of TSC and frequency-change 
> > weirdnesses.
> 
> Tested it with frequency changes, the clock is as smooth as I like it
> :-)

ok, great :-)

> The only remaining sched_clock user in need of conversion seems to be 
> lockdep.

yeah - for CONFIG_LOCKSTAT - but that needs to be done even more 
carefully, due to rq->lock being lockdep-checked. We can perhaps try a 
lock-less cpu_clock() version - other CPUs are not supposed to update 
rq->clock.

> Great work.

thanks. I do get the impression that most of this can/should wait until 
2.6.25. The patches look quite dangerous.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
On Fri, 7 Dec 2007 15:54:18 +0100,
Ingo Molnar <[EMAIL PROTECTED]> wrote:

> This is a version that 
> is supposed to fix all known aspects of TSC and frequency-change 
> weirdnesses.

Tested it with frequency changes, the clock is as smooth as I like
it :-)

The only remaining sched_clock user in need of conversion seems to be
lockdep.

Great work.

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Friday 07 December 2007 22:17, Ingo Molnar wrote:
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > ah, printk_clock() still uses sched_clock(), not jiffies. So it's
> > > not the jiffies counter that goes back and forth, it's sched_clock()
> > > - so this is a printk timestamps anomaly, not related to jiffies. I
> > > thought we have fixed this bug in the printk code already:
> > > sched_clock() is a 'raw' interface that should not be used directly
> > > - the proper interface is cpu_clock(cpu).
> >
> > It's a single CPU box, so sched_clock() jumping would still be
> > problematic, no?
>
> sched_clock() is an internal API - the non-jumping API to be used by
> printk is cpu_clock().

You know why sched_clock jumps when the TSC frequency changes, right?



Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar <[EMAIL PROTECTED]> wrote:

> third update. the cpufreq callbacks are not quite OK yet.

fourth update - the cpufreq callbacks are back. This is a version that 
is supposed to fix all known aspects of TSC and frequency-change 
weirdnesses.

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = &ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/kernel/tsc_32.c
===
--- linux.orig/arch/x86/kernel/tsc_32.c
+++ linux/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -78,15 +79,35 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] "math is hard, lets go shopping!"
  */
-unsigned long cyc2ns_scale __read_mostly;
 
-#define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
+DEFINE_PER_CPU(unsigned long, cyc2ns);
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   unsigned long flags, prev_scale, *scale;
+   unsigned long long tsc_now, ns_now;
+
+   local_irq_save(flags);
+   sched_clock_idle_sleep_event();
+
+   scale = &per_cpu(cyc2ns, cpu);
+
+   rdtscll(tsc_now);
+   ns_now = __cycles_2_ns(tsc_now);
+
+   prev_scale = *scale;
+   if (cpu_khz)
+   *scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+
+   /*
+* Start smoothly with the new frequency:
+*/
+   sched_clock_idle_wakeup_event(0);
+   local_irq_restore(flags);
 }
 
 /*
@@ -239,7 +260,9 @@ time_cpufreq_notifier(struct notifier_bl
ref_freq, freq->new);
if (!(freq->flags & CPUFREQ_CONST_LOOPS)) {
tsc_khz = cpu_khz;
-   set_cyc2ns_scale(cpu_khz);
+   preempt_disable();
+   set_cyc2ns_scale(cpu_khz, smp_processor_id());
+   preempt_enable();
/*
 * TSC based sched_clock turns
  

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar <[EMAIL PROTECTED]> wrote:

> > Stefano, could you try this on top of a recent-ish Linus tree - does 
> > this resolve all issues? (without introducing new ones ;-)
> 
> updated version attached below.

third update. the cpufreq callbacks are not quite OK yet.

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = &ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/lib/delay_32.c
===
--- linux.orig/arch/x86/lib/delay_32.c
+++ linux/arch/x86/lib/delay_32.c
@@ -38,17 +38,21 @@ static void delay_loop(unsigned long loo
:"0" (loops));
 }
 
-/* TSC based delay: */
+/* cpu_clock() [TSC] based delay: */
 static void delay_tsc(unsigned long loops)
 {
-   unsigned long bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are per-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop();
-   rdtscl(now);
-   } while ((now-bclock) < loops);
preempt_enable();
 }
 
Index: linux/arch/x86/lib/delay_64.c
===
--- linux.orig/arch/x86/lib/delay_64.c
+++ linux/arch/x86/lib/delay_64.c
@@ -26,19 +26,28 @@ int read_current_timer(unsigned long *ti
return 0;
 }
 
-void __delay(unsigned long loops)
+/* cpu_clock() [TSC] based delay: */
+static void delay_tsc(unsigned long loops)
 {
-   unsigned bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are pre-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop(); 
-   rdtscl(now);
-   }
-   while ((now-bclock) < loops);
preempt_enable();
 }
+
+void __delay(unsigned long loops)
+{
+   delay_tsc(loops);
+}
 EXPORT_SYMBOL(__delay);
 
 inline void __const_udelay(unsigned 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar <[EMAIL PROTECTED]> wrote:

> ok, here's a rollup of 11 patches that relate to this. I hoped we 
> could wait with this for 2.6.25, but it seems more urgent as per 
> Stefano's testing, as udelay() and drivers are affected as well.
> 
> Stefano, could you try this on top of a recent-ish Linus tree - does 
> this resolve all issues? (without introducing new ones ;-)

updated version attached below.

> +DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;

__read_mostly is not a good idea for PER_CPU variables.

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = &ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/kernel/tsc_32.c
===
--- linux.orig/arch/x86/kernel/tsc_32.c
+++ linux/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -78,15 +79,32 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] "math is hard, lets go shopping!"
  */
-unsigned long cyc2ns_scale __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns);
+
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
+
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 /*
Index: linux/arch/x86/kernel/tsc_64.c
===
--- linux.orig/arch/x86/kernel/tsc_64.c
+++ linux/arch/x86/kernel/tsc_64.c
@@ -10,6 +10,7 @@
 
 #include 
 #include 
+#include 
 
 static int notsc __initdata = 0;
 
@@ -18,16 +19,25 @@ EXPORT_SYMBOL(cpu_khz);
 unsigned int tsc_khz;
 EXPORT_SYMBOL(tsc_khz);
 
-static unsigned int 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

ok, here's a rollup of 11 patches that relate to this. I hoped we could 
wait with this for 2.6.25, but it seems more urgent as per Stefano's 
testing, as udelay() and drivers are affected as well.

Stefano, could you try this on top of a recent-ish Linus tree - does this 
resolve all issues? (without introducing new ones ;-)

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = &ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/kernel/tsc_32.c
===
--- linux.orig/arch/x86/kernel/tsc_32.c
+++ linux/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -78,15 +79,32 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] "math is hard, lets go shopping!"
  */
-unsigned long cyc2ns_scale __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
+
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
+
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 /*
Index: linux/arch/x86/kernel/tsc_64.c
===
--- linux.orig/arch/x86/kernel/tsc_64.c
+++ linux/arch/x86/kernel/tsc_64.c
@@ -10,6 +10,7 @@
 
 #include 
 #include 
+#include 
 
 static int notsc __initdata = 0;
 
@@ -18,16 +19,25 @@ EXPORT_SYMBOL(cpu_khz);
 unsigned int tsc_khz;
 EXPORT_SYMBOL(tsc_khz);
 
-static unsigned int cyc2ns_scale __read_mostly;
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
 
-static inline void set_cyc2ns_scale(unsigned long khz)
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

>> It's a single CPU box, so sched_clock() jumping would still be 
>> problematic, no?
>
> I guess so. Definitely, it didn't look like a printk issue. Drivers 
> don't read logs, usually. But they got confused anyway (it seems that 
> udelay's get scaled or fail or somesuch - I can't test it right now, 
> will provide more feedback in a few hours).

no, i think it's just another aspect of the broken TSC on that hardware. 
Does the patch below improve things?

Ingo

--->
Subject: x86: cpu_clock() based udelay
From: Ingo Molnar <[EMAIL PROTECTED]>

use cpu_clock() for TSC based udelay - it's more reliable than raw
TSC based delay loops.

Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
---
 arch/x86/lib/delay_32.c |   20 
 arch/x86/lib/delay_64.c |   27 ++-
 2 files changed, 30 insertions(+), 17 deletions(-)

Index: linux/arch/x86/lib/delay_32.c
===
--- linux.orig/arch/x86/lib/delay_32.c
+++ linux/arch/x86/lib/delay_32.c
@@ -38,17 +38,21 @@ static void delay_loop(unsigned long loo
:"0" (loops));
 }
 
-/* TSC based delay: */
+/* cpu_clock() [TSC] based delay: */
 static void delay_tsc(unsigned long loops)
 {
-   unsigned long bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are per-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop();
-   rdtscl(now);
-   } while ((now-bclock) < loops);
preempt_enable();
 }
 
Index: linux/arch/x86/lib/delay_64.c
===
--- linux.orig/arch/x86/lib/delay_64.c
+++ linux/arch/x86/lib/delay_64.c
@@ -26,19 +26,28 @@ int read_current_timer(unsigned long *ti
return 0;
 }
 
-void __delay(unsigned long loops)
+/* cpu_clock() [TSC] based delay: */
+static void delay_tsc(unsigned long loops)
 {
-   unsigned bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are pre-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop(); 
-   rdtscl(now);
-   }
-   while ((now-bclock) < loops);
preempt_enable();
 }
+
+void __delay(unsigned long loops)
+{
+   delay_tsc(loops);
+}
 EXPORT_SYMBOL(__delay);
 
 inline void __const_udelay(unsigned long xloops)


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
On Dec 7, 2007 12:18 PM, Guillaume Chazarain <[EMAIL PROTECTED]> wrote:
> Any pointer to it?

Nevermind, I found it ... in this same thread :-(

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Nick Piggin <[EMAIL PROTECTED]> wrote:

> My patch should fix the worst cpufreq sched_clock jumping issue I 
> think.

but it degrades the precision of sched_clock() and has other problems as 
well. cpu_clock() is the right interface to use for such things.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread stefano . brivio

Quoting Nick Piggin <[EMAIL PROTECTED]>:


> On Friday 07 December 2007 19:45, Ingo Molnar wrote:
> > ah, printk_clock() still uses sched_clock(), not jiffies. So it's not
> > the jiffies counter that goes back and forth, it's sched_clock() - so
> > this is a printk timestamps anomaly, not related to jiffies. I thought
> > we have fixed this bug in the printk code already: sched_clock() is a
> > 'raw' interface that should not be used directly - the proper interface
> > is cpu_clock(cpu).
>
> It's a single CPU box, so sched_clock() jumping would still be
> problematic, no?

I guess so. Definitely, it didn't look like a printk issue. Drivers
don't read logs, usually. But they got confused anyway (it seems that
udelay's get scaled or fail or somesuch - I can't test it right now,
will provide more feedback in a few hours).



--
Ciao
Stefano





Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
On Dec 7, 2007 12:13 PM, Nick Piggin <[EMAIL PROTECTED]> wrote:
> My patch should fix the worst cpufreq sched_clock jumping issue
> I think.

Any pointer to it?

Thanks.

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Nick Piggin <[EMAIL PROTECTED]> wrote:

> > ah, printk_clock() still uses sched_clock(), not jiffies. So it's 
> > not the jiffies counter that goes back and forth, it's sched_clock() 
> > - so this is a printk timestamps anomaly, not related to jiffies. I 
> > thought we have fixed this bug in the printk code already: 
> > sched_clock() is a 'raw' interface that should not be used directly 
> > - the proper interface is cpu_clock(cpu).
> 
> It's a single CPU box, so sched_clock() jumping would still be 
> problematic, no?

sched_clock() is an internal API - the non-jumping API to be used by 
printk is cpu_clock().

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Friday 07 December 2007 19:45, Ingo Molnar wrote:
> * Stefano Brivio <[EMAIL PROTECTED]> wrote:
> > This patch fixes a regression introduced by:
> >
> > commit bb29ab26863c022743143f27956cc0ca362f258c
> > Author: Ingo Molnar <[EMAIL PROTECTED]>
> > Date:   Mon Jul 9 18:51:59 2007 +0200
> >
> > This caused the jiffies counter to leap back and forth on cpufreq
> > changes on my x86 box. I'd say that we can't always assume that TSC
> > does "small errors" only, when marked unstable. On cpufreq changes
> > these errors can be huge.
>
> ah, printk_clock() still uses sched_clock(), not jiffies. So it's not
> the jiffies counter that goes back and forth, it's sched_clock() - so
> this is a printk timestamps anomaly, not related to jiffies. I thought
> we have fixed this bug in the printk code already: sched_clock() is a
> 'raw' interface that should not be used directly - the proper interface
> is cpu_clock(cpu).

It's a single CPU box, so sched_clock() jumping would still be
problematic, no?

My patch should fix the worst cpufreq sched_clock jumping issue
I think.


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Andrew Morton <[EMAIL PROTECTED]> wrote:

> > > A bit risky - it's quite an expansion of code which no longer can 
> > > call printk.
> > > 
> > > You might want to take that WARN_ON out of __update_rq_clock() ;)
> > 
> > hm, dont we already detect printk recursions and turn them into a 
> > silent return instead of a hang/crash?
> 
> We'll pop the locks and will proceed to do the nested printk.  So 
> __update_rq_clock() will need rather a lot of stack ;)

yeah. That behavior of printk is rather fragile. I think my previous 
patch should handle all such incidents.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Andrew Morton
On Fri, 7 Dec 2007 11:40:13 +0100 Ingo Molnar <[EMAIL PROTECTED]> wrote:

> 
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
> 
> > > - t = printk_clock();
> > > + t = cpu_clock(printk_cpu);
> > >   nanosec_rem = do_div(t, 1000000000);
> > >   tlen = sprintf(tbuf,
> > >   "<%c>[%5lu.%06lu] ",
> > 
> > A bit risky - it's quite an expansion of code which no longer can call 
> > printk.
> > 
> > You might want to take that WARN_ON out of __update_rq_clock() ;)
> 
> hm, dont we already detect printk recursions and turn them into a silent 
> return instead of a hang/crash?
> 

We'll pop the locks and will proceed to do the nested printk.  So
__update_rq_clock() will need rather a lot of stack ;)


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar <[EMAIL PROTECTED]> wrote:

> > > - t = printk_clock();
> > > + t = cpu_clock(printk_cpu);
> > >   nanosec_rem = do_div(t, 1000000000);
> > >   tlen = sprintf(tbuf,
> > >   "<%c>[%5lu.%06lu] ",
> > 
> > A bit risky - it's quite an expansion of code which no longer can call 
> > printk.
> > 
> > You might want to take that WARN_ON out of __update_rq_clock() ;)
> 
> hm, dont we already detect printk recursions and turn them into a 
> silent return instead of a hang/crash?

ugh, we dont. So i guess the (tested) patch below is highly needed. (If 
such incidents become frequent then we could save the stackdump of the 
recursion via save_stack_trace() too - but i wanted to keep the initial 
code simple.)

Ingo

>
Subject: printk: make printk more robust by not allowing recursion
From: Ingo Molnar <[EMAIL PROTECTED]>

make printk more robust by allowing recursion only if there's a crash
going on. Also add recursion detection.

I've tested it with an artificially injected printk recursion - instead
of a lockup or spontaneous reboot or other crash, the output was a well
controlled:

[   41.057335] SysRq : <2>BUG: recent printk recursion!
[   41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm Full kIll 
saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks 
Unmount shoW-blocked-tasks

also do all this printk logic with irqs disabled.

Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
---
 kernel/printk.c |   52 ++--
 1 file changed, 42 insertions(+), 10 deletions(-)

Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -623,30 +623,57 @@ asmlinkage int printk(const char *fmt, .
 /* cpu currently holding logbuf_lock */
 static volatile unsigned int printk_cpu = UINT_MAX;
 
+const char printk_recursion_bug_msg [] =
+   KERN_CRIT "BUG: recent printk recursion!\n";
+static int printk_recursion_bug;
+
 asmlinkage int vprintk(const char *fmt, va_list args)
 {
+   static int log_level_unknown = 1;
+   static char printk_buf[1024];
+
unsigned long flags;
-   int printed_len;
+   int printed_len = 0;
+   int this_cpu;
char *p;
-   static char printk_buf[1024];
-   static int log_level_unknown = 1;
 
boot_delay_msec();
 
preempt_disable();
-   if (unlikely(oops_in_progress) && printk_cpu == smp_processor_id())
-   /* If a crash is occurring during printk() on this CPU,
-* make sure we can't deadlock */
-   zap_locks();
-
/* This stops the holder of console_sem just where we want him */
raw_local_irq_save(flags);
+   this_cpu = smp_processor_id();
+
+   /*
+* Ouch, printk recursed into itself!
+*/
+   if (unlikely(printk_cpu == this_cpu)) {
+   /*
+* If a crash is occurring during printk() on this CPU,
+* then try to get the crash message out but make sure
+* we can't deadlock. Otherwise just return to avoid the
+* recursion and return - but flag the recursion so that
+* it can be printed at the next appropriate moment:
+*/
+   if (!oops_in_progress) {
+   printk_recursion_bug = 1;
+   goto out_restore_irqs;
+   }
+   zap_locks();
+   }
+
lockdep_off();
	spin_lock(&logbuf_lock);
-   printk_cpu = smp_processor_id();
+   printk_cpu = this_cpu;
 
+   if (printk_recursion_bug) {
+   printk_recursion_bug = 0;
+   strcpy(printk_buf, printk_recursion_bug_msg);
+   printed_len = sizeof(printk_recursion_bug_msg);
+   }
/* Emit the output into the temporary buffer */
-   printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
+   printed_len += vscnprintf(printk_buf + printed_len,
+ sizeof(printk_buf) - printed_len, fmt, args);
 
/*
 * Copy the output into log_buf.  If the caller didn't provide
@@ -675,6 +702,10 @@ asmlinkage int vprintk(const char *fmt, 
loglev_char = default_message_loglevel
+ '0';
}
+   if (panic_timeout) {
+   panic_timeout = 0;
+   printk("recurse!\n");
+   }
t = cpu_clock(printk_cpu);
	nanosec_rem = do_div(t, 1000000000);
tlen = sprintf(tbuf,
@@ -739,6 +770,7 
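[Editor's note: the guard logic in the patch above boils down to "if the logger re-enters itself, record the fact and bail out; emit the notice at the front of the next top-level call". A tiny single-threaded user-space sketch of that pattern follows - all names are ours, and the real vprintk() additionally handles the oops_in_progress case and per-CPU lock ownership:]

```c
#include <assert.h>
#include <string.h>

/* One flag marks "we are inside the logger"; another remembers that a
 * recursive call was suppressed so the next top-level call can report it. */
static int in_logger;
static int recursion_flagged;
static char last_line[256];

static void log_line(const char *msg)
{
	if (in_logger) {
		/* Recursed into the logger: don't re-enter, just remember. */
		recursion_flagged = 1;
		return;
	}
	in_logger = 1;

	last_line[0] = '\0';
	if (recursion_flagged) {
		/* Report the earlier suppressed recursion first. */
		recursion_flagged = 0;
		strcat(last_line, "BUG: recent log recursion! ");
	}
	strncat(last_line, msg, sizeof(last_line) - strlen(last_line) - 1);

	in_logger = 0;
}
```

[A nested call simply returns, and the dropped event surfaces as a one-line notice in front of the next message - the behavior shown in the SysRq test output quoted above.]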

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Andrew Morton <[EMAIL PROTECTED]> wrote:

> > -   t = printk_clock();
> > +   t = cpu_clock(printk_cpu);
> > nanosec_rem = do_div(t, 1000000000);
> > tlen = sprintf(tbuf,
> > "<%c>[%5lu.%06lu] ",
> 
> A bit risky - it's quite an expansion of code which no longer can call 
> printk.
> 
> You might want to take that WARN_ON out of __update_rq_clock() ;)

hm, dont we already detect printk recursions and turn them into a silent 
return instead of a hang/crash?

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Andi Kleen
Thomas Gleixner <[EMAIL PROTECTED]> writes:
>
> Hmrpf. sched_clock() is used for the time stamp of the printks. We
> need to find some better solution other than killing off the tsc
> access completely.

Doing it properly requires pretty much most of my old sched-clock ff patch.
Complicated and not pretty, but ..
Unfortunately that version still had some jumps on cpufreq, but they
are fixable there.

-Andi


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Andrew Morton
On Fri, 7 Dec 2007 09:45:59 +0100 Ingo Molnar <[EMAIL PROTECTED]> wrote:

> 
> * Stefano Brivio <[EMAIL PROTECTED]> wrote:
> 
> > This patch fixes a regression introduced by:
> > 
> > commit bb29ab26863c022743143f27956cc0ca362f258c
> > Author: Ingo Molnar <[EMAIL PROTECTED]>
> > Date:   Mon Jul 9 18:51:59 2007 +0200
> > 
> > This caused the jiffies counter to leap back and forth on cpufreq 
> > changes on my x86 box. I'd say that we can't always assume that TSC 
> > does "small errors" only, when marked unstable. On cpufreq changes 
> > these errors can be huge.
> 
> ah, printk_clock() still uses sched_clock(), not jiffies. So it's not 
> the jiffies counter that goes back and forth, it's sched_clock() - so 
> this is a printk timestamps anomaly, not related to jiffies. I thought 
> we have fixed this bug in the printk code already: sched_clock() is a 
> 'raw' interface that should not be used directly - the proper interface 
> is cpu_clock(cpu). Does the patch below help?
> 
>   Ingo
> 
> --->
> Subject: sched: fix CONFIG_PRINT_TIME's reliance on sched_clock()
> From: Ingo Molnar <[EMAIL PROTECTED]>
> 
> Stefano Brivio reported weird printk timestamp behavior during
> CPU frequency changes:
> 
>   http://bugzilla.kernel.org/show_bug.cgi?id=9475
> 
> fix CONFIG_PRINT_TIME's reliance on sched_clock() and use cpu_clock()
> instead.
> 
> Reported-and-bisected-by: Stefano Brivio <[EMAIL PROTECTED]>
> Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
> ---
>  kernel/printk.c |2 +-
>  kernel/sched.c  |7 ++-
>  2 files changed, 7 insertions(+), 2 deletions(-)
> 
> Index: linux/kernel/printk.c
> ===
> --- linux.orig/kernel/printk.c
> +++ linux/kernel/printk.c
> @@ -680,7 +680,7 @@ asmlinkage int vprintk(const char *fmt, 
>   loglev_char = default_message_loglevel
>   + '0';
>   }
> - t = printk_clock();
> + t = cpu_clock(printk_cpu);
>   nanosec_rem = do_div(t, 1000000000);
>   tlen = sprintf(tbuf,
>   "<%c>[%5lu.%06lu] ",

A bit risky - it's quite an expansion of code which no longer can call printk.

You might want to take that WARN_ON out of __update_rq_clock() ;)



Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Guillaume Chazarain <[EMAIL PROTECTED]> wrote:

> I'll clean it up and resend it later. As I don't have the necessary 
> knowledge to do the tsc_{32,64}.c unification, should I copy paste 
> common functions into tsc_32.c and tsc_64.c to ease later unification 
> or should I start a common .c file?

note that there are a couple of existing patches in this area. One is 
the fix below. There's also older frequency-scaling TSC patches - i'll 
try to dig them out.

Ingo

>
Subject: x86: idle wakeup event in the HLT loop
From: Ingo Molnar <[EMAIL PROTECTED]>

do a proper idle-wakeup event on HLT as well - some CPUs stop the TSC
in HLT too, not just when going through the ACPI methods.

Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
---
 arch/x86/kernel/process_32.c |   15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
On Fri, 7 Dec 2007 09:51:21 +0100,
Ingo Molnar <[EMAIL PROTECTED]> wrote:

> yeah, we can do something like this in 2.6.25 - this will improve the 
> quality of sched_clock().

Thanks a lot for your interest!

I'll clean it up and resend it later. As I don't have the necessary
knowledge to do the tsc_{32,64}.c unification, should I copy paste
common functions into tsc_32.c and tsc_64.c to ease later unification
or should I start a common .c file?

Thanks again for showing interest.

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Guillaume Chazarain <[EMAIL PROTECTED]> wrote:

> > Something like http://lkml.org/lkml/2007/3/16/291 that would need 
> > some refresh?
> 
> And here is a refreshed one just for testing with 2.6-git. The 64 bit 
> part is a shamelessly untested copy/paste as I cannot test it.

yeah, we can do something like this in 2.6.25 - this will improve the 
quality of sched_clock(). The other patch i sent should solve the 
problem for 2.6.24 - printk should not be using raw sched_clock() calls. 
(as the name says it's for the scheduler's internal use.) I've also 
queued up the patch below - it removes the now unnecessary printk clock 
code.

Ingo

->
Subject: sched: remove printk_clock()
From: Ingo Molnar <[EMAIL PROTECTED]>

printk_clock() is obsolete - it has been replaced with cpu_clock().

Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
---
 arch/arm/kernel/time.c  |   11 ---
 arch/ia64/kernel/time.c |   27 ---
 kernel/printk.c |5 -
 3 files changed, 43 deletions(-)

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = &ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -573,11 +573,6 @@ static int __init printk_time_setup(char
 
 __setup("time", printk_time_setup);
 
-__attribute__((weak)) unsigned long long printk_clock(void)
-{
-   return sched_clock();
-}
-
 /* Check if we have any console registered that can be called early in boot. */
 static int have_callable_console(void)
 {


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Stefano Brivio <[EMAIL PROTECTED]> wrote:

> This patch fixes a regression introduced by:
> 
> commit bb29ab26863c022743143f27956cc0ca362f258c
> Author: Ingo Molnar <[EMAIL PROTECTED]>
> Date:   Mon Jul 9 18:51:59 2007 +0200
> 
> This caused the jiffies counter to leap back and forth on cpufreq 
> changes on my x86 box. I'd say that we can't always assume that TSC 
> does "small errors" only, when marked unstable. On cpufreq changes 
> these errors can be huge.

ah, printk_clock() still uses sched_clock(), not jiffies. So it's not 
the jiffies counter that goes back and forth, it's sched_clock() - so 
this is a printk timestamps anomaly, not related to jiffies. I thought 
we have fixed this bug in the printk code already: sched_clock() is a 
'raw' interface that should not be used directly - the proper interface 
is cpu_clock(cpu). Does the patch below help?

Ingo

--->
Subject: sched: fix CONFIG_PRINT_TIME's reliance on sched_clock()
From: Ingo Molnar <[EMAIL PROTECTED]>

Stefano Brivio reported weird printk timestamp behavior during
CPU frequency changes:

  http://bugzilla.kernel.org/show_bug.cgi?id=9475

fix CONFIG_PRINT_TIME's reliance on sched_clock() and use cpu_clock()
instead.

Reported-and-bisected-by: Stefano Brivio <[EMAIL PROTECTED]>
Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
---
 kernel/printk.c |2 +-
 kernel/sched.c  |7 ++-
 2 files changed, 7 insertions(+), 2 deletions(-)

Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -680,7 +680,7 @@ asmlinkage int vprintk(const char *fmt, 
loglev_char = default_message_loglevel
+ '0';
}
-   t = printk_clock();
+   t = cpu_clock(printk_cpu);
	nanosec_rem = do_div(t, 1000000000);
tlen = sprintf(tbuf,
"<%c>[%5lu.%06lu] ",
Index: linux/kernel/sched.c
===
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -599,7 +599,12 @@ unsigned long long cpu_clock(int cpu)
 
local_irq_save(flags);
rq = cpu_rq(cpu);
-   update_rq_clock(rq);
+   /*
+* Only call sched_clock() if the scheduler has already been
+* initialized (some code might call cpu_clock() very early):
+*/
+   if (rq->idle)
+   update_rq_clock(rq);
now = rq->clock;
local_irq_restore(flags);
 
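[Editor's note: the `rq->idle` test in the sched.c hunk above is a lazy-initialization guard - cpu_clock() can be called before the scheduler is set up, and in that window it must not touch uninitialized state. A hypothetical user-space reduction of the pattern, with our own names:]

```c
#include <assert.h>
#include <stdint.h>

struct simple_clock {
	int initialized;	/* plays the role of rq->idle being set */
	uint64_t value;		/* plays the role of rq->clock */
};

/* Stand-in time source for the demo below. */
static uint64_t fake_source(void)
{
	return 42;
}

/* Like the patched cpu_clock(): only refresh from the backing source
 * once initialization has happened; early callers just get the last
 * (initial) value instead of triggering an update on unready state. */
static uint64_t read_clock(struct simple_clock *c, uint64_t (*source)(void))
{
	if (c->initialized)
		c->value = source();
	return c->value;
}
```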


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
"Guillaume Chazarain" <[EMAIL PROTECTED]> wrote:

> On Dec 7, 2007 6:51 AM, Thomas Gleixner <[EMAIL PROTECTED]> wrote:
> > Hmrpf. sched_clock() is used for the time stamp of the printks. We
> > need to find some better solution other than killing off the tsc
> > access completely.
> 
> Something like http://lkml.org/lkml/2007/3/16/291 that would need some 
> refresh?

And here is a refreshed one just for testing with 2.6-git. The 64 bit
part is a shamelessly untested copy/paste as I cannot test it.

diff --git a/arch/x86/kernel/tsc_32.c b/arch/x86/kernel/tsc_32.c
index 9ebc0da..d561b2f 100644
--- a/arch/x86/kernel/tsc_32.c
+++ b/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include <linux/jiffies.h>
 #include <linux/init.h>
 #include <linux/dmi.h>
+#include <linux/percpu.h>
 
 #include <asm/delay.h>
 #include <asm/tsc.h>
@@ -78,15 +79,32 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] "math is hard, lets go shopping!"
  */
-unsigned long cyc2ns_scale __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
+
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
+
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 /*
diff --git a/arch/x86/kernel/tsc_64.c b/arch/x86/kernel/tsc_64.c
index 9c70af4..93e7a06 100644
--- a/arch/x86/kernel/tsc_64.c
+++ b/arch/x86/kernel/tsc_64.c
@@ -10,6 +10,7 @@
 
 #include 
 #include 
+#include 
 
 static int notsc __initdata = 0;
 
@@ -18,16 +19,25 @@ EXPORT_SYMBOL(cpu_khz);
 unsigned int tsc_khz;
 EXPORT_SYMBOL(tsc_khz);
 
-static unsigned int cyc2ns_scale __read_mostly;
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
 
-static inline void set_cyc2ns_scale(unsigned long khz)
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (NSEC_PER_MSEC << NS_SCALE) / khz;
-}
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
 
-static unsigned long long cycles_2_ns(unsigned long long cyc)
-{
-   return (cyc * cyc2ns_scale) >> NS_SCALE;
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 unsigned long long sched_clock(void)
diff --git a/include/asm-x86/timer.h b/include/asm-x86/timer.h
index 0db7e99..ff4f2a3 100644
--- a/include/asm-x86/timer.h
+++ b/include/asm-x86/timer.h
@@ -2,6 +2,7 @@
 #define _ASMi386_TIMER_H
 #include <linux/init.h>
 #include <linux/pm.h>
+#include <linux/percpu.h>
 
 #define TICK_SIZE (tick_nsec / 1000)
 
@@ -16,7 +17,7 @@ extern int recalibrate_cpu_khz(void);
 #define calculate_cpu_khz() native_calculate_cpu_khz()
 #endif
 
-/* Accellerators for sched_clock()
+/* Accelerators for sched_clock()
  * convert from cycles(64bits) => nanoseconds (64bits)
  *  basic equation:
  * ns = cycles / (freq / ns_per_sec)
@@ -31,20 +32,44 @@ extern int recalibrate_cpu_khz(void);
  * And since SC is a constant power of two, we can convert the div
  *  into a shift.
  *
- *  We can use khz divisor instead of mhz to keep a better percision, since
+ *  We can use khz divisor instead of mhz to keep a better precision, since
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] "math is hard, lets go shopping!"
  */
-extern unsigned long cyc2ns_scale __read_mostly;
+
+struct cyc2ns_params {
+   unsigned long scale;
+   unsigned long long offset;
+};
+
+DECLARE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
+static inline unsigned long long __cycles_2_ns(struct cyc2ns_params *params,
+  unsigned long long cyc)
 {
-   return (cyc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR;
+   return ((cyc * params->scale) >> CYC2NS_SCALE_FACTOR) + params->offset;
 }
 
+static inline unsigned long long cycles_2_ns(unsigned long long cyc)
+{
+   struct cyc2ns_params 
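[Editor's note: to make the fixed-point math in this patch concrete: `scale` is `(1000000 << 10) / cpu_khz`, so `cycles_2_ns()` is one multiply and one shift, and on a frequency change the offset is adjusted so the clock reads the same nanosecond value immediately before and after the rescale. A self-contained sketch of that invariant - our names; the real code keeps this state per CPU and manipulates it with interrupts disabled:]

```c
#include <assert.h>
#include <stdint.h>

#define SCALE_FACTOR 10			/* 2^10, as in the patch */

struct cyc2ns_state {
	uint64_t scale;			/* (1000000 << 10) / cpu_khz */
	uint64_t offset;		/* keeps the clock continuous */
};

/* ns = cyc * (1000000 / khz), done in fixed point, plus the offset. */
static uint64_t cycles_2_ns(const struct cyc2ns_state *s, uint64_t cyc)
{
	return ((cyc * s->scale) >> SCALE_FACTOR) + s->offset;
}

/* On a cpufreq transition: recompute the scale for the new khz value and
 * fold the difference into the offset so the clock does not jump. */
static void set_scale(struct cyc2ns_state *s, uint64_t cpu_khz, uint64_t tsc_now)
{
	uint64_t ns_now = cycles_2_ns(s, tsc_now);

	s->scale = (1000000ull << SCALE_FACTOR) / cpu_khz;
	s->offset += ns_now - cycles_2_ns(s, tsc_now);
}
```

[With the offset term, halving the frequency mid-run changes only the slope of the clock, not its current value - the "ns += offset to avoid sched_clock jumps with cpufreq" comment in code form.]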


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar <[EMAIL PROTECTED]> wrote:

> third update. the cpufreq callbacks are not quite OK yet.

fourth update - the cpufreq callbacks are back. This is a version that 
is supposed fix all known aspects of TSC and frequency-change 
weirdnesses.

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = &ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/kernel/tsc_32.c
===
--- linux.orig/arch/x86/kernel/tsc_32.c
+++ linux/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include <linux/jiffies.h>
 #include <linux/init.h>
 #include <linux/dmi.h>
+#include <linux/percpu.h>
 
 #include <asm/delay.h>
 #include <asm/tsc.h>
@@ -78,15 +79,35 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] math is hard, lets go shopping!
  */
-unsigned long cyc2ns_scale __read_mostly;
 
-#define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
+DEFINE_PER_CPU(unsigned long, cyc2ns);
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+static void set_cyc2ns_scale(unsigned long cpu_khz, int cpu)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   unsigned long flags, prev_scale, *scale;
+   unsigned long long tsc_now, ns_now;
+
+   local_irq_save(flags);
+   sched_clock_idle_sleep_event();
+
+   scale = &per_cpu(cyc2ns, cpu);
+
+   rdtscll(tsc_now);
+   ns_now = __cycles_2_ns(tsc_now);
+
+   prev_scale = *scale;
+   if (cpu_khz)
+   *scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+
+   /*
+* Start smoothly with the new frequency:
+*/
+   sched_clock_idle_wakeup_event(0);
+   local_irq_restore(flags);
 }
 
 /*
@@ -239,7 +260,9 @@ time_cpufreq_notifier(struct notifier_bl
ref_freq, freq->new);
if (!(freq->flags & CPUFREQ_CONST_LOOPS)) {
tsc_khz = cpu_khz;
-   set_cyc2ns_scale(cpu_khz);
+   preempt_disable();
+   set_cyc2ns_scale(cpu_khz, smp_processor_id());
+   preempt_enable();
/*

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
On Dec 7, 2007 12:18 PM, Guillaume Chazarain [EMAIL PROTECTED] wrote:
> Any pointer to it?

Nevermind, I found it ... in this same thread :-(

-- 
Guillaume
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar [EMAIL PROTECTED] wrote:

> > Stefano, could you try this on top of a recent-ish Linus tree - does 
> > this resolve all issues? (without introducing new ones ;-)
> 
> updated version attached below.

third update. the cpufreq callbacks are not quite OK yet.

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/lib/delay_32.c
===
--- linux.orig/arch/x86/lib/delay_32.c
+++ linux/arch/x86/lib/delay_32.c
@@ -38,17 +38,21 @@ static void delay_loop(unsigned long loo
:"0" (loops));
 }
 
-/* TSC based delay: */
+/* cpu_clock() [TSC] based delay: */
 static void delay_tsc(unsigned long loops)
 {
-   unsigned long bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are per-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop();
-   rdtscl(now);
-   } while ((now-bclock) < loops);
preempt_enable();
 }
 
Index: linux/arch/x86/lib/delay_64.c
===
--- linux.orig/arch/x86/lib/delay_64.c
+++ linux/arch/x86/lib/delay_64.c
@@ -26,19 +26,28 @@ int read_current_timer(unsigned long *ti
return 0;
 }
 
-void __delay(unsigned long loops)
+/* cpu_clock() [TSC] based delay: */
+static void delay_tsc(unsigned long loops)
 {
-   unsigned bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are pre-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop(); 
-   rdtscl(now);
-   }
while ((now-bclock) < loops);
preempt_enable();
 }
+
+void __delay(unsigned long loops)
+{
+   delay_tsc(loops);
+}
 EXPORT_SYMBOL(__delay);
 
 inline void __const_udelay(unsigned long 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Nick Piggin [EMAIL PROTECTED] wrote:

> My patch should fix the worst cpufreq sched_clock jumping issue I 
> think.

but it degrades the precision of sched_clock() and has other problems as 
well. cpu_clock() is the right interface to use for such things.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Friday 07 December 2007 19:45, Ingo Molnar wrote:
> * Stefano Brivio [EMAIL PROTECTED] wrote:
> > This patch fixes a regression introduced by:
> > 
> > commit bb29ab26863c022743143f27956cc0ca362f258c
> > Author: Ingo Molnar [EMAIL PROTECTED]
> > Date:   Mon Jul 9 18:51:59 2007 +0200
> > 
> > This caused the jiffies counter to leap back and forth on cpufreq
> > changes on my x86 box. I'd say that we can't always assume that TSC
> > does small errors only, when marked unstable. On cpufreq changes
> > these errors can be huge.
> 
> ah, printk_clock() still uses sched_clock(), not jiffies. So it's not
> the jiffies counter that goes back and forth, it's sched_clock() - so
> this is a printk timestamps anomaly, not related to jiffies. I thought
> we have fixed this bug in the printk code already: sched_clock() is a
> 'raw' interface that should not be used directly - the proper interface
> is cpu_clock(cpu).

It's a single CPU box, so sched_clock() jumping would still be
problematic, no?

My patch should fix the worst cpufreq sched_clock jumping issue
I think.


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Guillaume Chazarain [EMAIL PROTECTED] wrote:

> I'll clean it up and resend it later. As I don't have the necessary 
> knowledge to do the tsc_{32,64}.c unification, should I copy paste 
> common functions into tsc_32.c and tsc_64.c to ease later unification 
> or should I start a common .c file?

note that there are a couple of existing patches in this area. One is 
the fix below. There's also older frequency-scaling TSC patches - i'll 
try to dig them out.

Ingo


Subject: x86: idle wakeup event in the HLT loop
From: Ingo Molnar [EMAIL PROTECTED]

do a proper idle-wakeup event on HLT as well - some CPUs stop the TSC
in HLT too, not just when going through the ACPI methods.

Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
---
 arch/x86/kernel/process_32.c |   15 ---
 1 file changed, 12 insertions(+), 3 deletions(-)

Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Stefano Brivio [EMAIL PROTECTED] wrote:

> This patch fixes a regression introduced by:
> 
> commit bb29ab26863c022743143f27956cc0ca362f258c
> Author: Ingo Molnar [EMAIL PROTECTED]
> Date:   Mon Jul 9 18:51:59 2007 +0200
> 
> This caused the jiffies counter to leap back and forth on cpufreq 
> changes on my x86 box. I'd say that we can't always assume that TSC 
> does small errors only, when marked unstable. On cpufreq changes 
> these errors can be huge.

ah, printk_clock() still uses sched_clock(), not jiffies. So it's not 
the jiffies counter that goes back and forth, it's sched_clock() - so 
this is a printk timestamps anomaly, not related to jiffies. I thought 
we have fixed this bug in the printk code already: sched_clock() is a 
'raw' interface that should not be used directly - the proper interface 
is cpu_clock(cpu). Does the patch below help?

Ingo

---
Subject: sched: fix CONFIG_PRINT_TIME's reliance on sched_clock()
From: Ingo Molnar [EMAIL PROTECTED]

Stefano Brivio reported weird printk timestamp behavior during
CPU frequency changes:

  http://bugzilla.kernel.org/show_bug.cgi?id=9475

fix CONFIG_PRINT_TIME's reliance on sched_clock() and use cpu_clock()
instead.

Reported-and-bisected-by: Stefano Brivio [EMAIL PROTECTED]
Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
---
 kernel/printk.c |2 +-
 kernel/sched.c  |7 ++-
 2 files changed, 7 insertions(+), 2 deletions(-)

Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -680,7 +680,7 @@ asmlinkage int vprintk(const char *fmt, 
loglev_char = default_message_loglevel
+ '0';
}
-   t = printk_clock();
+   t = cpu_clock(printk_cpu);
nanosec_rem = do_div(t, 1000000000);
tlen = sprintf(tbuf,
"<%c>[%5lu.%06lu] ",
Index: linux/kernel/sched.c
===
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -599,7 +599,12 @@ unsigned long long cpu_clock(int cpu)
 
local_irq_save(flags);
rq = cpu_rq(cpu);
-   update_rq_clock(rq);
+   /*
+* Only call sched_clock() if the scheduler has already been
+* initialized (some code might call cpu_clock() very early):
+*/
+   if (rq->idle)
+   update_rq_clock(rq);
now = rq->clock;
local_irq_restore(flags);
 


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
On Dec 7, 2007 12:13 PM, Nick Piggin [EMAIL PROTECTED] wrote:
> My patch should fix the worst cpufreq sched_clock jumping issue
> I think.

Any pointer to it?

Thanks.

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Nick Piggin [EMAIL PROTECTED] wrote:

> > ah, printk_clock() still uses sched_clock(), not jiffies. So it's 
> > not the jiffies counter that goes back and forth, it's sched_clock() 
> > - so this is a printk timestamps anomaly, not related to jiffies. I 
> > thought we have fixed this bug in the printk code already: 
> > sched_clock() is a 'raw' interface that should not be used directly 
> > - the proper interface is cpu_clock(cpu).
> 
> It's a single CPU box, so sched_clock() jumping would still be 
> problematic, no?

sched_clock() is an internal API - the non-jumping API to be used by 
printk is cpu_clock().

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

ok, here's a rollup of 11 patches that relate to this. I hoped we could 
wait with this for 2.6.25, but it seems more urgent as per Stefano's 
testing, as udelay() and drivers are affected as well.

Stefano, could you try this on top of a recent-ish Linus tree - does this 
resolve all issues? (without introducing new ones ;-)

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/kernel/tsc_32.c
===
--- linux.orig/arch/x86/kernel/tsc_32.c
+++ linux/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include <linux/jiffies.h>
 #include <linux/init.h>
 #include <linux/dmi.h>
+#include <linux/percpu.h>
 
 #include <asm/delay.h>
 #include <asm/tsc.h>
@@ -78,15 +79,32 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] math is hard, lets go shopping!
  */
-unsigned long cyc2ns_scale __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
+
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
+
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 /*
Index: linux/arch/x86/kernel/tsc_64.c
===
--- linux.orig/arch/x86/kernel/tsc_64.c
+++ linux/arch/x86/kernel/tsc_64.c
@@ -10,6 +10,7 @@
 
 #include <asm/hpet.h>
 #include <asm/timex.h>
+#include <asm/timer.h>
 
 static int notsc __initdata = 0;
 
@@ -18,16 +19,25 @@ EXPORT_SYMBOL(cpu_khz);
 unsigned int tsc_khz;
 EXPORT_SYMBOL(tsc_khz);
 
-static unsigned int cyc2ns_scale __read_mostly;
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
 
-static 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar [EMAIL PROTECTED] wrote:

> > > -   t = printk_clock();
> > > +   t = cpu_clock(printk_cpu);
> > > nanosec_rem = do_div(t, 1000000000);
> > > tlen = sprintf(tbuf,
> > > "<%c>[%5lu.%06lu] ",
> > 
> > A bit risky - it's quite an expansion of code which no longer can call 
> > printk.
> > 
> > You might want to take that WARN_ON out of __update_rq_clock() ;)
> 
> hm, dont we already detect printk recursions and turn them into a 
> silent return instead of a hang/crash?

ugh, we dont. So i guess the (tested) patch below is highly needed. (If 
such incidents become frequent then we could save the stackdump of the 
recursion via save_stack_trace() too - but i wanted to keep the initial 
code simple.)

Ingo


Subject: printk: make printk more robust by not allowing recursion
From: Ingo Molnar [EMAIL PROTECTED]

make printk more robust by allowing recursion only if there's a crash
going on. Also add recursion detection.

I've tested it with an artificially injected printk recursion - instead
of a lockup or spontaneous reboot or other crash, the output was a well
controlled:

[   41.057335] SysRq : <2>BUG: recent printk recursion!
[   41.057335] loglevel0-8 reBoot Crashdump show-all-locks(D) tErm Full kIll 
saK showMem Nice powerOff showPc show-all-timers(Q) unRaw Sync showTasks 
Unmount shoW-blocked-tasks

also do all this printk logic with irqs disabled.

Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
---
 kernel/printk.c |   52 ++--
 1 file changed, 42 insertions(+), 10 deletions(-)

Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -623,30 +623,57 @@ asmlinkage int printk(const char *fmt, .
 /* cpu currently holding logbuf_lock */
 static volatile unsigned int printk_cpu = UINT_MAX;
 
+const char printk_recursion_bug_msg [] =
+   KERN_CRIT "BUG: recent printk recursion!\n";
+static int printk_recursion_bug;
+
 asmlinkage int vprintk(const char *fmt, va_list args)
 {
+   static int log_level_unknown = 1;
+   static char printk_buf[1024];
+
unsigned long flags;
-   int printed_len;
+   int printed_len = 0;
+   int this_cpu;
char *p;
-   static char printk_buf[1024];
-   static int log_level_unknown = 1;
 
boot_delay_msec();
 
preempt_disable();
-   if (unlikely(oops_in_progress) && printk_cpu == smp_processor_id())
-   /* If a crash is occurring during printk() on this CPU,
-* make sure we can't deadlock */
-   zap_locks();
-
/* This stops the holder of console_sem just where we want him */
raw_local_irq_save(flags);
+   this_cpu = smp_processor_id();
+
+   /*
+* Ouch, printk recursed into itself!
+*/
+   if (unlikely(printk_cpu == this_cpu)) {
+   /*
+* If a crash is occurring during printk() on this CPU,
+* then try to get the crash message out but make sure
+* we can't deadlock. Otherwise just return to avoid the
+* recursion and return - but flag the recursion so that
+* it can be printed at the next appropriate moment:
+*/
+   if (!oops_in_progress) {
+   printk_recursion_bug = 1;
+   goto out_restore_irqs;
+   }
+   zap_locks();
+   }
+
lockdep_off();
spin_lock(logbuf_lock);
-   printk_cpu = smp_processor_id();
+   printk_cpu = this_cpu;
 
+   if (printk_recursion_bug) {
+   printk_recursion_bug = 0;
+   strcpy(printk_buf, printk_recursion_bug_msg);
+   printed_len = sizeof(printk_recursion_bug_msg);
+   }
/* Emit the output into the temporary buffer */
-   printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
+   printed_len += vscnprintf(printk_buf + printed_len,
+ sizeof(printk_buf), fmt, args);
 
/*
 * Copy the output into log_buf.  If the caller didn't provide
@@ -675,6 +702,10 @@ asmlinkage int vprintk(const char *fmt, 
loglev_char = default_message_loglevel
+ '0';
}
+   if (panic_timeout) {
+   panic_timeout = 0;
+   printk("recurse!\n");
+   }
t = cpu_clock(printk_cpu);
nanosec_rem = do_div(t, 1000000000);
tlen = sprintf(tbuf,
@@ -739,6 +770,7 @@ asmlinkage int vprintk(const char 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Andrew Morton [EMAIL PROTECTED] wrote:

> > -   t = printk_clock();
> > +   t = cpu_clock(printk_cpu);
> > nanosec_rem = do_div(t, 1000000000);
> > tlen = sprintf(tbuf,
> > "<%c>[%5lu.%06lu] ",
> 
> A bit risky - it's quite an expansion of code which no longer can call 
> printk.
> 
> You might want to take that WARN_ON out of __update_rq_clock() ;)

hm, dont we already detect printk recursions and turn them into a silent 
return instead of a hang/crash?

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
Guillaume Chazarain [EMAIL PROTECTED] wrote:

> On Dec 7, 2007 6:51 AM, Thomas Gleixner [EMAIL PROTECTED] wrote:
> > Hmrpf. sched_clock() is used for the time stamp of the printks. We
> > need to find some better solution other than killing off the tsc
> > access completely.
> 
> Something like http://lkml.org/lkml/2007/3/16/291 that would need some 
> refresh?

And here is a refreshed one just for testing with 2.6-git. The 64 bit
part is a shamelessly untested copy/paste as I cannot test it.

diff --git a/arch/x86/kernel/tsc_32.c b/arch/x86/kernel/tsc_32.c
index 9ebc0da..d561b2f 100644
--- a/arch/x86/kernel/tsc_32.c
+++ b/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include <linux/jiffies.h>
 #include <linux/init.h>
 #include <linux/dmi.h>
+#include <linux/percpu.h>
 
 #include <asm/delay.h>
 #include <asm/tsc.h>
@@ -78,15 +79,32 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] math is hard, lets go shopping!
  */
-unsigned long cyc2ns_scale __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
+
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
+
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 /*
diff --git a/arch/x86/kernel/tsc_64.c b/arch/x86/kernel/tsc_64.c
index 9c70af4..93e7a06 100644
--- a/arch/x86/kernel/tsc_64.c
+++ b/arch/x86/kernel/tsc_64.c
@@ -10,6 +10,7 @@
 
 #include <asm/hpet.h>
 #include <asm/timex.h>
+#include <asm/timer.h>
 
 static int notsc __initdata = 0;
 
@@ -18,16 +19,25 @@ EXPORT_SYMBOL(cpu_khz);
 unsigned int tsc_khz;
 EXPORT_SYMBOL(tsc_khz);
 
-static unsigned int cyc2ns_scale __read_mostly;
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
 
-static inline void set_cyc2ns_scale(unsigned long khz)
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (NSEC_PER_MSEC << NS_SCALE) / khz;
-}
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
 
-static unsigned long long cycles_2_ns(unsigned long long cyc)
-{
-   return (cyc * cyc2ns_scale) >> NS_SCALE;
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 unsigned long long sched_clock(void)
diff --git a/include/asm-x86/timer.h b/include/asm-x86/timer.h
index 0db7e99..ff4f2a3 100644
--- a/include/asm-x86/timer.h
+++ b/include/asm-x86/timer.h
@@ -2,6 +2,7 @@
 #define _ASMi386_TIMER_H
 #include <linux/init.h>
 #include <linux/pm.h>
+#include <linux/percpu.h>
 
 #define TICK_SIZE (tick_nsec / 1000)
 
@@ -16,7 +17,7 @@ extern int recalibrate_cpu_khz(void);
 #define calculate_cpu_khz() native_calculate_cpu_khz()
 #endif
 
-/* Accellerators for sched_clock()
+/* Accelerators for sched_clock()
 * convert from cycles(64bits) => nanoseconds (64bits)
  *  basic equation:
  * ns = cycles / (freq / ns_per_sec)
@@ -31,20 +32,44 @@ extern int recalibrate_cpu_khz(void);
  * And since SC is a constant power of two, we can convert the div
  *  into a shift.
  *
- *  We can use khz divisor instead of mhz to keep a better percision, since
+ *  We can use khz divisor instead of mhz to keep a better precision, since
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] math is hard, lets go shopping!
  */
-extern unsigned long cyc2ns_scale __read_mostly;
+
+struct cyc2ns_params {
+   unsigned long scale;
+   unsigned long long offset;
+};
+
+DECLARE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline unsigned long long cycles_2_ns(unsigned long long cyc)
+static inline unsigned long long __cycles_2_ns(struct cyc2ns_params *params,
+  unsigned long long cyc)
 {
-   return (cyc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR;
+   return ((cyc * params->scale) >> CYC2NS_SCALE_FACTOR) + params->offset;
 }
 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Guillaume Chazarain [EMAIL PROTECTED] wrote:

> > Something like http://lkml.org/lkml/2007/3/16/291 that would need 
> > some refresh?
> 
> And here is a refreshed one just for testing with 2.6-git. The 64 bit 
> part is a shamelessly untested copy/paste as I cannot test it.

yeah, we can do something like this in 2.6.25 - this will improve the 
quality of sched_clock(). The other patch i sent should solve the 
problem for 2.6.24 - printk should not be using raw sched_clock() calls. 
(as the name says it's for the scheduler's internal use.) I've also 
queued up the patch below - it removes the now unnecessary printk clock 
code.

Ingo

-
Subject: sched: remove printk_clock()
From: Ingo Molnar [EMAIL PROTECTED]

printk_clock() is obsolete - it has been replaced with cpu_clock().

Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
---
 arch/arm/kernel/time.c  |   11 ---
 arch/ia64/kernel/time.c |   27 ---
 kernel/printk.c |5 -
 3 files changed, 43 deletions(-)

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/kernel/printk.c
===
--- linux.orig/kernel/printk.c
+++ linux/kernel/printk.c
@@ -573,11 +573,6 @@ static int __init printk_time_setup(char
 
__setup("time", printk_time_setup);
 
-__attribute__((weak)) unsigned long long printk_clock(void)
-{
-   return sched_clock();
-}
-
 /* Check if we have any console registered that can be called early in boot. */
 static int have_callable_console(void)
 {


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread stefano . brivio

Quoting Nick Piggin [EMAIL PROTECTED]:


> On Friday 07 December 2007 19:45, Ingo Molnar wrote:
> > ah, printk_clock() still uses sched_clock(), not jiffies. So it's not
> > the jiffies counter that goes back and forth, it's sched_clock() - so
> > this is a printk timestamps anomaly, not related to jiffies. I thought
> > we have fixed this bug in the printk code already: sched_clock() is a
> > 'raw' interface that should not be used directly - the proper interface
> > is cpu_clock(cpu).
> 
> It's a single CPU box, so sched_clock() jumping would still be
> problematic, no?


I guess so. Definitely, it didn't look like a printk issue. Drivers  
don't read logs, usually. But they got confused anyway (it seems that  
udelay's get scaled or fail or somesuch - I can't test it right now,  
will provide more feedback in a few hours).



--
Ciao
Stefano





Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Andrew Morton
On Fri, 7 Dec 2007 11:40:13 +0100 Ingo Molnar <[EMAIL PROTECTED]> wrote:

> 
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
> 
> > > -  t = printk_clock();
> > > +  t = cpu_clock(printk_cpu);
> > >    nanosec_rem = do_div(t, 1000000000);
> > >    tlen = sprintf(tbuf,
> > >            "%c[%5lu.%06lu] ",
> > 
> > A bit risky - it's quite an expansion of code which no longer can call 
> > printk.
> > 
> > You might want to take that WARN_ON out of __update_rq_clock() ;)
> 
> hm, dont we already detect printk recursions and turn them into a silent 
> return instead of a hang/crash?
> 

We'll pop the locks and will proceed to do the nested printk.  So
__update_rq_clock() will need rather a lot of stack ;)
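Andrew's point - that the timestamp path may itself printk and recurse - is easy to demonstrate in miniature. Below is a hypothetical user-space sketch of the silent-return style of protection discussed in this thread; `guarded_log()`, `noisy_clock()` and the counters are invented names, not the actual kernel code:

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical sketch (not kernel code): silent-return recursion
 * protection for a log path whose timestamp callback may itself log. */
static int in_logger;
static int emitted, dropped;

static void guarded_log(const char *msg);

/* Stand-in for cpu_clock(): imagine a WARN_ON in here fires and logs. */
static unsigned long long noisy_clock(void)
{
    guarded_log("clock warning");
    return 0;
}

static void guarded_log(const char *msg)
{
    if (in_logger) {            /* recursion: drop instead of deadlock */
        dropped++;
        return;
    }
    in_logger = 1;
    printf("[%llu] %s\n", noisy_clock(), msg);  /* timestamp, then text */
    emitted++;
    in_logger = 0;
}
```

The nested call from noisy_clock() hits the in_logger flag and is counted as dropped, rather than re-taking locks and eating stack as described above.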


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
Le Fri, 7 Dec 2007 09:51:21 +0100,
Ingo Molnar [EMAIL PROTECTED] a écrit :

 yeah, we can do something like this in 2.6.25 - this will improve the 
 quality of sched_clock().

Thanks a lot for your interest!

I'll clean it up and resend it later. As I don't have the necessary
knowledge to do the tsc_{32,64}.c unification, should I copy-paste
common functions into tsc_32.c and tsc_64.c to ease later unification,
or should I start a common .c file?

Thanks again for showing interest.

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Ingo Molnar <[EMAIL PROTECTED]> wrote:

> ok, here's a rollup of 11 patches that relate to this. I hoped we 
> could wait with this for 2.6.25, but it seems more urgent as per 
> Stefano's testing, as udelay() and drivers are affected as well.
> 
> Stefano, could you try this ontop of a recent-ish Linus tree - does 
> this resolve all issues? (without introducing new ones ;-)

updated version attached below.

> +DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns) __read_mostly;

__read_mostly is not a good idea for PER_CPU variables.

Ingo

Index: linux/arch/arm/kernel/time.c
===
--- linux.orig/arch/arm/kernel/time.c
+++ linux/arch/arm/kernel/time.c
@@ -79,17 +79,6 @@ static unsigned long dummy_gettimeoffset
 }
 #endif
 
-/*
- * An implementation of printk_clock() independent from
- * sched_clock().  This avoids non-bootable kernels when
- * printk_clock is enabled.
- */
-unsigned long long printk_clock(void)
-{
-   return (unsigned long long)(jiffies - INITIAL_JIFFIES) *
-   (1000000000 / HZ);
-}
-
 static unsigned long next_rtc_update;
 
 /*
Index: linux/arch/ia64/kernel/time.c
===
--- linux.orig/arch/ia64/kernel/time.c
+++ linux/arch/ia64/kernel/time.c
@@ -344,33 +344,6 @@ udelay (unsigned long usecs)
 }
 EXPORT_SYMBOL(udelay);
 
-static unsigned long long ia64_itc_printk_clock(void)
-{
-   if (ia64_get_kr(IA64_KR_PER_CPU_DATA))
-   return sched_clock();
-   return 0;
-}
-
-static unsigned long long ia64_default_printk_clock(void)
-{
-   return (unsigned long long)(jiffies_64 - INITIAL_JIFFIES) *
-   (1000000000/HZ);
-}
-
-unsigned long long (*ia64_printk_clock)(void) = ia64_default_printk_clock;
-
-unsigned long long printk_clock(void)
-{
-   return ia64_printk_clock();
-}
-
-void __init
-ia64_setup_printk_clock(void)
-{
-   if (!(sal_platform_features & IA64_SAL_PLATFORM_FEATURE_ITC_DRIFT))
-   ia64_printk_clock = ia64_itc_printk_clock;
-}
-
 /* IA64 doesn't cache the timezone */
 void update_vsyscall_tz(void)
 {
Index: linux/arch/x86/kernel/process_32.c
===
--- linux.orig/arch/x86/kernel/process_32.c
+++ linux/arch/x86/kernel/process_32.c
@@ -113,10 +113,19 @@ void default_idle(void)
smp_mb();
 
local_irq_disable();
-   if (!need_resched())
+   if (!need_resched()) {
+   ktime_t t0, t1;
+   u64 t0n, t1n;
+
+   t0 = ktime_get();
+   t0n = ktime_to_ns(t0);
safe_halt();/* enables interrupts racelessly */
-   else
-   local_irq_enable();
+   local_irq_disable();
+   t1 = ktime_get();
+   t1n = ktime_to_ns(t1);
+   sched_clock_idle_wakeup_event(t1n - t0n);
+   }
+   local_irq_enable();
current_thread_info()->status |= TS_POLLING;
} else {
/* loop is done by the caller */
Index: linux/arch/x86/kernel/tsc_32.c
===
--- linux.orig/arch/x86/kernel/tsc_32.c
+++ linux/arch/x86/kernel/tsc_32.c
@@ -5,6 +5,7 @@
 #include <linux/jiffies.h>
 #include <linux/init.h>
 #include <linux/dmi.h>
+#include <linux/percpu.h>
 
 #include <asm/delay.h>
 #include <asm/tsc.h>
@@ -78,15 +79,32 @@ EXPORT_SYMBOL_GPL(check_tsc_unstable);
  *  cyc2ns_scale is limited to 10^6 * 2^10, which fits in 32 bits.
  *  ([EMAIL PROTECTED])
  *
+ *  ns += offset to avoid sched_clock jumps with cpufreq
+ *
  * [EMAIL PROTECTED] math is hard, lets go shopping!
  */
-unsigned long cyc2ns_scale __read_mostly;
 
 #define CYC2NS_SCALE_FACTOR 10 /* 2^10, carefully chosen */
 
-static inline void set_cyc2ns_scale(unsigned long cpu_khz)
+DEFINE_PER_CPU(struct cyc2ns_params, cyc2ns);
+
+static void set_cyc2ns_scale(unsigned long cpu_khz)
 {
-   cyc2ns_scale = (1000000 << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   struct cyc2ns_params *params;
+   unsigned long flags;
+   unsigned long long tsc_now, ns_now;
+
+   rdtscll(tsc_now);
+   params = &get_cpu_var(cyc2ns);
+
+   local_irq_save(flags);
+   ns_now = __cycles_2_ns(params, tsc_now);
+
+   params->scale = (NSEC_PER_MSEC << CYC2NS_SCALE_FACTOR)/cpu_khz;
+   params->offset += ns_now - __cycles_2_ns(params, tsc_now);
+   local_irq_restore(flags);
+
+   put_cpu_var(cyc2ns);
 }
 
 /*
Index: linux/arch/x86/kernel/tsc_64.c
===
--- linux.orig/arch/x86/kernel/tsc_64.c
+++ linux/arch/x86/kernel/tsc_64.c
@@ -10,6 +10,7 @@
 
 #include <asm/hpet.h>
 #include <asm/timex.h>
+#include <asm/timer.h>
 
 static int notsc __initdata = 0;
 
@@ -18,16 +19,25 @@ 

Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Andrew Morton <[EMAIL PROTECTED]> wrote:

> > > A bit risky - it's quite an expansion of code which no longer can 
> > > call printk.
> > > 
> > > You might want to take that WARN_ON out of __update_rq_clock() ;)
> > 
> > hm, dont we already detect printk recursions and turn them into a 
> > silent return instead of a hang/crash?
> 
> We'll pop the locks and will proceed to do the nested printk.  So 
> __update_rq_clock() will need rather a lot of stack ;)

yeah. That behavior of printk is rather fragile. I think my previous 
patch should handle all such incidents.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Saturday 08 December 2007 11:50, Nick Piggin wrote:

> I guess your patch is fairly complex but it should work

I should also add that although complex, it should have a
much smaller TSC delta window in which the wrong scaling
factor can get applied to it (I guess it is about as good
as you can possibly get). So I do like it :)


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Guillaume Chazarain
Le Fri, 7 Dec 2007 15:54:18 +0100,
Ingo Molnar <[EMAIL PROTECTED]> a écrit :

> This is a version that 
> is supposed to fix all known aspects of TSC and frequency-change 
> weirdnesses.

Tested it with frequency changes, the clock is as smooth as I like
it :-)

The only remaining sched_clock user in need of conversion seems to be
lockdep.

Great work.

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* Guillaume Chazarain <[EMAIL PROTECTED]> wrote:

> Le Fri, 7 Dec 2007 15:54:18 +0100,
> Ingo Molnar <[EMAIL PROTECTED]> a écrit :
> 
> > This is a version that 
> > is supposed to fix all known aspects of TSC and frequency-change 
> > weirdnesses.
> 
> Tested it with frequency changes, the clock is as smooth as I like it
> :-)

ok, great :-)

> The only remaining sched_clock user in need of conversion seems to be 
> lockdep.

yeah - for CONFIG_LOCKSTAT - but that needs to be done even more 
carefully, due to rq->lock being lockdep-checked. We can perhaps try a 
lock-less cpu_clock() version - other CPUs are not supposed to update 
rq->clock.

> Great work.

thanks. I do get the impression that most of this can/should wait until 
2.6.25. The patches look quite dangerous.

Ingo


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Ingo Molnar

* [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> > It's a single CPU box, so sched_clock() jumping would still be 
> > problematic, no?

> I guess so. Definitely, it didn't look like a printk issue. Drivers 
> don't read logs, usually. But they got confused anyway (it seems that 
> udelay's get scaled or fail or somesuch - I can't test it right now, 
> will provide more feedback in a few hours).

no, i think it's just another aspect of the broken TSC on that hardware. 
Does the patch below improve things?

Ingo

---
Subject: x86: cpu_clock() based udelay
From: Ingo Molnar [EMAIL PROTECTED]

use cpu_clock() for TSC based udelay - it's more reliable than raw
TSC based delay loops.

Signed-off-by: Ingo Molnar [EMAIL PROTECTED]
---
 arch/x86/lib/delay_32.c |   20 
 arch/x86/lib/delay_64.c |   27 ++-
 2 files changed, 30 insertions(+), 17 deletions(-)

Index: linux/arch/x86/lib/delay_32.c
===
--- linux.orig/arch/x86/lib/delay_32.c
+++ linux/arch/x86/lib/delay_32.c
@@ -38,17 +38,21 @@ static void delay_loop(unsigned long loo
:"0" (loops));
 }
 
-/* TSC based delay: */
+/* cpu_clock() [TSC] based delay: */
 static void delay_tsc(unsigned long loops)
 {
-   unsigned long bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are per-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop();
-   rdtscl(now);
-   } while ((now-bclock) < loops);
preempt_enable();
 }
 
Index: linux/arch/x86/lib/delay_64.c
===
--- linux.orig/arch/x86/lib/delay_64.c
+++ linux/arch/x86/lib/delay_64.c
@@ -26,19 +26,28 @@ int read_current_timer(unsigned long *ti
return 0;
 }
 
-void __delay(unsigned long loops)
+/* cpu_clock() [TSC] based delay: */
+static void delay_tsc(unsigned long loops)
 {
-   unsigned bclock, now;
+   unsigned long long start, stop, now;
+   int this_cpu;
+
+   preempt_disable();
+
+   this_cpu = smp_processor_id();
+   start = now = cpu_clock(this_cpu);
+   stop = start + loops;
+
+   while ((long long)(stop - now) > 0)
+   now = cpu_clock(this_cpu);
 
-   preempt_disable();  /* TSC's are pre-cpu */
-   rdtscl(bclock);
-   do {
-   rep_nop(); 
-   rdtscl(now);
-   }
-   while ((now-bclock) < loops);
preempt_enable();
 }
+
+void __delay(unsigned long loops)
+{
+   delay_tsc(loops);
+}
 EXPORT_SYMBOL(__delay);
 
 inline void __const_udelay(unsigned long xloops)


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Friday 07 December 2007 22:17, Ingo Molnar wrote:
 * Nick Piggin [EMAIL PROTECTED] wrote:
   ah, printk_clock() still uses sched_clock(), not jiffies. So it's
   not the jiffies counter that goes back and forth, it's sched_clock()
   - so this is a printk timestamps anomaly, not related to jiffies. I
   thought we have fixed this bug in the printk code already:
   sched_clock() is a 'raw' interface that should not be used directly
   - the proper interface is cpu_clock(cpu).
 
  It's a single CPU box, so sched_clock() jumping would still be
  problematic, no?

 sched_clock() is an internal API - the non-jumping API to be used by
 printk is cpu_clock().

You know why sched_clock jumps when the TSC frequency changes, right?



Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-07 Thread Nick Piggin
On Saturday 08 December 2007 03:48, Nick Piggin wrote:
> On Friday 07 December 2007 22:17, Ingo Molnar wrote:
> > * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > > ah, printk_clock() still uses sched_clock(), not jiffies. So it's
> > > > not the jiffies counter that goes back and forth, it's sched_clock()
> > > > - so this is a printk timestamps anomaly, not related to jiffies. I
> > > > thought we have fixed this bug in the printk code already:
> > > > sched_clock() is a 'raw' interface that should not be used directly
> > > > - the proper interface is cpu_clock(cpu).
> > >
> > > It's a single CPU box, so sched_clock() jumping would still be
> > > problematic, no?
> >
> > sched_clock() is an internal API - the non-jumping API to be used by
> > printk is cpu_clock().
>
> You know why sched_clock jumps when the TSC frequency changes, right?

Ah, hmm, I don't know why I wrote that :)

I guess your patch is fairly complex but it should work if the plan
is to convert all sched_clock users to use cpu_clock eg like lockdep
as well.

So it looks good to me, thanks for fixing this.


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-06 Thread Guillaume Chazarain
On Dec 7, 2007 6:51 AM, Thomas Gleixner <[EMAIL PROTECTED]> wrote:
> Hmrpf. sched_clock() is used for the time stamp of the printks. We
> need to find some better solution other than killing off the tsc
> access completely.

Something like http://lkml.org/lkml/2007/3/16/291 that would need some refresh?

-- 
Guillaume


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-06 Thread Thomas Gleixner
On Fri, 7 Dec 2007, Stefano Brivio wrote:

> This patch fixes a regression introduced by:
> 
> commit bb29ab26863c022743143f27956cc0ca362f258c
> Author: Ingo Molnar <[EMAIL PROTECTED]>
> Date:   Mon Jul 9 18:51:59 2007 +0200
> 
> This caused the jiffies counter to leap back and forth on cpufreq changes
> on my x86 box. I'd say that we can't always assume that TSC does "small
> errors" only, when marked unstable. On cpufreq changes these errors can be
> huge.

Hmrpf. sched_clock() is used for the time stamp of the printks. We
need to find some better solution other than killing off the tsc
access completely.

Ingo ???

Thanks,

 tglx
 
> The original bug report can be found here:
> http://bugzilla.kernel.org/show_bug.cgi?id=9475
> 
> 
> Signed-off-by: Stefano Brivio <[EMAIL PROTECTED]>
> 
> ---
> 
> diff --git a/arch/x86/kernel/tsc_32.c b/arch/x86/kernel/tsc_32.c
> index 9ebc0da..d29cd9c 100644
> --- a/arch/x86/kernel/tsc_32.c
> +++ b/arch/x86/kernel/tsc_32.c
> @@ -98,13 +98,8 @@ unsigned long long native_sched_clock(void)
>  
>   /*
>* Fall back to jiffies if there's no TSC available:
> -  * ( But note that we still use it if the TSC is marked
> -  *   unstable. We do this because unlike Time Of Day,
> -  *   the scheduler clock tolerates small errors and it's
> -  *   very important for it to be as fast as the platform
> -  *   can achive it. )
>*/
> - if (unlikely(!tsc_enabled && !tsc_unstable))
> + if (unlikely(!tsc_enabled))
>   /* No locking but a rare wrong value is not a big deal: */
>   return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);
>  
> 
> -- 
> Ciao
> Stefano
> 


Re: [PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-06 Thread Nick Piggin
On Friday 07 December 2007 12:19, Stefano Brivio wrote:
> This patch fixes a regression introduced by:
>
> commit bb29ab26863c022743143f27956cc0ca362f258c
> Author: Ingo Molnar <[EMAIL PROTECTED]>
> Date:   Mon Jul 9 18:51:59 2007 +0200
>
> This caused the jiffies counter to leap back and forth on cpufreq changes
> on my x86 box. I'd say that we can't always assume that TSC does "small
> errors" only, when marked unstable. On cpufreq changes these errors can be
> huge.
>
> The original bug report can be found here:
> http://bugzilla.kernel.org/show_bug.cgi?id=9475
>
>
> Signed-off-by: Stefano Brivio <[EMAIL PROTECTED]>

While your fix should probably go into 2.6.24...

This particular issue has aggravated me enough times. Let's
fix the damn thing properly already... I think what would work best
is a relatively simple change to the API along these lines:
Index: linux-2.6/arch/x86/kernel/tsc_32.c
===
--- linux-2.6.orig/arch/x86/kernel/tsc_32.c
+++ linux-2.6/arch/x86/kernel/tsc_32.c
@@ -92,27 +92,35 @@ static inline void set_cyc2ns_scale(unsi
 /*
  * Scheduler clock - returns current time in nanosec units.
  */
-unsigned long long native_sched_clock(void)
+u64 native_sched_clock(u64 *sample)
 {
-	unsigned long long this_offset;
+	u64 now, delta;
 
 	/*
-	 * Fall back to jiffies if there's no TSC available:
+	 * Fall back to the default sched_clock() implementation (keep in
+	 * synch with kernel/sched.c) if there's no TSC available:
 	 * ( But note that we still use it if the TSC is marked
 	 *   unstable. We do this because unlike Time Of Day,
 	 *   the scheduler clock tolerates small errors and it's
 	 *   very important for it to be as fast as the platform
 	 *   can achive it. )
 	 */
-	if (unlikely(!tsc_enabled && !tsc_unstable))
-		/* No locking but a rare wrong value is not a big deal: */
-		return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);
+	if (unlikely(!tsc_enabled && !tsc_unstable)) {
+		now = (u64)jiffies;
+		delta = now - *sample;
+		*sample = now;
+
+		return delta * (NSEC_PER_SEC / HZ);
+
+	} else {
+		/* read the Time Stamp Counter: */
+		rdtscll(now);
+		delta = now - *sample;
+		*sample = now;
 
-	/* read the Time Stamp Counter: */
-	rdtscll(this_offset);
-
-	/* return the value in ns */
-	return cycles_2_ns(this_offset);
+		/* return the delta value in ns */
+		return cycles_2_ns(delta);
+	}
 }
 
 /* We need to define a real function for sched_clock, to override the
Index: linux-2.6/kernel/sched.c
===
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -72,9 +72,13 @@
  * This is default implementation.
  * Architectures and sub-architectures can override this.
  */
-unsigned long long __attribute__((weak)) sched_clock(void)
+u64 __attribute__((weak)) sched_clock(u64 *sample)
 {
-	return (unsigned long long)jiffies * (NSEC_PER_SEC / HZ);
+	u64 now = (u64)jiffies;
+	u64 delta = now - *sample;
+	*sample = now;
+
+	return delta * (NSEC_PER_SEC / HZ);
 }
 
 /*
@@ -314,7 +318,7 @@ struct rq {
 	unsigned long next_balance;
 	struct mm_struct *prev_mm;
 
-	u64 clock, prev_clock_raw;
+	u64 clock, prev_clock_sample;
 	s64 clock_max_delta;
 
 	unsigned int clock_warps, clock_overflows;
@@ -385,9 +389,7 @@ static inline int cpu_of(struct rq *rq)
  */
 static void __update_rq_clock(struct rq *rq)
 {
-	u64 prev_raw = rq->prev_clock_raw;
-	u64 now = sched_clock();
-	s64 delta = now - prev_raw;
+	u64 delta = sched_clock(&rq->prev_clock_sample);
 	u64 clock = rq->clock;
 
 #ifdef CONFIG_SCHED_DEBUG
@@ -416,7 +418,6 @@ static void __update_rq_clock(struct rq 
 		}
 	}
 
-	rq->prev_clock_raw = now;
 	rq->clock = clock;
 }
 
@@ -656,7 +657,6 @@ EXPORT_SYMBOL_GPL(sched_clock_idle_sleep
 void sched_clock_idle_wakeup_event(u64 delta_ns)
 {
 	struct rq *rq = cpu_rq(smp_processor_id());
-	u64 now = sched_clock();
 
 	rq->idle_clock += delta_ns;
 	/*
@@ -666,7 +666,7 @@ void sched_clock_idle_wakeup_event(u64 d
 	 * rq clock:
 	 */
 	spin_lock(&rq->lock);
-	rq->prev_clock_raw = now;
+	(void)sched_clock(&rq->prev_clock_sample);
 	rq->clock += delta_ns;
 	spin_unlock(&rq->lock);
 }
@@ -4967,7 +4967,7 @@ void __cpuinit init_idle(struct task_str
 	unsigned long flags;
 
 	__sched_fork(idle);
-	idle->se.exec_start = sched_clock();
+	idle->se.exec_start = 0;
 
 	idle->prio = idle->normal_prio = MAX_PRIO;
 	idle->cpus_allowed = cpumask_of_cpu(cpu);
Index: linux-2.6/arch/x86/kernel/tsc_64.c
===
--- linux-2.6.orig/arch/x86/kernel/tsc_64.c
+++ linux-2.6/arch/x86/kernel/tsc_64.c
@@ -30,18 +30,21 @@ static unsigned long long cycles_2_ns(un
 	return (cyc * cyc2ns_scale) >> NS_SCALE;
 }
 
-unsigned long long sched_clock(void)
+u64 sched_clock(u64 *sample)
 {
-	unsigned long a = 0;
+	u64 now, delta;
 
 	/* Could do CPU core sync here. Opteron can execute rdtsc speculatively,
 	 * which means it is not completely exact and may not be monotonous
 	 * between CPUs.

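The API change Nick proposes above - a caller-owned sample, with the clock returning only the elapsed delta - can be sketched without real hardware. `fake_tsc` below is a made-up stand-in for rdtscll(), with one cycle treated as one nanosecond:

```c
#include <assert.h>
#include <stdint.h>

/* Made-up counter standing in for the hardware TSC. */
static uint64_t fake_tsc;

/* The caller owns the previous raw sample; the clock only ever returns
 * the time elapsed since that sample and updates it in place. Keeping
 * the sample per caller (per runqueue, in the patch) means one bad
 * reading can't warp the clock for every user. */
static uint64_t sched_clock_delta(uint64_t *sample)
{
    uint64_t now = fake_tsc;         /* rdtscll(now) in the real patch */
    uint64_t delta = now - *sample;

    *sample = now;
    return delta;                    /* cycles_2_ns(delta) on x86 */
}
```

Because only deltas ever leave the function, an unstable counter produces at most one wrong interval rather than a clock that leaps back and forth - which is the bug Stefano reported.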
[PATCH] scheduler: fix x86 regression in native_sched_clock

2007-12-06 Thread Stefano Brivio
This patch fixes a regression introduced by:

commit bb29ab26863c022743143f27956cc0ca362f258c
Author: Ingo Molnar <[EMAIL PROTECTED]>
Date:   Mon Jul 9 18:51:59 2007 +0200

This caused the jiffies counter to leap back and forth on cpufreq changes
on my x86 box. I'd say that we can't always assume that TSC does "small
errors" only, when marked unstable. On cpufreq changes these errors can be
huge.

The original bug report can be found here:
http://bugzilla.kernel.org/show_bug.cgi?id=9475


Signed-off-by: Stefano Brivio <[EMAIL PROTECTED]>

---

diff --git a/arch/x86/kernel/tsc_32.c b/arch/x86/kernel/tsc_32.c
index 9ebc0da..d29cd9c 100644
--- a/arch/x86/kernel/tsc_32.c
+++ b/arch/x86/kernel/tsc_32.c
@@ -98,13 +98,8 @@ unsigned long long native_sched_clock(void)
 
/*
 * Fall back to jiffies if there's no TSC available:
-* ( But note that we still use it if the TSC is marked
-*   unstable. We do this because unlike Time Of Day,
-*   the scheduler clock tolerates small errors and it's
-*   very important for it to be as fast as the platform
-*   can achive it. )
 */
-   if (unlikely(!tsc_enabled && !tsc_unstable))
+   if (unlikely(!tsc_enabled))
/* No locking but a rare wrong value is not a big deal: */
return (jiffies_64 - INITIAL_JIFFIES) * (1000000000 / HZ);
 

-- 
Ciao
Stefano

