On Sat, 2008-02-23 at 13:31 +0100, Andi Kleen wrote:
> > *) compute the context-switch pair time average for the system. This is
> > your time threshold (CSt).
>
> This is not a uniform time. Consider the difference between
> context switch on the same hyperthread, context switch between cores
> on a die, context switch between sockets, context switch between
Hi Andi,
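Andi's point that there is no single system-wide context-switch cost can be checked from userspace. The sketch below is illustrative only (the function and constant names are mine, not from the patch): it ping-pongs a token between two threads pinned to chosen CPUs, so the same measurement run with different CPU pairs exposes the topology dependence he describes.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <semaphore.h>
#include <time.h>

#define ROUNDS 5000

static sem_t ping, pong;

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* Best effort: silently keep the default placement if `cpu` is absent. */
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *partner(void *arg)
{
	pin_to_cpu(*(int *)arg);
	for (int i = 0; i < ROUNDS; i++) {
		sem_wait(&ping);
		sem_post(&pong);
	}
	return NULL;
}

/* Average one-way switch/wakeup cost in nanoseconds with the caller on
 * cpu_a and the partner thread on cpu_b. */
static long switch_pair_ns(int cpu_a, int cpu_b)
{
	pthread_t t;
	struct timespec t0, t1;
	long ns;

	sem_init(&ping, 0, 0);
	sem_init(&pong, 0, 0);
	pin_to_cpu(cpu_a);
	pthread_create(&t, NULL, partner, &cpu_b);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ROUNDS; i++) {
		sem_post(&ping);
		sem_wait(&pong);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	pthread_join(t, NULL);
	sem_destroy(&ping);
	sem_destroy(&pong);

	ns = (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
	return ns / (2L * ROUNDS);
}
```

Comparing `switch_pair_ns(0, 0)` (forced switches on one CPU) with `switch_pair_ns(0, 1)` typically yields different numbers; note the cross-CPU case measures wakeup cost more than a true switch, which only reinforces the objection that one averaged CSt cannot cover all cases.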
On Fri, 2008-02-22 at 13:36 -0700, Peter W. Morreale wrote:
> On Fri, 2008-02-22 at 11:55 -0800, Sven-Thorsten Dietrich wrote:
> >
> > In high-contention, short-hold time situations, it may even make sense
> > to have multiple CPUs with multiple waiters spinning, depending on
> > hold-time vs. time to put a waiter to sleep and wake them up.
Paul E. McKenney wrote:
> Governing the timeout by context-switch overhead sounds even better to me.
> Really easy to calibrate, and short critical sections are of much shorter
> duration than a context-switch pair.
Yeah, fully agree. This is on my research "todo" list. My theory is
that the
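The policy Paul describes, spin for roughly what a context-switch pair would cost and only then go to sleep, can be sketched in userspace. This is a minimal illustration under assumed names (`adaptive_lock`, the `budget` parameter), with `sched_yield()` standing in for the rtmutex sleep path:

```c
#include <sched.h>
#include <stdatomic.h>

/* Spin on the lock word for at most `budget` iterations -- a budget the
 * caller would derive from the measured context-switch pair time -- then
 * start yielding the CPU, which stands in for the rtmutex sleep path. */
static void adaptive_lock(atomic_int *lock, long budget)
{
	long spins = 0;
	int expected = 0;

	while (!atomic_compare_exchange_weak(lock, &expected, 1)) {
		expected = 0;	/* CAS failure overwrote it; reset */
		if (++spins > budget)
			sched_yield();
	}
}

static void adaptive_unlock(atomic_int *lock)
{
	atomic_store(lock, 0);
}
```

The intuition behind calibrating rather than hardcoding the threshold: once spinning has already cost as much as a context-switch pair, sleeping can no longer lose.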
On Fri, 2008-02-22 at 11:55 -0800, Sven-Thorsten Dietrich wrote:
>
> In high-contention, short-hold time situations, it may even make sense
> to have multiple CPUs with multiple waiters spinning, depending on
> hold-time vs. time to put a waiter to sleep and wake them up.
>
> The wake-up side could
On Fri, Feb 22, 2008 at 11:55:45AM -0800, Sven-Thorsten Dietrich wrote:
>
> On Fri, 2008-02-22 at 11:43 -0800, Paul E. McKenney wrote:
> > On Fri, Feb 22, 2008 at 11:21:14AM -0800, Bill Huey (hui) wrote:
> > > On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <[EMAIL PROTECTED]>
> > > wrote:
> >
On Fri, 2008-02-22 at 11:43 -0800, Paul E. McKenney wrote:
>
> The fixed-time spins are very useful in cases where the critical section
> is almost always very short but can sometimes be very long. In such
> cases, you would want to spin until either ownership changes or it is
> apparent that
On Fri, 2008-02-22 at 11:43 -0800, Paul E. McKenney wrote:
> On Fri, Feb 22, 2008 at 11:21:14AM -0800, Bill Huey (hui) wrote:
> > On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <[EMAIL PROTECTED]> wrote:
> > > Yeah, I'm not very keen on having a constant there without some
> > > contention
On Fri, Feb 22, 2008 at 11:21:14AM -0800, Bill Huey (hui) wrote:
> On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <[EMAIL PROTECTED]> wrote:
> > Yeah, I'm not very keen on having a constant there without some
> > contention instrumentation to see how long the spins are. It would be
> >
On Fri, Feb 22, 2008 at 11:19 AM, Bill Huey (hui) <[EMAIL PROTECTED]> wrote:
> Yeah, I'm not very keen on having a constant there without some
> contention instrumentation to see how long the spins are. It would be
> better to just let it run until either task->on_cpu is off or checking
> if
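Bill's alternative heuristic, spin only while the lock holder is actually running rather than for a fixed count, looks roughly like this as a userspace sketch. The struct and function names are illustrative, and an atomic flag stands in for the kernel's `task->on_cpu` check:

```c
#include <sched.h>
#include <stdatomic.h>

struct owner_lock {
	atomic_int locked;		/* 0 = free, 1 = held */
	atomic_int owner_running;	/* stand-in for task->on_cpu */
};

static void owner_lock_acquire(struct owner_lock *l)
{
	int expected = 0;

	while (!atomic_compare_exchange_weak(&l->locked, &expected, 1)) {
		expected = 0;
		/* Keep spinning only while the holder is making progress;
		 * if it has been scheduled out, stop burning cycles. */
		if (!atomic_load(&l->owner_running))
			sched_yield();	/* the rtmutex would sleep here */
	}
	atomic_store(&l->owner_running, 1);
}

static void owner_lock_release(struct owner_lock *l)
{
	atomic_store(&l->owner_running, 0);
	atomic_store(&l->locked, 0);
}
```

Unlike a fixed budget, this needs no tunable at all, which is why it keeps coming up against the RTLOCK_DELAY constant in this thread.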
On Fri, Feb 22, 2008 at 11:08 AM, Paul E. McKenney
<[EMAIL PROTECTED]> wrote:
> One approach would be to set the RTLOCK_DELAY parameter to something like
> -1 for default, and to set it to the number of cycles required for about
> 10 cache misses at boot time. This would automatically scale
On Thu, Feb 21, 2008 at 05:41:09PM +0100, Andi Kleen wrote:
>
> > +config RTLOCK_DELAY
> > + int "Default delay (in loops) for adaptive rtlocks"
> > + range 0 10
> > + depends on ADAPTIVE_RTLOCK
>
> I must say I'm not a big fan of putting such subtle configurable numbers
> into Kconfig.
> +config RTLOCK_DELAY
> + int "Default delay (in loops) for adaptive rtlocks"
> + range 0 10
> + depends on ADAPTIVE_RTLOCK
I must say I'm not a big fan of putting such subtle configurable numbers
into Kconfig. Compilation is usually the wrong place to configure
such a thing. Just
From: Sven Dietrich <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED]>
---
 kernel/Kconfig.preempt    |   11 +++
 kernel/rtmutex.c          |    4
 kernel/rtmutex_adaptive.h |   11 +--
 kernel/sysctl.c           |   12
 4 files changed, 36
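Andi's complaint is about where the knob lives, and the diffstat above already touches kernel/sysctl.c, pointing at the runtime-tunable answer. A 2.6.24-era sysctl entry might look roughly like the following sketch; the table and variable names are illustrative, not taken from the actual patch:

```c
/* Illustrative only: expose the adaptive-rtlock spin budget at runtime
 * instead of freezing it at compile time via Kconfig. */
static int rtlock_delay = -1;		/* -1: calibrate at boot */

static struct ctl_table rtlock_ctl_table[] = {
	{
		.ctl_name	= CTL_UNNUMBERED,
		.procname	= "rtlock_delay",
		.data		= &rtlock_delay,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= &proc_dointvec,
	},
	{ .ctl_name = 0 }
};
```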