*) compute the context-switch pair time average for the system. This is
your time threshold (CSt).
This is not a uniform time. Consider the difference between a
context switch on the same hyperthread, a context switch between cores
on a die, a context switch between sockets, a context switch between
On Thursday 21 February 2008 16:27:22 Gregory Haskins wrote:
@@ -660,12 +660,12 @@ rt_spin_lock_fastlock(struct rt_mutex *lock,
void fastcall (*slowfn)(struct rt_mutex *lock))
{
/* Temporary HACK! */
- if (!current->in_printk)
- might_sleep();
- else
Use real time pcp locking for page draining during cpu unplug
Looks like a merging mistake that happened at some point. This
is the only place in the file that disables interrupts directly.
This fixes one case of CPU hotunplug failing on RT, but there
are still more.
Signed-off-by: Andi Kleen
If it's agreed that this is the fix - can you submit a proper [PATCH] so
all users of watchdog_use_timer_and_hpet_on_x86_64.patch can be removed,
and replaced with yours.
ftp://ftp.firstfloor.org/pub/ak/x86_64/quilt/patches/watchdog-fix
-Andi
On Wednesday 26 September 2007 20:03:12 David Bahi wrote:
Thanks to tglx and ghaskins for all the help in tracking down a very
early nmi_watchdog crash on certain x86_64 machines.
The patch is totally bogus. irq 0 doesn't say anything about whether
the current CPU still works or not. You always
On Monday 01 October 2007 20:54:21 Thomas Gleixner wrote:
On Mon, 1 Oct 2007, Andi Kleen wrote:
On Wednesday 26 September 2007 20:03:12 David Bahi wrote:
Thanks to tglx and ghaskins for all the help in tracking down a very
early nmi_watchdog crash on certain x86_64 machines
OTOH, the accounting hook would allow us to remove the IRQ#0 - CPU#0
restriction. Not sure whether it's worth the trouble.
Some SIS chipsets hang the machine when you migrate irq 0 to another
CPU. It's better to keep that. Also, I wouldn't be surprised if there are some
other assumptions about
count_active_rt_tasks() is undefined otherwise.
Signed-off-by: Andi Kleen [EMAIL PROTECTED]
Index: linux-2.6.23-rc4-rt1/kernel/timer.c
===================================================================
--- linux-2.6.23-rc4-rt1.orig/kernel/timer.c
+++ linux-2.6.23-rc4-rt1/kernel/timer.c
On Monday 17 September 2007 18:02:54 Sven-Thorsten Dietrich wrote:
On Mon, 2007-09-17 at 17:52 +0200, Andi Kleen wrote:
count_active_rt_tasks() is undefined otherwise.
This does fix the compile issue, but RT tasks can exist in !PREEMPT_RT
as well.
That might be, but neither
-static DEFINE_SPINLOCK(die_lock);
+static __raw_spinlock_t die_lock = __RAW_SPIN_LOCK_UNLOCKED;
You mean DEFINE_RAW_SPINLOCK() maybe? Unless I'm not following what you're
doing here..
I just copied that from tsc_sync.c. If it's correct there it's presumably
correct here too.
-Andi
Since it's all got __ in the front, not good to use this method all
over .. If you just need a real spinlock best to use
DEFINE_RAW_SPINLOCK() unless you're a special situation ..
Oopsing is a special situation. Nobody knows if all the fancy infrastructure
lurking inside the other macros still
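For comparison, the DEFINE_RAW_SPINLOCK() version being argued for would look roughly like this kernel-only sketch (it assumes the -rt tree's macro and lock API of that era, and die_notify_example is a hypothetical caller, so it is not compilable outside the kernel):

```c
/* Declares die_lock as a raw (always-spinning) lock without
 * touching the __-prefixed internals directly. */
static DEFINE_RAW_SPINLOCK(die_lock);

static void die_notify_example(void)
{
	unsigned long flags;

	spin_lock_irqsave(&die_lock, flags);
	/* oops-path work that must run with interrupts disabled */
	spin_unlock_irqrestore(&die_lock, flags);
}
```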
On Monday 17 September 2007 21:38, Sven-Thorsten Dietrich wrote:
On Mon, 2007-09-17 at 12:07 -0700, Daniel Walker wrote:
On Mon, 2007-09-17 at 21:00 +0200, Andi Kleen wrote:
Since it's all got __ in the front, not good to use this method all
over .. If you just need a real spinlock best
Konstantin Baydarov [EMAIL PROTECTED] writes:
Hi, I've faced a problem:
I have two x86_64 kernels with HPET enabled:
Is this for a standard kernel or for a RT kernel?
-Andi
-
To unsubscribe from this list: send the line unsubscribe linux-rt-users in
the body of a message to [EMAIL PROTECTED]