From: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
---
kernel/sched.c| 141 ++---
kernel/sched_rt.c | 44 +
2 files changed, 178 insertions(+), 7 deletions(-)
diff --git
Applies to 23-rt1 + Steve's latest push_rt patch
Changes since v3:
1) Rebased to Steve's latest
2) Added a "highest_prio" feature to eliminate a race w.r.t. activating a task
and the time it takes to actually reschedule the RQ.
3) Dropped the PI patch, because the highest_prio patch obsoletes
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins [EMAIL
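As a rough sketch of the idea (names, locking and helpers here are illustrative assumptions, not the actual patch), the overload state becomes a mask that is set and cleared as runqueues enter and leave overload, so the balancer only scans the marked CPUs:

static cpumask_t rto_mask = CPU_MASK_NONE;	/* assumed name */
static DEFINE_SPINLOCK(rto_lock);

static void rt_set_overload(int cpu)
{
	spin_lock(&rto_lock);
	cpu_set(cpu, rto_mask);
	spin_unlock(&rto_lock);
}

static void rt_clear_overload(int cpu)
{
	spin_lock(&rto_lock);
	cpu_clear(cpu, rto_mask);
	spin_unlock(&rto_lock);
}

static int rt_overloaded(void)
{
	/* cheap read-side check: is any CPU currently marked overloaded? */
	return !cpus_empty(rto_mask);
}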
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 16 +---
1 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 0da8c30..131f618 100644
We should init the base value of the current RQ priority to IDLE
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 131f618..d68f600 100644
--- a/kernel/sched.c
+++ b
This is an implementation of Steve's idea where we should update the RQ's
concept of priority to reflect the highest-priority task, even if that task is not (yet)
running. This prevents us from pushing multiple tasks to the RQ before it
gets a chance to reschedule.
Signed-off-by: Gregory Haskins [EMAIL
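A hedged sketch of what that could look like (the field and helper names are assumptions, not necessarily what the patch uses): the runqueue advertises the best queued RT priority at enqueue time, so remote CPUs see the pending task before the local CPU has rescheduled.

/* Illustrative only: lower numeric prio means higher RT priority. */
static inline void rq_note_queued_prio(struct rq *rq, struct task_struct *p)
{
	if (p->prio < rq->highest_prio)		/* assumed field name */
		rq->highest_prio = p->prio;
}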
Get rid of the superfluous dst_cpu, and move the cpu_mask inside the search
function.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 18 +++---
1 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 67034aa
1) One or more CPUs are in overload, AND
2) We are about to switch to a task that lowers our priority.
(3) will be addressed in a later patch.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 88 ++--
1 files changed, 41
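For illustration only (helper names assumed), the condition described above amounts to a check like this in the context-switch path:

	/*
	 * Only try to push when some runqueue is RT-overloaded and the task
	 * we are switching to is lower priority than the one we are leaving
	 * (higher numeric prio == lower RT priority).
	 */
	if (unlikely(rt_overloaded() && next->prio > prev->prio))
		push_rt_tasks(rq);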
From: Steven Rostedt [EMAIL PROTECTED]
Steve found these errors in the original patch
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|2 +-
kernel/sched_rt.c |2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
We can avoid dirtying an rq-related cacheline with a simple check, so why not.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
0 files changed, 0 insertions(+), 0 deletions(-)
Oops...forgot to refresh this patch before mailing it. Here's the actual
patch.
We can avoid dirtying an rq-related cacheline with a simple check, so why not.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions
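The "simple check" is just write avoidance: only store when the value actually changes, so the shared cacheline is not dirtied (and bounced) needlessly. A minimal sketch, with the field name assumed:

	if (rq->highest_prio != prio)	/* skip the store if already correct */
		rq->highest_prio = prio;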
1) One or more CPUs are in overload, AND
2) We are about to switch to a task that lowers our priority.
(3) will be addressed in a later patch.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 96
1 files c
We should init the base value of the current RQ priority to "IDLE"
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 1866a6e..b69e49b 100644
---
Priority of the running task can change at run-time due to
PI boosting / nice, so be sure to update the RQ version of
the priority as well.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 16 +---
1 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 0a1ad0e..1866a6e
v3 contains the following changes since v2:
This is still based on 23-rt1 + Steve's last public patch (I think Steve has a
newer version available, but we have not rebased yet).
1) No longer includes the per-cpu-rtoverload patch, since Steve has already
ACKed it
2) Dropped the affinity patch
On Fri, 2007-10-12 at 20:16 -0400, Gregory Haskins wrote:
> In theory, tasks will be most efficient if they are allowed to re-wake to
> the CPU that they last ran on due to cache affinity. Short of that, it is
> cheaper to wake up the current CPU. If neither of those two are option
On Mon, 2007-10-15 at 13:45 -0400, Steven Rostedt wrote:
>
> --
>
> On Fri, 12 Oct 2007, Gregory Haskins wrote:
>
> > A little cleanup to avoid #ifdef proliferation later in the series
> >
> > Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
>
> NACK
On Mon, 2007-10-15 at 14:05 -0400, Steven Rostedt wrote:
> --
> On Fri, 12 Oct 2007, Gregory Haskins wrote:
>
> > There are three events that require consideration for redistributing RT
> > tasks:
> >
> > 1) When one or more higher-priority tasks preempts a lower
the (fairly expensive) checks (e.g. rq double-locks, etc)
in a subset (hopefully significant #) of the calls to schedule(),
which sounds like a good optimization to me ;) We shall see if that
pans out.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c
In theory, tasks will be most efficient if they are allowed to re-wake to
the CPU that they last ran on due to cache affinity. Short of that, it is
cheaper to wake up the current CPU. If neither of those two are options,
then the lowest CPU will do.
Signed-off-by: Gregory Haskins <[EM
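A rough sketch of that selection order (rt_task_fits() and find_lowest_cpu() are hypothetical helpers used for illustration, not necessarily what the patch defines):

static int select_rt_wake_cpu(struct task_struct *p)
{
	int this_cpu = smp_processor_id();
	int last_cpu = task_cpu(p);

	/* 1) Prefer the CPU the task last ran on (cache affinity). */
	if (cpu_isset(last_cpu, p->cpus_allowed) && rt_task_fits(p, last_cpu))
		return last_cpu;

	/* 2) Otherwise the waking CPU is the cheapest to use. */
	if (cpu_isset(this_cpu, p->cpus_allowed) && rt_task_fits(p, this_cpu))
		return this_cpu;

	/* 3) Failing both, fall back to the lowest-priority CPU. */
	return find_lowest_cpu(p);
}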
1) One or more CPUs are in overload, AND
2) We are about to switch to a task that lowers our priority.
(3) will be addressed in a later patch.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 68 ++--
1 files c
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 50c88e8..62f9f0b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4663,6 +4663,7 @@ void rt_mutex_s
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins <[EM
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 23 ---
1 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 0a1ad0e..c9afc8a
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index c9afc8a..50c88e8 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -7395,6 +7395,8 @@ void __init sche
This series applies to 2.6.23-rt1 + Steven Rostedt's last published "push-rt"
patch.
Changes since v1:
- Rebased to the final 23-rt1 from 23-rt1-pre1
- Rebased to Steve's last published patch
- Removed controversial "cpupri" algorithm (may revisit later, drop for now)
- Fixed a missing priority
On Fri, 2007-10-12 at 12:29 +0200, Peter Zijlstra wrote:
> I'm wondering why we need the cpu prio management stuff.
I know we covered most of this on IRC, but let me recap so everyone can
follow the thread:
1) The cpupri alg is just one search alg vs the other. I think we are
all in agreement
On Fri, 2007-10-12 at 07:47 -0400, Steven Rostedt wrote:
> --
>
> On Fri, 12 Oct 2007, Peter Zijlstra wrote:
>
> >
> > And for that, steve's rq->curr_prio field seems quite suitable.
> >
> > so instead of the:
> > for (3 tries)
> > find lowest cpu
> > try push
> >
> > we do:
> >
> > cpu_hotplug_lock();
, or equilibrium is achieved. The original logic only tried to push one
task per event.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 69 ++--
1 files changed, 42 insertions(+), 27 deletions(-)
diff --git a/
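In other words, the per-event logic becomes a loop that keeps pushing until a push attempt fails or nothing is left to push. A minimal sketch, assuming push_rt_task() is the existing single-task push:

static void push_rt_tasks(struct rq *rq)
{
	/* push queued RT tasks until one cannot be migrated */
	while (push_rt_task(rq))
		;
}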
this overhead, such
as: seqlocks, per_cpu data to avoid cacheline contention, avoiding locks
in the update code when possible, etc.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
include/linux/cpupri.h | 25 +
kernel/Kconfig.preempt | 11 ++
kernel/Makefile|1
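As a purely illustrative sketch of the kind of structure this implies (the real cpupri.h may well differ): one cpumask per priority level, guarded by a seqlock so the hot lookup path avoids the write-side lock.

#define CPUPRI_NR_PRI	(MAX_RT_PRIO + 2)	/* assumed: RT levels plus idle/normal */

struct cpupri {
	seqlock_t lock;				/* writers: priority changes    */
	cpumask_t pri_to_cpu[CPUPRI_NR_PRI];	/* which CPUs run at each prio  */
	int       cpu_to_pri[NR_CPUS];		/* current priority of each CPU */
};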
Normalize the CPU priority system between the two search algorithms, and
modularize the search function within push_rt_tasks.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 91 ++--
1 files changed, 61 inse
This is my own interpretation of Peter's recommended changes to Steven's push-rt
patch. Just to be clear, Peter does not endorse this patch unless he himself
specifically says so ;).
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 12 ++--
1 files chan
task->cpus_allowed can have bit positions that are set for CPUs that are
not currently online. So we optimize our search by ANDing against the online
set.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |6 +-
1 files changed, 5 insertions(+), 1 deletions
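The optimization is a one-line mask intersection before the search, roughly:

	cpumask_t mask;

	/* only consider CPUs that are both allowed by the task and online */
	cpus_and(mask, p->cpus_allowed, cpu_online_map);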
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins <[EM
From: Steven Rostedt <[EMAIL PROTECTED]>
This has been compile tested (and no more ;-)
The idea here is when we find a situation that we just scheduled in an
RT task and we either pushed a lesser RT task away or more than one RT
task was scheduled on this CPU before scheduling occurred.
The
The current series applies to 23-rt1-pre1.
This is a snapshot of the current work-in-progress for the rt-overload
enhancements. The primary motivation for the series is to improve the
algorithm for distributing RT tasks to keep the highest-priority tasks active. The
current system tends to blast IPIs
Applies to 2.6.23-rc9-rt2... This is another RTO related fix from the thread
two days ago.
---
RT: Fix special-case exception for preempting the local CPU
Check whether the local CPU is eligible to take the task before trying to
preempt it.
Signed-off-by: Gregory Haskins <[EMAIL PROTEC
On Tue, 2007-10-09 at 11:00 -0400, Steven Rostedt wrote:
>
Hi Steve, Peter,
> --
> On Tue, 9 Oct 2007, Gregory Haskins wrote:
> > Hi All,
>
> Hi Gregory,
>
> >
> > The first two patches are from Mike and Steven on LKML, which the rest of my
> > ser
those affected units.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Peter W. Morreale <[EMAIL PROTECTED]>
---
kernel/sched.c | 15 +--
1 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index a28ca9d..6ca5f4f 100644
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins <[EM
From: Mike Kravetz <[EMAIL PROTECTED]>
x86_64 based RESCHED_IPIs fail to set the reschedule flag
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
arch/x86_64/kernel/smp.c |6 +++---
1 files changed, 3 insertio
Any number of tasks could be queued behind the current task, so direct the
balance IPI at all CPUs (other than current)
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Steven Rostedt <[EMAIL PROTECTED]>
CC: Mike Kravetz <[EMAIL PROTECTED]>
CC: Peter W. Morreale
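As a hedged sketch (not the literal patch), directing the IPI at everyone but ourselves looks roughly like:

	int cpu, this_cpu = smp_processor_id();

	/* kick every other online CPU so RT tasks queued behind their
	 * current task get a chance to be rebalanced */
	for_each_online_cpu(cpu) {
		if (cpu != this_cpu)
			smp_send_reschedule(cpu);
	}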
From: Mike Kravetz <[EMAIL PROTECTED]>
RESCHED_IPIs can be missed if more than one RT task is awoken simultaneously
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |2 +-
1 files changed, 1 insertio
Hi All,
The first two patches are from Mike and Steven on LKML, which the rest of my
series is dependent on. Patch #4 is a resend from earlier.
Series Summary:
1) Send IPI on overload regardless of whether prev is an RT task
2) Set the NEEDS_RESCHED flag on reception of RESCHED_IPI
3) Fix a
Hi Guys,
Nice find! Comment inline..
(adding linux-rt-users)
for reference to
http://lkml.org/lkml/2007/10/8/252
On Mon, 2007-10-08 at 22:46 -0400, Steven Rostedt wrote:
> Index: linux-2.6.23-rc9-rt2/kernel/sched.c
> ===
>
On Mon, 2007-10-08 at 10:41 -0400, Steven Rostedt wrote:
> --
> On Mon, 8 Oct 2007, Gregory Haskins wrote:
> >
> > Hi Steve,
> > What you describe is exactly what I did. The IRQF_NODELAY handler
> > just minimally checks to see if the character is a sysrq relate
On Mon, 2007-10-08 at 10:10 -0400, Steven Rostedt wrote:
> This issue has hit me enough times where I've played with a few other
> ideas. I just haven't had the time to finish them. The main problem is if
> the system locks up somewhere we have a lock held that keeps us from
> scheduling. Once
On Fri, 2007-10-05 at 18:41 +0200, Thomas Gleixner wrote:
> On Fri, 5 Oct 2007, Gregory Haskins wrote:
> > This series may help debugging certain circumstances where the serial
> > console is unresponsive (e.g. RT51+ spinner, or scheduler problem). It
> > changes
> > t
through more reliably.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
drivers/char/sysrq.c|8 +
drivers/serial/8250.c | 239 ++-
drivers/serial/8250.h |6 +
drivers/serial/Kconfig | 16 +++
include/linux/serial_core.h
This is a cleanup in preparation for the console-nodelay patch to follow
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
drivers/serial/8250.c | 459 ++---
1 files changed, 241 insertions(+), 218 deletions(-)
diff --git a/drivers/seria
This series may help debugging certain circumstances where the serial
console is unresponsive (e.g. RT51+ spinner, or scheduler problem). It changes
the serial8250 driver to use IRQF_NODELAY so that interrupts execute in irq
context instead of a kthread.
It works pretty well on this end, though
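For reference, IRQF_NODELAY is applied when the interrupt is requested; a hedged sketch of what the 8250 registration change amounts to (the exact flags and call site in the patch may differ):

	/* IRQF_NODELAY (an -rt extension) keeps the handler in hard-irq
	 * context instead of a threaded handler, so sysrq characters on the
	 * console can still be processed while the system is wedged. */
	retval = request_irq(irq, serial8250_interrupt,
			     IRQF_SHARED | IRQF_NODELAY, "serial", i);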
eed RT
balancing.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 12 +---
1 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 93fd6de..aaacec2 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -631
inadvertently fail to find a hit in the cache
resulting in a new node being added to the graph for every acquire.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/lockdep.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/lockdep.c b/kernel/
Doh! I guess there should be a rule about sending patches out after midnight
;)
The original patch I worked on was written before the code was moved to
validate_chain(), so my previous posting didn't quite translate when I merged
with git HEAD. Here is an updated patch. Sorry for the confusion.