Hi guys,
While working with the -rt kernel, I have noticed a problem in KVM.
Specifically, when you stop a VM you sometimes get a sleep while
atomic oopses. It turns out that the issue is related to an
smp_function_call IPI that KVM does to remotely flush the VMX hardware
on shutdown. The code
On Fri, 2007-07-27 at 07:56 +0300, Avi Kivity wrote:
Gregory Haskins wrote:
Hi guys,
While working with the -rt kernel, I have noticed a problem in KVM.
Specifically, when you stop a VM you sometimes get a sleep while
atomic oopses. It turns out that the issue is related
On Tue, 2007-07-31 at 11:21 +0200, Ingo Molnar wrote:
[ mail re-sent with lkml Cc:-ed. _Please_ Cc: all patches to lkml too!
Unless you want -rt to suffer the fate of -ck, keep upstream involved
all the time. The recent /proc/interrupts-all discussion with upstream
folks showed the
On Tue, 2007-07-31 at 14:32 +0200, John Sigler wrote:
The motherboard manufacturer (well, their level 1 support, anyway) told
me I could safely enable the LAPIC. If it is safe to enable the LAPIC,
then why are they disabling it in the BIOS?
I would guess that because they don't have an
Changelog from v2:
1) Converted smp_call_function[_single]__nodelay to
raw_smp_call_function[_single] to match existing nomenclature in the -rt
series.
2) Removed all PI related code from Patch #1 and moved it to #2 where it
belonged.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
converted to rt_mutex under the hood. In summary, this
subsystem does for FCIPI interrupts what PREEMPT_HARDIRQs does for normal
interrupts.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
arch/i386/kernel/smpcommon.c | 16 +-
arch/ia64/kernel/smp.c |8 -
arch/powerpc/kernel/smp.c
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/vfcipi/thread.c | 144
1 files changed, 131 insertions(+), 13 deletions(-)
diff --git a/kernel/vfcipi/thread.c b/kernel/vfcipi/thread.c
index 45bb4e2..306560a 100644
--- a/kernel/vfcipi/thread.c
On Tue, Jul 31, 2007 at 5:25 AM, in message [EMAIL PROTECTED],
Ingo Molnar [EMAIL PROTECTED] wrote:
as far as the prioritization of function calls goes, _that_ makes sense,
but it should not be a separate API but should be done to our normal
workqueue APIs. That not only extends the
On Tue, 2007-07-31 at 10:26 -0400, Gregory Haskins wrote:
On Tue, Jul 31, 2007 at 5:25 AM, in message [EMAIL PROTECTED],
Ingo Molnar [EMAIL PROTECTED] wrote:
as far as the prioritization of function calls goes, _that_ makes sense,
but it should not be a separate API but should
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/workqueue.h |2
kernel/workqueue.c| 198 +
2 files changed, 149 insertions(+), 51 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 925d898
On Tue, 2007-07-31 at 20:52 -0700, Daniel Walker wrote:
Here's a simpler version .. uses the plist data structure instead of the
100 queues, which makes for a cleaner patch ..
Hi Daniel,
I like your idea on the plist simplification a lot. I will definitely
roll that into my series.
I am
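As an aside for readers of the archive: the plist idea Daniel refers to can be sketched in plain C. This is an illustrative toy, not the kernel's include/linux/plist.h (struct pnode, plist_add, and plist_first are made-up names): a single list kept sorted by priority replaces an array of one queue per priority level, so picking the highest-priority entry is just reading the head.

```c
#include <stddef.h>

/* Toy priority-sorted list; lower number = higher priority,
 * matching the kernel convention. */
struct pnode {
	int prio;
	struct pnode *next;
};

/* Insert keeping the list sorted; equal priorities stay FIFO
 * because we walk past nodes with prio <= the new node's. */
static void plist_add(struct pnode **head, struct pnode *node)
{
	while (*head && (*head)->prio <= node->prio)
		head = &(*head)->next;
	node->next = *head;
	*head = node;
}

/* The highest-priority element is simply the head. */
static struct pnode *plist_first(struct pnode *head)
{
	return head;
}
```

The trade-off is O(n) insertion versus the O(1) of 100 fixed queues, in exchange for much simpler code and no per-queue bookkeeping.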
On Wed, 2007-08-01 at 08:55 -0700, Daniel Walker wrote:
On Wed, 2007-08-01 at 11:19 -0400, Gregory Haskins wrote:
On Wed, 2007-08-01 at 08:10 -0700, Daniel Walker wrote:
rt_mutex_setprio() is just a function. It was also designed specifically
for PI, so it seems fairly sane to use
On Thu, 2007-08-02 at 00:50 +0400, Oleg Nesterov wrote:
On 08/01, Daniel Walker wrote:
On Thu, 2007-08-02 at 00:18 +0400, Oleg Nesterov wrote:
On 08/01, Daniel Walker wrote:
On Wed, 2007-08-01 at 22:12 +0400, Oleg Nesterov wrote:
And I personally think it is not very
On Thu, 2007-08-02 at 01:34 +0400, Oleg Nesterov wrote:
On 08/01, Gregory Haskins wrote:
On Thu, 2007-08-02 at 00:50 +0400, Oleg Nesterov wrote:
On 08/01, Daniel Walker wrote:
It's translating priorities through the work queues, which doesn't seem
to happen with the current
On Thu, 2007-08-02 at 02:22 +0400, Oleg Nesterov wrote:
No.
However, IIUC the point of flush_workqueue() is a barrier only relative
to your own submissions, correct? E.g. to make sure *your* requests
are finished, not necessarily the entire queue.
No,
You sure are a confident one
On Thu, 2007-08-02 at 23:50 +0400, Oleg Nesterov wrote:
I strongly believe you guys take a _completely_ wrong approach.
queue_work() should _not_ take the priority of the caller into
account, this is bogus.
I think you have argued very effectively that there are situations in
which the
On Mon, 2007-08-06 at 18:26 +0400, Oleg Nesterov wrote:
This is true of course, and I didn't claim this.
My apologies. I misunderstood you.
When will the job complete?
Immediately, A inserts the work on CPU 1.
Well, if you didn't care about which CPU, that's true. But suppose we
On Mon, 2007-08-06 at 18:45 +0400, Oleg Nesterov wrote:
On 08/06, Peter Zijlstra wrote:
still this does not change the fundamental issue of a high prio piece of
work waiting on a lower prio task.
^^^
waiting. This is a key word, and this was my (perhaps wrong) point.
On Mon, 2007-08-06 at 19:36 +0400, Oleg Nesterov wrote:
Well, the trylock+requeue avoids the obvious recursive deadlock, but
it introduces a more subtle error: the reschedule effectively bypasses
the flush.
this is OK, flush_workqueue() should only care about work_struct's that are
On Tue, 2007-08-07 at 10:03 -0700, Daniel Walker wrote:
Could you drop the following config options and test again?
#
# Processor type and features
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
Will do.
I have a patch which works around the issue too, which I will
On Tue, 2007-08-07 at 11:43 -0600, Gregory Haskins wrote:
The following patch converts double_lock_balance to a full DP algorithm to
work around a deadlock in the scheduler when running on an 8-way SMP system.
To LKML: This is an RT specific patch. I forgot to change the subject
line. Sorry
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 19 ---
1 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 6f2cf6a..e946e3f 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2507,14 +2507,19 @@ static int
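For context, the classic way to avoid the ABBA deadlock in a double runqueue lock is to impose a global lock order. Here is a minimal userspace sketch with pthreads, under stated assumptions: double_rq_lock/double_rq_unlock are illustrative names, and ordering is by mutex address, analogous to how the kernel orders by runqueue address.

```c
#include <pthread.h>
#include <stdint.h>

/* Always acquire the lower-addressed lock first, so two threads
 * double-locking the same pair in opposite argument order can
 * never deadlock against each other. */
static void double_rq_lock(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (a == b) {
		pthread_mutex_lock(a);
	} else if ((uintptr_t)a < (uintptr_t)b) {
		pthread_mutex_lock(a);
		pthread_mutex_lock(b);
	} else {
		pthread_mutex_lock(b);
		pthread_mutex_lock(a);
	}
}

static void double_rq_unlock(pthread_mutex_t *a, pthread_mutex_t *b)
{
	pthread_mutex_unlock(a);
	if (a != b)
		pthread_mutex_unlock(b);
}
```

Unlock order does not matter for correctness; only the acquisition order has to be globally consistent.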
with debugging some issues in
-rt. The second patch is really a workaround for where X86_64 frame-pointers
apparently are not working under certain circumstances.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
-
To unsubscribe from this list: send the line unsubscribe linux-rt-users in
the body
---
include/linux/sched.h |7 +--
kernel/latency_trace.c | 18 +++---
2 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 8ebb43c..233d26c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@
On Mon, 2007-09-17 at 22:20 +0200, Andi Kleen wrote:
I disagree for the oops case. You want the simplest possible code
here.
I would have to agree with Andi. Being conservative here is probably a
good thing to avoid nasties like oops recursion. No sense in polishing
the brass;) We
Doh! I guess there should be a rule about sending patches out after midnight
;)
The original patch I worked on was written before the code was moved to
validate_chain(), so my previous posting didn't quite translate when I merged
with git HEAD. Here is an updated patch. Sorry for the confusion.
in the cache
resulting in a new node being added to the graph for every acquire.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/lockdep.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index 734da57..efb0d7e 100644
RT
balancing.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 12 +---
1 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 93fd6de..aaacec2 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -631,6 +631,7 @@ static
through more reliably.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
drivers/char/sysrq.c|8 +
drivers/serial/8250.c | 239 ++-
drivers/serial/8250.h |6 +
drivers/serial/Kconfig | 16 +++
include/linux/serial_core.h
On Fri, 2007-10-05 at 18:41 +0200, Thomas Gleixner wrote:
On Fri, 5 Oct 2007, Gregory Haskins wrote:
This series may help debugging certain circumstances where the serial
console is unresponsive (e.g. RT51+ spinner, or scheduler problem). It
changes
the serial8250 driver to use
On Mon, 2007-10-08 at 10:10 -0400, Steven Rostedt wrote:
This issue has hit me enough times where I've played with a few other
ideas. I just haven't had the time to finish them. The main problem is if
the system locks up somewhere we have a lock held that keeps us from
scheduling. Once that
Hi All,
The first two patches are from Mike and Steven on LKML, which the rest of my
series is dependent on. Patch #4 is a resend from earlier.
Series Summary:
1) Send IPI on overload regardless of whether prev is an RT task
2) Set the NEEDS_RESCHED flag on reception of RESCHED_IPI
3) Fix a
From: Mike Kravetz [EMAIL PROTECTED]
RESCHED_IPIs can be missed if more than one RT task is awoken simultaneously
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |2 +-
1 files changed, 1 insertions(+), 1 deletions
Any number of tasks could be queued behind the current task, so direct the
balance IPI at all CPUs (other than current)
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Steven Rostedt [EMAIL PROTECTED]
CC: Mike Kravetz [EMAIL PROTECTED]
CC: Peter W. Morreale [EMAIL PROTECTED]
---
kernel
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins [EMAIL
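The cpumask idea above can be illustrated with a small C sketch. This is an assumed toy using a plain 64-bit mask, not the patch's actual identifiers (rto_mask, rto_set_overload, etc. are made-up names): CPUs flag themselves when they hold more than one runnable RT task, and the balancer then walks only the flagged CPUs.

```c
/* One bit per CPU (toy: at most 64 CPUs in this sketch). */
static unsigned long long rto_mask;

/* A CPU enters rt_overload: flag it for RT balancing. */
static void rto_set_overload(int cpu)
{
	rto_mask |= 1ULL << cpu;
}

/* The CPU is no longer overloaded: clear its flag. */
static void rto_clear_overload(int cpu)
{
	rto_mask &= ~(1ULL << cpu);
}

/* Cost scales with the number of overloaded CPUs, not with the
 * number online: popcount via clearing the lowest set bit. */
static int rto_count_overloaded(void)
{
	unsigned long long m = rto_mask;
	int n = 0;

	while (m) {
		m &= m - 1;
		n++;
	}
	return n;
}
```

The real kernel code would use cpumask_t and atomic set/clear, but the scalability argument is the same: the scan is bounded by the overloaded set rather than by the online set.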
This is my own interpretation of Peter's recommended changes to Steven's push-rt
patch. Just to be clear, Peter does not endorse this patch unless he himself
specifically says so ;).
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 12 ++--
1 files changed, 6
task->cpus_allowed can have bit positions that are set for CPUs that are
not currently online. So we optimize our search by ANDing against the online
set.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |6 +-
1 files changed, 5 insertions(+), 1 deletions(-)
diff
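The optimization described above is a single mask operation. A hedged sketch with plain bitmasks (first_candidate is an illustrative name; the kernel would use cpus_and() on cpumask_t):

```c
/* Restrict the allowed set to online CPUs, then return the
 * lowest-numbered candidate, or -1 if none survives the AND. */
static int first_candidate(unsigned long long allowed,
			   unsigned long long online)
{
	unsigned long long m = allowed & online;
	int cpu;

	if (!m)
		return -1;
	for (cpu = 0; !(m & (1ULL << cpu)); cpu++)
		;
	return cpu;
}
```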
this overhead, such
as: seqlocks, per_cpu data to avoid cacheline contention, avoiding locks
in the update code when possible, etc.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/cpupri.h | 25 +
kernel/Kconfig.preempt | 11 ++
kernel/Makefile|1
kernel/cpupri.c
Normalize the CPU priority system between the two search algorithms, and
modularize the search function within push_rt_tasks.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 91 ++--
1 files changed, 61 insertions
, or equilibrium is achieved. The original logic only tried to push one
task per event.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 69 ++--
1 files changed, 42 insertions(+), 27 deletions(-)
diff --git a/kernel
On Fri, 2007-10-12 at 07:47 -0400, Steven Rostedt wrote:
--
On Fri, 12 Oct 2007, Peter Zijlstra wrote:
And for that, steve's rq->curr_prio field seems quite suitable.
so instead of the:
for (3 tries)
find lowest cpu
try push
we do:
cpu_hotplug_lock();
This series applies to 2.6.23-rt1 + Steven Rostedt's last published push-rt
patch.
Changes since v1:
- Rebased to the final 23-rt1 from 23-rt1-pre1
- Rebased to Steve's last published patch
- Removed controversial cpupri algorithm (may revisit later, drop for now)
- Fixed a missing priority
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 23 ---
1 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 0a1ad0e..c9afc8a 100644
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index c9afc8a..50c88e8 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -7395,6 +7395,8 @@ void __init sched_init(void
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 50c88e8..62f9f0b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4663,6 +4663,7 @@ void rt_mutex_setprio
In theory, tasks will be most efficient if they are allowed to re-wake to
the CPU that they last ran on due to cache affinity. Short of that, it is
cheaper to wake up the current CPU. If neither of those two are options,
then the lowest CPU will do.
Signed-off-by: Gregory Haskins [EMAIL
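The routing preference described above reduces to a three-way fallback. A minimal sketch under stated assumptions (pick_wake_cpu and the *_ok flags are illustrative; in the real patch the "ok" conditions would come from priority comparisons against each runqueue):

```c
/* Prefer the task's last CPU (cache-hot), then the waking CPU
 * (no cross-CPU work needed), then the lowest-priority CPU found
 * by the search. The *_ok flags say whether that CPU can accept
 * the task right now. */
static int pick_wake_cpu(int last_cpu, int last_ok,
			 int this_cpu, int this_ok,
			 int lowest_cpu)
{
	if (last_ok)
		return last_cpu;
	if (this_ok)
		return this_cpu;
	return lowest_cpu;
}
```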
the (fairly expensive) checks (e.g. rq double-locks, etc)
in a subset (hopefully significant #) of the calls to schedule(),
which sounds like a good optimization to me ;) We shall see if that
pans out.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 16 ++--
1
I cranked out a little test application this weekend to look at problems
in the RT scheduler. You can find the source here.
ftp://ftp.novell.com/dev/ghaskins/preempt.cc
Here is some example output:
# ./a.out -d 40 -a
Starting test with 5 threads on 4 cpus
Calibration: 40ms = 11306781 loops
0 -
v3 contains the following changes since v2:
This is still based on 23-rt1 + Steve's last public patch (I think Steve has a
newer version available, but we have not rebased yet).
1) No longer includes the per-cpu-rtoverload patch, since Steve has already
ACKed it
2) Dropped the affinity patch
Priority of the running task can change at run-time due to
PI boosting / nice, so be sure to update the RQ version of
the priority as well.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/kernel
From: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched.c| 141 ++---
kernel/sched_rt.c | 44 +
2 files changed, 178 insertions(+), 7 deletions(-)
diff --git a/kernel/sched.c
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 16 +---
1 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 0da8c30..131f618 100644
On Fri, 2007-10-19 at 14:42 -0400, Steven Rostedt wrote:
plain text document attachment (add-rq-highest-prio.patch)
This patch adds accounting to each runqueue to keep track of the
highest prio task queued on the run queue. We only care about
RT tasks, so if the run queue does not contain any
On Tue, 2007-10-23 at 10:35 +0200, Jan Kiszka wrote:
Sven-Thorsten Dietrich wrote:
On Mon, 2007-10-22 at 09:01 +0200, Back, Michael (ext) wrote:
Hallo,
I tried to run Windows XP with KVM on Linux 2.6.31.1 on a
You mean .21.1 ?
Classic typo I interestingly also did several times
http://rt.wiki.kernel.org/index.php/Preemption_Test
Thanks to Darren Hart for fixing the permissions on the site for me.
And thanks to Steven Rostedt for inspiring this test.
(Steve, feel free to edit the page to include your test as well)
-Greg
This is version 5 of the patch series against 23-rt1.
There have been numerous fixes/tweaks since v4, though we still are based on
the global rto_cpumask logic instead of Steve/Ingo's cpuset logic. Otherwise,
it's in pretty good shape.
Without the series applied, the following test will fail:
We inadvertently added a redundant function, so clean it up
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|9 +
kernel/sched_rt.c | 44
2 files changed, 5 insertions(+), 48 deletions(-)
diff --git a/kernel
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 16 +---
1 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index e22eec7..dfd0b92 100644
This is an implementation of Steve's idea where we should update the RQ
concept of priority to show the highest-task, even if that task is not (yet)
running. This prevents us from pushing multiple tasks to the RQ before it
gets a chance to reschedule.
Signed-off-by: Gregory Haskins [EMAIL
1) One or more CPUs are in overload, AND
2) We are about to switch to a task that lowers our priority.
(3) will be addressed in a later patch.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 109
1 files changed, 62
We can avoid dirtying a rq related cacheline with a simple check, so why not.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index e536142..1058a1f 100644
--- a/kernel
From: Steven Rostedt [EMAIL PROTECTED]
Steve found these errors in the original patch
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c| 15 -
kernel/sched_rt.c | 90 +
2 files changed, 15 insertions(+), 90
We only need to track if the CPU is in a non-RT state, as opposed to its
priority within the non-RT state. So simplify setting in the effort of
reducing cache-thrash.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |8
1 files changed, 8 insertions(+), 0
with affinity restrictions, the algorithm has a
worst case complexity of O(min(102, NR_CPUS)), though the scenario that
yields the worst case search is fairly contrived.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/Makefile |2
kernel/sched.c| 37 +++--
kernel
is probably relatively expensive, so it is only
done when the cpus_allowed mask is updated (which should be relatively
infrequent, especially compared to scheduling frequency) and cached in
the task_struct.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/sched.h |1
kernel
Applies to 23-rt1
Changes since v5:
*) Folded some of the smaller patches together to address feedback
*) Fixed several minor bugs related to PI re-factoring and wakeup paths
I still have yet to make Ingo Oeser's suggestion to the last patch.
At this point, things are looking really really
Get rid of the superfluous dst_cpu, move the cpu_mask inside the search
function, and collapse the two redundant pick-next-rt() functions.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c| 27 ---
kernel/sched_rt.c | 44
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 18 +++---
1 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 7b9b481..ce5292f 100644
This is a mini-release of my series, rebased on -rt2. I have more changes
downstream which are not quite ready for primetime, but I need to work on some
other unrelated issues right now and I wanted to get what works out there.
Changes since v5
*) Rebased to rt2 - Many of the functions of the
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 10 ++
1 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 55da7d0..b59dc20 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -292,7 +292,6 @@ static
with affinity restrictions, the algorithm has a
worst case complexity of O(min(102, NR_CPUS)), though the scenario that
yields the worst case search is fairly contrived.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/Makefile |2
kernel/sched.c|4 +
kernel
On Thu, 2007-10-25 at 13:36 -0400, Steven Rostedt wrote:
Yep, -rt2 (and -rt3) are both horrible too. That's why I'm working on a
sched-domain version now to handle that.
Excellent. I'm not 100% sure I've got the mingo lingo ;) down enough
to know if sched_domains are the best fit, but I
On Thu, 2007-10-25 at 15:52 -0400, Steven Rostedt wrote:
+		p->sched_class->set_cpus_allowed(p, new_mask);
+	else {
+		p->cpus_allowed = new_mask;
+		p->nr_cpus_allowed = cpus_weight(new_mask);
+	}
+
/* Can
Steven Rostedt [EMAIL PROTECTED] 10/25/07 8:03 PM
Why do you think moving the logic to pick_next_highest is a better
design? To be honest, I haven't really studied your new logic in
push_rt_tasks to understand why you might feel this way. If you can
make the case that it is better in the
to scheduling frequency
in the fast path.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/sched.h |2 ++
kernel/fork.c |1 +
kernel/sched.c|9 +++-
kernel/sched_rt.c | 58 +
4 files changed, 64
Please fold into original -rt2 patches as appropriate
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 10 ++
1 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 55da7d0..b59dc20 100644
--- a/kernel
The current code use a linear algorithm which causes scaling issues
on larger SMP machines. This patch replaces that algorithm with a
2-dimensional bitmap to reduce latencies in the wake-up path.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/Makefile |1
kernel/sched.c
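The 2-D bitmap idea can be sketched roughly as follows. This is an assumed illustration, not the actual cpupri API (struct cpupri, cpupri_set, cpupri_find, and CPUPRI_LEVELS are made-up names here): dimension one is the priority level, dimension two a mask of which CPUs currently sit at that level, so finding a CPU running below a given priority scans at most ~102 levels instead of NR_CPUS runqueues.

```c
/* Toy: ~102 priority levels, lower index = lower priority,
 * and a 64-bit mask of CPUs per level. */
#define CPUPRI_LEVELS 102

struct cpupri {
	unsigned long long cpu_mask[CPUPRI_LEVELS];
};

/* Move a CPU from its old priority level to its new one. */
static void cpupri_set(struct cpupri *cp, int cpu, int newpri, int oldpri)
{
	cp->cpu_mask[oldpri] &= ~(1ULL << cpu);
	cp->cpu_mask[newpri] |= 1ULL << cpu;
}

/* Find an allowed CPU whose current priority is below task_pri,
 * or -1 if every allowed CPU is at or above it. */
static int cpupri_find(struct cpupri *cp, int task_pri,
		       unsigned long long allowed)
{
	int level, cpu;

	for (level = 0; level < task_pri; level++) {
		unsigned long long m = cp->cpu_mask[level] & allowed;

		if (!m)
			continue;
		for (cpu = 0; !(m & (1ULL << cpu)); cpu++)
			;
		return cpu;
	}
	return -1;
}
```

This also shows where the O(min(102, NR_CPUS)) worst case mentioned elsewhere in the series comes from: the scan is bounded by whichever is smaller, the number of levels or the number of CPUs that could be spread across them.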
Isolate the search logic into a function so that it can be used later
in places other than find_locked_lowest_rq().
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 62 -
1 files changed, 37 insertions(+), 25
:1809424 Min: 2 Act:5 Avg:7 Max: 59
T: 6 ( 5033) P:84 I:700 C:1550935 Min: 2 Act:6 Avg:6 Max: 54
T: 7 ( 5034) P:83 I:800 C:1357068 Min: 2 Act:7 Avg:6 Max: 62
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
It doesn't hurt if we allow the current CPU to be included in the
search. We will just simply skip it later if the current CPU turns out
to be the lowest.
We will use this later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c |5 +
1 files changed
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|1 +
kernel/sched_rt.c | 99 +++--
2 files changed, 88 insertions(+), 12 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index d16c686..8a27f09 100644
--- a/kernel/sched.c
inaccuracies caused by priority mistargeting from the lightweight lookup.
Most of the time, the pre-routing should work and yield lower overhead. In
the cases where it doesn't, the post-router will bat cleanup.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel
this_rq is normally used to denote the RQ on the current cpu
(i.e. cpu_rq(this_cpu)). So clean up the usage of this_rq to be
more consistent with the rest of the code.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 46
We have logic to detect whether the system has migratable tasks, but we are
not using it when deciding whether to push tasks away. So we add support
for considering this new information.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|2 ++
kernel/sched_rt.c | 10
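The accounting behind "has migratable tasks" can be sketched simply. This is a hedged toy, not the patch's code (struct rq_migration and the helpers are illustrative names): keep a per-runqueue count of queued RT tasks whose affinity allows more than one CPU, and skip the push path entirely when it is zero.

```c
/* Per-runqueue count of RT tasks that could actually migrate. */
struct rq_migration {
	int rt_nr_migratory;
};

/* Called on enqueue (add=1) / dequeue (add=0): only tasks allowed
 * on more than one CPU count as migratable. */
static void account_task(struct rq_migration *rq,
			 int nr_cpus_allowed, int add)
{
	if (nr_cpus_allowed > 1)
		rq->rt_nr_migratory += add ? 1 : -1;
}

/* Pushing is pointless if nothing queued here can move. */
static int should_push(const struct rq_migration *rq)
{
	return rq->rt_nr_migratory > 0;
}
```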
bandwidth.
Therefore, we create a new sched_class interface to help with
pre-wakeup routing decisions and move the load calculation as a function
of CFS task's class.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/sched.h |1
kernel/sched.c | 135
Primary issue is cpupri_init() is not defined, but also clean up
some warnings related to uniproc builds.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Dragan Noveski [EMAIL PROTECTED]
---
kernel/sched.c|2 ++
kernel/sched_cpupri.h |5 +
kernel/sched_rt.c |2