Hello,
I tried to run Windows XP with KVM on Linux 2.6.31.1 on an
AMD Opteron and on an Intel Xeon; on both it works fine!
After this test I patched the kernel with the current preempt patch, and on
both it doesn't work!
- After a very short time - I could see the Windows startup screen -
the
Hi Dmitry,
On Sun, 2007-10-21 at 11:35 +0200, Dmitry Adamushko wrote:
Hi Steven,
When an RT pull is initiated, all overloaded runqueues are examined for
an RT task that is higher in priority than the highest-priority task queued on the
target runqueue. If another runqueue holds an RT task that is
On RT, struct plist_head->lock is a raw_spinlock_t, but struct
futex_hash_bucket->lock,
which is assigned to plist_head->lock, is a spinlock, which becomes a mutex on RT.
Later, in plist_check_head(), spin_is_locked() can't figure out the right
type,
triggering a WARN_ON_SMP. As we were already special
On Mon, 2007-10-22 at 09:01 +0200, Back, Michael (ext) wrote:
Hello,
I tried to run Windows XP with KVM on Linux 2.6.31.1 on a
You mean .21.1 ?
AMD Opteron and on an Intel Xeon; on both it works fine!
After this test I patched the kernel with the current preempt patch, and on
both it doesn't
On Wed, 2007-10-17 at 11:34 -0400, Steven Rostedt wrote:
Hmm, what about a __seq_irqsave_raw and __seq_nop?
That way it spells out that irqs are NOT touched if it is not a raw lock.
I took out the nop, and just did a save-flags, which makes sense. There
is still more cleanup to do in that
Hi Steven,
Agreed with your comments in the previous message. Indeed, I missed some points.
On wakeup, we can wake up several RT tasks (as my test case does) and if
we only push one task, then the other tasks may not migrate over to the
other run queues. I logged this happening in my tests.
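The behavior discussed here - pushing repeatedly until no further task can be pushed, so that waking several RT tasks at once does not leave any of them stranded behind a single push - can be sketched in simplified userspace C. The structure, fields, and push_one() helper below are illustrative stand-ins, not the kernel's actual push_rt_task() machinery:

```c
/* Toy model: a runqueue with queued RT tasks and CPUs able to take them. */
struct runqueue {
	int nr_rt;      /* RT tasks queued on this runqueue */
	int spare_cpus; /* CPUs that could each take one more task */
};

/* Stand-in for a single push: try to move one task away,
 * returning nonzero on success. */
static int push_one(struct runqueue *rq)
{
	if (rq->spare_cpus == 0)
		return 0;
	rq->nr_rt--;
	rq->spare_cpus--;
	return 1;
}

/* Keep pushing until at most one RT task remains or no target
 * CPU can take another task - the multi-task wakeup case above. */
static void push_rt_tasks(struct runqueue *rq)
{
	while (rq->nr_rt > 1 && push_one(rq))
		;
}
```

With three tasks queued and room elsewhere, the loop drains the queue down to one task instead of stopping after the first push.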
--
On Tue, 23 Oct 2007, Dmitry Adamushko wrote:
Hi Steven,
Agreed with your comments in the previous message. Indeed, I missed some
points.
On wakeup, we can wake up several RT tasks (as my test case does) and if
we only push one task, then the other tasks may not migrate over to the
This patch adds the algorithm to pull tasks from RT-overloaded runqueues.
When an RT pull is initiated, all overloaded runqueues are examined for
an RT task that is higher in priority than the highest-priority task queued
on the target runqueue. If another runqueue holds an RT task that is of
higher prio
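The pull step described above can be sketched in simplified userspace C. The structures and accounting below are illustrative, not the kernel's actual runqueue code; as with kernel RT priorities, a lower prio value means higher priority:

```c
#define NR_CPUS     4
#define MAX_RT_PRIO 100  /* "no RT tasks queued" sentinel, as in the kernel */

/* Toy per-CPU runqueue state. */
struct runqueue {
	int overloaded;   /* more than one RT task queued */
	int highest_prio; /* best queued RT prio; MAX_RT_PRIO if none */
	int nr_rt;        /* number of queued RT tasks */
};

/* Examine all overloaded runqueues for an RT task strictly higher in
 * priority than anything queued on the target, and pull one over.
 * Returns nonzero if a task was pulled. */
static int pull_rt_task(struct runqueue rq[], int this_cpu)
{
	int cpu, src = -1, best = rq[this_cpu].highest_prio;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu == this_cpu || !rq[cpu].overloaded)
			continue;
		if (rq[cpu].highest_prio < best) { /* strictly better prio */
			best = rq[cpu].highest_prio;
			src = cpu;
		}
	}
	if (src < 0)
		return 0;
	/* "Migrate": move the task's accounting from source to target. */
	rq[src].nr_rt--;
	rq[src].overloaded = rq[src].nr_rt > 1;
	rq[this_cpu].nr_rt++;
	rq[this_cpu].highest_prio = best;
	rq[this_cpu].overloaded = rq[this_cpu].nr_rt > 1;
	return 1;
}
```

The real kernel code also handles locking and picks the actual task to migrate; this sketch only models the "find the best overloaded source" comparison.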
[
Changes since V1:
Updated to git tree 55b70a0300b873c0ec7ea6e33752af56f41250ce
Various clean ups suggested by Gregory Haskins, Dmitry Adamushko,
and Peter Zijlstra.
The biggest change was recommended by Ingo Molnar: the use of cpusets
for keeping track of RT
This patch adds pushing of overloaded RT tasks from a runqueue that is
having tasks (most likely RT tasks) added to the run queue.
TODO: We don't cover the case of waking of new RT tasks (yet).
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
 kernel/sched.c    |  2 ++
 kernel/sched_rt.c |
This patch adds accounting to keep track of the number of RT tasks running
on a runqueue. This information will be used in later patches.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
 kernel/sched.c    |  1 +
 kernel/sched_rt.c | 17 +
 2 files changed, 18 insertions(+)
This patch adds accounting to each runqueue to keep track of the
highest-priority task queued on the run queue. We only care about
RT tasks, so if the run queue does not contain any active RT tasks,
its priority will be considered MAX_RT_PRIO.
This information will be used for later patches.
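A minimal sketch of this accounting, recomputed on enqueue/dequeue - the struct and fixed-size array are illustrative, not the kernel's data structures; lower value means higher priority:

```c
#define MAX_RT_PRIO 100 /* prio of a runqueue with no active RT tasks */

/* Toy per-runqueue RT state. */
struct rt_rq {
	int nr_rt;        /* number of queued RT tasks */
	int prios[32];    /* their priorities, illustrative fixed array */
	int highest_prio; /* cached best prio; MAX_RT_PRIO if none */
};

/* Recompute the cached highest priority after enqueue or dequeue.
 * An empty queue falls back to MAX_RT_PRIO, as described above. */
static void update_highest_prio(struct rt_rq *rq)
{
	int i, best = MAX_RT_PRIO;

	for (i = 0; i < rq->nr_rt; i++)
		if (rq->prios[i] < best)
			best = rq->prios[i];
	rq->highest_prio = best;
}
```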
This patch adds an RT overload accounting system. When a runqueue has
more than one RT task queued, it is marked as overloaded; that is, it
is a candidate to have RT tasks pulled from it.
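The overload rule can be sketched as a bitmask update - one bit per CPU, set when the CPU's runqueue holds more than one RT task. This is a toy single-word mask in userspace C; the actual patch keeps the mask in the cpusets, as noted below:

```c
/* Mark or clear a CPU in the RT-overload mask, based on how many
 * RT tasks its runqueue currently has queued.  A set bit means the
 * CPU is a candidate for having RT tasks pulled from it. */
static void update_rt_overload(unsigned long *mask, int cpu,
			       int rt_nr_running)
{
	if (rt_nr_running > 1)
		*mask |= 1UL << cpu;  /* overloaded: pull candidate */
	else
		*mask &= ~(1UL << cpu);
}
```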
If CONFIG_CPUSET is defined, then an rt overloaded CPU bitmask is created
in the cpusets. The algorithm
Since we now take an active approach to load balancing, we don't need to
balance RT tasks via CFS. In fact, this code was found to pull RT tasks
away from the CPUs that the active movement had placed them on, resulting in
large latencies.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched_rt.c
Steven wrote:
+void cpuset_rt_set_overload(struct task_struct *tsk, int cpu)
+{
+	cpu_set(cpu, task_cs(tsk)->rt_overload);
+}
Question for Steven:
What locks are held when cpuset_rt_set_overload() is called?
Questions for Paul Menage:
Does 'tsk' need to be locked for the above