On Mon, 24 Sep 2007, Peter Zijlstra wrote:
On Mon, 24 Sep 2007 13:22:14 +0200 Mike Galbraith <[EMAIL PROTECTED]> wrote:
On Mon, 2007-09-24 at 12:42 +0200, Mike Galbraith wrote:
On Mon, 2007-09-24 at 12:24 +0200, Peter Zijlstra wrote:
how about something like:
s64 delta = (s64)(vruntime -
On Mon, 24 Sep 2007 13:22:14 +0200 Mike Galbraith <[EMAIL PROTECTED]> wrote:
> On Mon, 2007-09-24 at 12:42 +0200, Mike Galbraith wrote:
> > On Mon, 2007-09-24 at 12:24 +0200, Peter Zijlstra wrote:
> >
> > > how about something like:
> > >
> > > s64 delta = (s64)(vruntime - min_vruntime);
> > >
On Mon, 2007-09-24 at 13:08 +0200, Peter Zijlstra wrote:
> It's perfectly valid for min_vruntime to exist in 1ULL << 63.
But wrap backward timewarp is what's killing my box.
-Mike
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to
On Mon, 2007-09-24 at 12:42 +0200, Mike Galbraith wrote:
> On Mon, 2007-09-24 at 12:24 +0200, Peter Zijlstra wrote:
>
> > how about something like:
> >
> > s64 delta = (s64)(vruntime - min_vruntime);
> > if (delta > 0)
> > min_vruntime += delta;
> >
> > That would rid us of most of the
On Mon, 24 Sep 2007 12:42:15 +0200 Mike Galbraith <[EMAIL PROTECTED]> wrote:
> On Mon, 2007-09-24 at 12:24 +0200, Peter Zijlstra wrote:
>
> > how about something like:
> >
> > s64 delta = (s64)(vruntime - min_vruntime);
> > if (delta > 0)
> > min_vruntime += delta;
> >
> > That would rid
On Mon, 2007-09-24 at 12:24 +0200, Peter Zijlstra wrote:
> how about something like:
>
> s64 delta = (s64)(vruntime - min_vruntime);
> if (delta > 0)
> min_vruntime += delta;
>
> That would rid us of most of the funny conditionals there.
That still left me with negative min_vruntimes. The
On Mon, 24 Sep 2007 12:10:09 +0200 Mike Galbraith <[EMAIL PROTECTED]> wrote:
> @@ -117,7 +117,7 @@ static inline struct task_struct *task_o
> static inline u64
> max_vruntime(u64 min_vruntime, u64 vruntime)
> {
> - if ((vruntime > min_vruntime) ||
> + if (((s64)vruntime >
On Sun, 2007-09-23 at 23:21 -0700, Tong Li wrote:
> On Sun, 23 Sep 2007, Mike Galbraith wrote:
>
> > On Sat, 2007-09-22 at 12:01 +0200, Mike Galbraith wrote:
> >> On Fri, 2007-09-21 at 20:27 -0700, Tong Li wrote:
> >>> Mike,
> >>>
> >>> Could you try this patch to see if it solves the latency
Hi,
I haven't chased down the exact scenario, but using a min_vruntime which
is about to change definitely seems to be what's causing my latency
woes. Does the below cure your fairness woes as well?
(first bit is just some debug format changes for your convenience if you
try it)
diff -uprNX
On Sat, 2007-09-22 at 12:01 +0200, Mike Galbraith wrote:
> On Fri, 2007-09-21 at 20:27 -0700, Tong Li wrote:
> > Mike,
> >
> > Could you try this patch to see if it solves the latency problem?
>
> No, but it helps some when running two un-pinned busy loops, one at nice
> 0, and the other at nice
On Fri, 2007-09-21 at 20:27 -0700, Tong Li wrote:
> Mike,
>
> Could you try this patch to see if it solves the latency problem?
No, but it helps some when running two un-pinned busy loops, one at nice
0, and the other at nice 19. Yesterday I hit latencies of up to 1.2
_seconds_ doing this, and
Mike,
Could you try this patch to see if it solves the latency problem?
tong
Changes:
1. Modified vruntime adjustment logic in set_task_cpu(). See comments in
code. This fixed the negative vruntime problem.
2. This code in update_curr() seems to be wrong:
if (unlikely(!curr))
On Fri, Sep 21, 2007 at 04:40:55AM +0200, Mike Galbraith wrote:
> On Thu, 2007-09-20 at 21:48 +0200, Willy Tarreau wrote:
>
> > I don't know if this is relevant, but 4294966399 in nr_uninterruptible
> > for cpu#0 equals -897, exactly the negation of cpu1.nr_uninterruptible.
> > I don't know if
On Thu, 2007-09-20 at 21:48 +0200, Willy Tarreau wrote:
> I don't know if this is relevant, but 4294966399 in nr_uninterruptible
> for cpu#0 equals -897, exactly the negation of cpu1.nr_uninterruptible.
> I don't know if this rings a bell for someone or if it's a completely
> useless comment, but
On Thu, Sep 20, 2007 at 09:15:07AM +0200, Mike Galbraith wrote:
> But, I did just manage to trigger some horrid behavior, and log it. I
> modified the kernel to print task's actual tree key instead of their
> current vruntime, and was watching that while make -j2 was running (and
> not seeing
On Thu, 2007-09-20 at 09:51 +0200, Ingo Molnar wrote:
> * Mike Galbraith <[EMAIL PROTECTED]> wrote:
>
> > [...] I modified the kernel to print task's actual tree key instead
> > of their current vruntime, [...]
>
> btw., that looks like a debug printout bug in sched-devel.git - could
> you
* Mike Galbraith <[EMAIL PROTECTED]> wrote:
> [...] I modified the kernel to print task's actual tree key instead
> of their current vruntime, [...]
btw., that looks like a debug printout bug in sched-devel.git - could
you send me your fix? I've pushed out the latest sched-devel (ontop of
On Thu, 2007-09-20 at 06:55 +0200, Mike Galbraith wrote:
> On Wed, 2007-09-19 at 10:06 -0700, Tong Li wrote:
>
> > Were the experiments run on a 2-CPU system?
>
> Yes.
>
> > When Xorg experiences large
> > wait time, is it on the same CPU that has the two pinned tasks? If this is
> > the
On Wed, 2007-09-19 at 10:06 -0700, Tong Li wrote:
> Were the experiments run on a 2-CPU system?
Yes.
> When Xorg experiences large
> wait time, is it on the same CPU that has the two pinned tasks? If this is
> the case, then the problem could be X somehow advanced faster and got a
> larger
On Tue, Sep 18, 2007 at 11:03:59PM -0700, Tong Li wrote:
> This patch attempts to improve CFS's SMP global fairness based on the new
> virtual time design.
>
> Removed vruntime adjustment in set_task_cpu() as it skews global fairness.
>
> Modified small_imbalance logic in find_busiest_group().
On Wed, 2007-09-19 at 09:51 +0200, Mike Galbraith wrote:
> The scenario which was previously cured was this:
> taskset -c 1 nice -n 0 ./massive_intr 2
> taskset -c 1 nice -n 5 ./massive_intr 2
> click link
> (http://pages.cs.wisc.edu/~shubu/talks/cachescrub-prdc2004.ppt) to bring
> up
On Wed, 2007-09-19 at 08:28 +0200, Mike Galbraith wrote:
> Greetings,
>
> On Tue, 2007-09-18 at 23:03 -0700, Tong Li wrote:
> > This patch attempts to improve CFS's SMP global fairness based on the new
> > virtual time design.
> >
> > Removed vruntime adjustment in set_task_cpu() as it skews
Greetings,
On Tue, 2007-09-18 at 23:03 -0700, Tong Li wrote:
> This patch attempts to improve CFS's SMP global fairness based on the new
> virtual time design.
>
> Removed vruntime adjustment in set_task_cpu() as it skews global fairness.
Since I'm (still) encountering Xorg latency issues
This patch attempts to improve CFS's SMP global fairness based on the new
virtual time design.
Removed vruntime adjustment in set_task_cpu() as it skews global fairness.
Modified small_imbalance logic in find_busiest_group(). If there's small
imbalance, move tasks from busiest to local
On Tue, Sep 18, 2007 at 10:22:43PM +0200, Ingo Molnar wrote:
> (I have not tested the group scheduling bits but perhaps Srivatsa would
> like to do that?)
Ingo,
I plan to test it today and send you any updates that may be
required.
--
Regards,
vatsa
* Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> (3)
>
> rework enqueue/dequeue_entity() to get rid of
> sched_class::set_curr_task(). This simplifies sched_setscheduler(),
> rt_mutex_setprio() and sched_move_tasks().
ah. This makes your ready-queue patch a code size win. Applied.
* dimm <[EMAIL PROTECTED]> wrote:
> (1)
>
> due to the fact that we no longer keep the 'current' within the tree,
> dequeue/enqueue_entity() is useless for the 'current' in
> task_new_fair(). We are about to reschedule and
> sched_class->put_prev_task() will put the 'current' back into the
* Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> (2)
>
> the 'p' (task_struct) parameter in the sched_class :: yield_task() is
> redundant as the caller is always the 'current'. Get rid of it.
>
> Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
ah - good one! I completely forgot about
* dimm <[EMAIL PROTECTED]> wrote:
> [ well, don't expect to find here anything like RDCFS (no, 'D' does
> not stand for 'dumb'!). I was focused on more prosaic things in the
> mean time so just didn't have time for writing it.. ]
>
> here is a few cleanup/simplification/optimization(s) based
(3)
rework enqueue/dequeue_entity() to get rid of sched_class::set_curr_task().
This simplifies sched_setscheduler(), rt_mutex_setprio() and sched_move_tasks().
Signed-off-by: Dmitry Adamushko <[EMAIL PROTECTED]>
Signed-off-by: Srivatsa Vaddagiri <[EMAIL PROTECTED]>
---
diff --git
[ well, don't expect to find here anything like RDCFS
(no, 'D' does not stand for 'dumb'!). I was focused
on more prosaic things in the mean time so just
didn't have time for writing it.. ]
here is a few cleanup/simplification/optimization(s)
based on the recent modifications in the sched-dev