On Thu, 2007-07-26 at 23:31 +0200, Ingo Molnar wrote:
> * Tong Li <[EMAIL PROTECTED]> wrote:
>
> > > you need to measure it over longer periods of time. It's not worth
> > > balancing for such a thing in any high-frequency manner. (We'd thrash
> > > the cache constantly migrating tasks back and forth.)
On Wed, 2007-07-25 at 16:55 -0400, Chris Snook wrote:
> Chris Friesen wrote:
> > Ingo Molnar wrote:
> >
> >> the 3s is the problem: change that to 60s! We in no way want to
> >> over-migrate for SMP fairness; the change I did gives us reasonable
> >> long-term SMP fairness without the need for high-rate
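The numbers being debated (3s vs. 60s) are just the period of the fairness
rebalance. A minimal user-space sketch of that gating logic, with hypothetical
names and constants (not Ingo's actual patch):

/*
 * Hypothetical sketch: rebalance for long-term SMP fairness only once
 * per FAIRNESS_INTERVAL_SECS, to avoid cache-thrashing migrations.
 */
#include <stdbool.h>
#include <stdio.h>

#define HZ                     1000
#define FAIRNESS_INTERVAL_SECS 60     /* the thread argues for 60s, not 3s */

static unsigned long next_balance;    /* jiffies of the next fairness pass */

static bool should_rebalance(unsigned long now)
{
	if (now < next_balance)
		return false;          /* too soon: skip this tick */
	next_balance = now + FAIRNESS_INTERVAL_SECS * HZ;
	return true;
}

int main(void)
{
	/* Simulate one hour of ticks; expect 60 fairness passes. */
	int passes = 0;

	for (unsigned long jiffies = 0; jiffies < 3600UL * HZ; jiffies++)
		if (should_rebalance(jiffies))
			passes++;
	printf("fairness passes in 1h: %d\n", passes);
	return 0;
}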
On Wed, 2007-07-25 at 14:03 +0200, Ingo Molnar wrote:
> Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
> ---
>  include/linux/sched.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> Index: linux/include/linux/sched.h
> ===================================================================
On Tue, 2007-07-24 at 16:39 -0400, Chris Snook wrote:
> Divining the intentions of the administrator is an AI-complete problem
> and we're not going to try to solve that in the kernel. An intelligent
> administrator could also allocate 50% of each CPU to a resource group
> containing all the
On Tue, 2007-07-24 at 04:07 -0400, Chris Snook wrote:
> To clarify, I'm not suggesting that the "balance with cpu (x+1)%n only"
> algorithm is the only way to do this. Rather, I'm pointing out that
> even an extremely simple algorithm can give you fair loading when you
> already have CFS managing
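As a sanity check of that claim, here is a toy user-space sketch (my
construction, not Chris's code) of the "balance with cpu (x+1)%n only" rule:
each CPU evens out load with its right-hand neighbour only, yet repeated
rounds spread load across the whole ring:

/*
 * Toy sketch of ring balancing: each CPU compares its load with its
 * right-hand neighbour and moves half the imbalance toward the lighter
 * side.  Loads and CPU count are illustrative.
 */
#include <stdio.h>

#define NR_CPUS 8

static int load[NR_CPUS] = { 9, 1, 4, 4, 7, 2, 6, 3 };

static void balance_step(int cpu)
{
	int next = (cpu + 1) % NR_CPUS;
	int diff = load[cpu] - load[next];

	/* Move half the imbalance toward the lighter CPU. */
	load[cpu]  -= diff / 2;
	load[next] += diff / 2;
}

int main(void)
{
	/* A few rounds are enough to spread load ring-wide. */
	for (int round = 0; round < 16; round++)
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			balance_step(cpu);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d: %d\n", cpu, load[cpu]);
	return 0;
}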
I benchmarked an early version of this code (against 2.6.21) with
SPECjbb, SPEComp, kernbench, etc. on an 8-processor system, and didn't
see any slowdown compared to the stock scheduler. I'll generate the data
again with this version of the code. On the other hand, if locking does
become a problem
> -----Original Message-----
> From: Adrian Bunk [mailto:[EMAIL PROTECTED]
> Sent: Sunday, July 15, 2007 4:46 PM
> To: Li, Tong N
> Cc: Giuseppe Bilotta; linux-kernel@vger.kernel.org
> Subject: Re: Re: [ANNOUNCE][RFC] PlugSched-6.5.1 for 2.6.22
>
> On Sun, Jul 15, 2007 at 10:47:51AM -0700, Li, Tong N
> On Thursday 12 July 2007 00:17, Al Boldi wrote:
>
> > Peter Williams wrote:
> > >
> > > Probably the last one now that CFS is in the main line :-(.
> >
> > What do you mean? A pluggable scheduler framework is indispensable
> > even in the presence of CFS or SD.
>
> Indeed, and I hope it gets merged, giving
Mathieu,

> cycles_per_iter = 0.0;
> for (i = 0; i < NR_TESTS; i++) {
>         time1 = get_cycles();
>         for (j = 0; j < NR_ITER; j++) {
>                 testval = array[random() % ARRAY_SIZE];
>         }
>         time2 = get_cycles();
>         cycles_per_iter +=
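For reference, a self-contained version of the loop above with the missing
pieces filled in; get_cycles() is approximated with rdtsc, so this sketch is
x86-only, and all sizes are illustrative:

/*
 * Runnable sketch of the quoted benchmark, with the comparison
 * operators restored and the averaging completed.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define NR_TESTS   10
#define NR_ITER    1000000
#define ARRAY_SIZE (64 * 1024 * 1024)

static inline uint64_t get_cycles(void)
{
	uint32_t lo, hi;
	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	char *array = calloc(ARRAY_SIZE, 1);
	volatile char testval;
	double cycles_per_iter = 0.0;

	for (int i = 0; i < NR_TESTS; i++) {
		uint64_t time1 = get_cycles();
		for (int j = 0; j < NR_ITER; j++)
			testval = array[random() % ARRAY_SIZE];
		uint64_t time2 = get_cycles();
		cycles_per_iter += (double)(time2 - time1) / NR_ITER;
	}
	printf("avg cycles/iter: %.1f\n", cycles_per_iter / NR_TESTS);
	free(array);
	return 0;
}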
> I found that memory latency is difficult to measure on modern x86
> CPUs because they have very clever prefetchers that can often
> outwit benchmarks.

A pointer-chasing program that accesses a random sequence of addresses
usually can produce a good estimate of memory latency. Also, prefetching
can
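A minimal sketch of such a pointer-chaser (my illustration, with arbitrary
sizes): every load depends on the previous one, so the prefetcher cannot run
ahead of the chain:

/*
 * Pointer-chasing latency test: link memory cells into one random
 * cycle, then walk it with serialized dependent loads.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODES (4 * 1024 * 1024)   /* ~32 MB of pointers on 64-bit */
#define STEPS (10 * 1000 * 1000)

int main(void)
{
	void **chain = malloc(NODES * sizeof(void *));
	size_t *order = malloc(NODES * sizeof(size_t));

	/* Fisher-Yates shuffle to build a random permutation. */
	for (size_t i = 0; i < NODES; i++)
		order[i] = i;
	for (size_t i = NODES - 1; i > 0; i--) {
		size_t j = random() % (i + 1);
		size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
	}
	/* Link the cells into one random cycle. */
	for (size_t i = 0; i < NODES; i++)
		chain[order[i]] = &chain[order[(i + 1) % NODES]];

	struct timespec t0, t1;
	void **p = &chain[order[0]];

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (long i = 0; i < STEPS; i++)
		p = *p;                 /* serialized dependent loads */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("%.1f ns/load (p=%p)\n", ns / STEPS, (void *)p);
	return 0;
}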
> Also cache misses in this situation tend to be much more than 48
> cycles (even a K8 with an integrated memory controller and the fastest
> DIMMs is slower than that). Mathieu probably measured an L2 miss, not
> a load from RAM.
> Load from RAM can be hundreds of ns in the worst case.

The 48 cycles
> As far as I know, there are a lot of standalone kernel developers in
> China. They write device drivers for their chips or iptables modules
> for their Linux-based network devices. They send source files to their
> customers or publish them on the web, but seldom do anything to get the
> code into the kernel
more).

Thanks,
tong

> -----Original Message-----
> From: Willy Tarreau [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 05, 2007 8:33 PM
> To: Li, Tong N
> Cc: linux-kernel@vger.kernel.org; Ingo Molnar; Con Kolivas; Linus Torvalds;
> Arjan van de Ven; Siddha, Suresh B; Barnes, Jesse; William Lee Irwin
Hi all,
I've ported my code to mainline 2.6.21.3. You can get it at
http://www.cs.duke.edu/~tongli/linux/. As I said before, the intent of
the patch is not to compete with CFS and SD because the design relies on
the underlying scheduler for interactive performance. The goal here is
to present a
On Fri, 2007-05-25 at 21:44 +0530, Srivatsa Vaddagiri wrote:
> >
> > That assumes per-user scheduling groups; other configurations would
> > make it one step for each level of hierarchy. It may be possible to
> > reduce those steps to only state transitions that change weightings
> > and incremental
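To make the "one step per level" point concrete, here is a toy calculation
(hypothetical numbers and names, not the actual group-scheduling code): a
task's effective share is the product of its weight fraction at each level of
the hierarchy, so adjusting it costs one step per level:

/*
 * Illustrative only: effective CPU share under hierarchical groups
 * is the product of weight fractions along the root-to-task path.
 */
#include <stdio.h>

struct node {
	int weight;            /* this entity's weight inside its parent */
	int siblings_weight;   /* total weight of the parent's children  */
};

int main(void)
{
	/* root -> user group -> task, hypothetical numbers */
	struct node path[] = {
		{ .weight = 1, .siblings_weight = 4 },  /* user gets 1/4 */
		{ .weight = 2, .siblings_weight = 5 },  /* task gets 2/5 */
	};
	double share = 1.0;

	for (unsigned i = 0; i < sizeof(path) / sizeof(path[0]); i++)
		share *= (double)path[i].weight / path[i].siblings_weight;

	printf("effective share: %.3f of one CPU\n", share);  /* 0.100 */
	return 0;
}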
On Mon, 2007-05-07 at 19:52 +0530, Srivatsa Vaddagiri wrote:
> On Thu, May 03, 2007 at 08:53:47AM -0700, William Lee Irwin III wrote:
> > On Thu, May 03, 2007 at 08:23:18PM +0530, Srivatsa Vaddagiri wrote:
> > > And what about group scheduling extensions? Do you have plans to work
> > > on it? I was
On Thu, 2007-05-03 at 08:53 -0700, William Lee Irwin III wrote:
> On Thu, May 03, 2007 at 03:29:32PM +0200, Damien Wyart wrote:
> > What are your thoughts/plans concerning merging CFS into mainline? Is
> > it a bit premature to get it into 2.6.22? I remember Linus was OK to
> > change the default
> Based on my understanding, adopting something like EEVDF in CFS should
> not be very difficult given their similarities, although I do not have
> any idea how this impacts load balancing for SMP. Is this worth a try?
>
> Sorry for such a long email :-)

Thanks for the excellent
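For readers unfamiliar with it, EEVDF (Earliest Eligible Virtual Deadline
First) picks, among tasks whose virtual eligible time has arrived, the one
with the earliest virtual deadline. A toy sketch of just the pick rule
(illustrative names and numbers, not CFS code):

/*
 * Minimal EEVDF pick rule: each task has a virtual eligible time ve
 * and a virtual deadline vd = ve + request/weight.  Among tasks with
 * ve <= current virtual time V, pick the earliest vd.
 */
#include <stdio.h>

struct task {
	const char *name;
	double ve;      /* virtual eligible time */
	double vd;      /* virtual deadline      */
};

static struct task *eevdf_pick(struct task *t, int n, double V)
{
	struct task *best = NULL;

	for (int i = 0; i < n; i++) {
		if (t[i].ve > V)        /* not yet eligible */
			continue;
		if (!best || t[i].vd < best->vd)
			best = &t[i];
	}
	return best;
}

int main(void)
{
	struct task tasks[] = {
		{ "A", 0.0, 2.0 },
		{ "B", 0.0, 1.0 },
		{ "C", 3.0, 3.5 },      /* ineligible at V = 1.0 */
	};
	struct task *next = eevdf_pick(tasks, 3, 1.0);

	printf("pick: %s\n", next ? next->name : "none");  /* B */
	return 0;
}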
On Wed, 2007-04-25 at 22:13 +0200, Willy Tarreau wrote:
> On Wed, Apr 25, 2007 at 04:58:40AM -0700, William Lee Irwin III wrote:
>
> > Adjustments to the lag computation for arrivals and departures
> > during execution are among the missing pieces. Some algorithmic
> > devices are also needed to
will consider an algorithm to be fair as long as the second metric is
bounded by a constant.

> On Mon, Apr 23, 2007 at 05:59:06PM -0700, Li, Tong N wrote:
> > I understand that via experiments we can show a design is reasonably
> > fair in the common case, but IMHO, to claim that a design is fair,
> > there needs to be some
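One standard way to state that bound (my formulation of the usual
proportional-share definition, not necessarily the exact metric meant above):
with task weights w_i and actual CPU service s_i(t),

\[
  \mathrm{lag}_i(t) = \frac{w_i}{\sum_j w_j}\, t - s_i(t),
  \qquad
  \text{fair} \iff \exists C:\ \bigl|\mathrm{lag}_i(t)\bigr| \le C
  \quad \forall i,\, t .
\]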
On Mon, 2007-04-23 at 18:57 -0700, Bill Huey wrote:
> On Mon, Apr 23, 2007 at 05:59:06PM -0700, Li, Tong N wrote:
> > I don't know if we've discussed this or not. Since both CFS and SD claim
> > to be fair, I'd like to hear more opinions on the fairness aspect of
> > these designs. In areas such as OS
I don't know if we've discussed this or not. Since both CFS and SD claim
to be fair, I'd like to hear more opinions on the fairness aspect of
these designs. In areas such as OS, networking, and real-time, fairness,
and its more general form, proportional fairness, are well-defined
terms. In fact,