* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > [...] The timeslices of tasks (i.e. the time they spend on a CPU
> > without scheduling away) is _not_ maintained directly in CFS as a
> > per-task variable that can be "cleared", it's not the metric that
> > drives scheduling. Yes, of course
Ingo Molnar wrote:
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
[...] (Btw, in -rc8-mm2 I see new sched_slice() function which seems
to return... time.)
wrong again. That is a function, not a variable to be cleared.
It still gives us a target time, so could we not simply have
On Mon, 2007-10-01 at 09:49 -0700, David Schwartz wrote:
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > > BTW, it looks like risky to criticise sched_yield too much: some
> > > people can misinterpret such discussions and stop using this at all,
> > > even where it's right.
>
> >
On Wed, Oct 03, 2007 at 12:55:34PM +0200, Dmitry Adamushko wrote:
...
> just a quick patch, not tested and I've not evaluated all possible
> implications yet.
> But someone might give it a try with his/(her -- are even more
> welcomed :-) favourite sched_yield() load.
Of course, after some
David Schwartz wrote:
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
BTW, it looks like risky to criticise sched_yield too much: some
people can misinterpret such discussions and stop using this at all,
even where it's right.
Really, i have never seen a _single_ mainstream app
* Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> + se->vruntime += delta_exec_weighted;
thanks Dmitry.
Btw., this is quite similar to the yield_granularity patch i did
originally, just less flexible. It turned out that apps want either zero
granularity or "infinite"
On Wed, Oct 03, 2007 at 12:58:26PM +0200, Dmitry Adamushko wrote:
> On 03/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> > On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > > I can't see anything about clearing. I think, this was about charging,
> > > which should change the
On 03/10/2007, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > I can't see anything about clearing. I think, this was about charging,
> > which should change the key enough, to move a task to, maybe, a better
> > place in a que (tree)
On 03/10/2007, Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> I can't see anything about clearing. I think, this was about charging,
> which should change the key enough, to move a task to, maybe, a better
> place in a que (tree) than with current ways.
just a quick patch, not tested and I've not
On Wed, Oct 03, 2007 at 11:10:58AM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Oct 03, 2007 at 10:16:13AM +0200, Ingo Molnar wrote:
> > >
> > > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > >
> > > > > firstly, there's no notion of "timeslices"
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 03, 2007 at 10:16:13AM +0200, Ingo Molnar wrote:
> >
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > > > firstly, there's no notion of "timeslices" in CFS. (in CFS tasks
> > > > "earn" a right to the CPU, and that "right" is
On Wed, Oct 03, 2007 at 10:16:13AM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > > firstly, there's no notion of "timeslices" in CFS. (in CFS tasks
> > > "earn" a right to the CPU, and that "right" is not sliced in the
> > > traditional sense) But we tried a
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > firstly, there's no notion of "timeslices" in CFS. (in CFS tasks
> > "earn" a right to the CPU, and that "right" is not sliced in the
> > traditional sense) But we tried a conceptually similar thing [...]
>
> From kernel/sched_fair.c:
>
> "/*
On 02-10-2007 08:06, Ingo Molnar wrote:
> * David Schwartz <[EMAIL PROTECTED]> wrote:
...
>> I'm not familiar enough with CFS' internals to help much on the
>> implementation, but there may be some simple compromise yield that
>> might work well enough. How about simply acting as if the task
On 02-10-2007 17:37, David Schwartz wrote:
...
> So now I not only have to come up with an example where sched_yield is the
> best practical choice, I have to come up with one where sched_yield is the
> best conceivable choice? Didn't we start out by agreeing these are very rare
> cases? Why are
This is a combined response to Arjan's:
> that's also what trylock is for... as well as spinaphores...
> (you can argue that futexes should be more intelligent and do
> spinaphore stuff etc... and I can buy that, lets improve them in the
> kernel by any means. But userspace yield() isn't the
On Tue, Oct 02, 2007 at 11:03:46AM +0200, Jarek Poplawski wrote:
...
> should suffice. Currently, I wonder if simply charging (with a key
> recalculated) such a task for all the time it could've used isn't one
> of such methods. It seems, it's functionally analogous with going to
> the end of que
On Mon, Oct 01, 2007 at 10:43:56AM +0200, Jarek Poplawski wrote:
...
> etc., if we know (after testing) eg. average expedition time of such
No new theory - it's only my reverse Polish translation. Should be:
"etc., if we know (after testing) eg. average dispatch time of such".
Sorry,
Jarek P.
-
On Mon, Oct 01, 2007 at 06:25:07PM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > BTW, it looks like risky to criticise sched_yield too much: some
> > people can misinterpret such discussions and stop using this at all,
> > even where it's right.
>
> Really,
Ingo Molnar <[EMAIL PROTECTED]> writes:
> * David Schwartz <[EMAIL PROTECTED]> wrote:
>
> > > These are generic statements, but i'm _really_ interested in the
> > > specifics. Real, specific code that i can look at. The typical Linux
> > > distro consists of in excess of 500 millions of lines
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > at a quick glance this seems broken too - but if you show the
> > specific code i might be able to point out the breakage in detail.
> > (One underlying problem here appears to be fairness: a quick
> > unlock/lock sequence may starve out other
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > (user-space spinlocks are broken beyond words for anything but
> > perhaps SCHED_FIFO tasks.)
>
> User-space spinlocks are broken so spinlocks can only be implemented
> in kernel-space? Even if you use the kernel to schedule/unschedule the
>
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > These are generic statements, but i'm _really_ interested in the
> > specifics. Real, specific code that i can look at. The typical Linux
> > distro consists of in excess of 500 millions of lines of code, in
> > tens of thousands of apps, so
On Mon, 1 Oct 2007 15:44:09 -0700
"David Schwartz" <[EMAIL PROTECTED]> wrote:
>
> > yielding IS blocking. Just with indeterminate fuzzyness added to
> > it
>
> Yielding is sort of blocking, but the difference is that yielding
> will not idle the CPU while blocking might.
not really;
> yielding IS blocking. Just with indeterminate fuzzyness added to it
Yielding is sort of blocking, but the difference is that yielding will not
idle the CPU while blocking might. Yielding is sometimes preferable to
blocking in a case where the thread knows it can make forward progress even
On Mon, 1 Oct 2007 15:17:52 -0700
"David Schwartz" <[EMAIL PROTECTED]> wrote:
>
> Arjan van de Ven wrote:
>
> > > It can occasionally be an optimization. You may have a case where
> > > you can do something very efficiently if a lock is not held, but
> > > you cannot afford to wait for the lock
Arjan van de Ven wrote:
> > It can occasionally be an optimization. You may have a case where you
> > can do something very efficiently if a lock is not held, but you
> > cannot afford to wait for the lock to be released. So you check the
> > lock, if it's held, you yield and then check again.
Ingo Molnar wrote:
>
> Really, i have never seen a _single_ mainstream app where the use of
> sched_yield() was the right choice.
Pliant 'FastSem' semaphore implementation (as opposed to 'Sem') uses 'yield'
http://old.fullpliant.org/
Basically, if the resource you are protecting with the
On Mon, 1 Oct 2007 09:49:35 -0700
"David Schwartz" <[EMAIL PROTECTED]> wrote:
>
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > > BTW, it looks like risky to criticise sched_yield too much: some
> > > people can misinterpret such discussions and stop using this at
> > > all, even where
> These are generic statements, but i'm _really_ interested in the
> specifics. Real, specific code that i can look at. The typical Linux
> distro consists of in excess of 500 millions of lines of code, in tens
> of thousands of apps, so there really must be some good, valid and
> "right" use of
Ingo Molnar wrote:
* Chris Friesen <[EMAIL PROTECTED]> wrote:
However, there are closed-source and/or frozen-source apps where it's
not practical to rewrite or rebuild the app. Does it make sense to
break the behaviour of all of these?
See the background and answers to that in:
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > > BTW, it looks like risky to criticise sched_yield too much: some
> > > people can misinterpret such discussions and stop using this at
> > > all, even where it's right.
>
> > Really, i have never seen a _single_ mainstream app where the use of
* Chris Friesen <[EMAIL PROTECTED]> wrote:
> Ingo Molnar wrote:
>
> >But, because you assert it that it's risky to "criticise sched_yield()
> >too much", you sure must know at least one real example where it's right
> >to use it (and cite the line and code where it's used, with
>
Ingo Molnar wrote:
But, because you assert it that it's risky to "criticise sched_yield()
too much", you sure must know at least one real example where it's right
to use it (and cite the line and code where it's used, with
specificity)?
It's fine to criticise sched_yield(). I agree that
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> BTW, it looks like risky to criticise sched_yield too much: some
> people can misinterpret such discussions and stop using this at all,
> even where it's right.
Really, i have never seen a _single_ mainstream app where the use of
sched_yield()
On Fri, Sep 28, 2007 at 04:10:00PM +1000, Nick Piggin wrote:
> On Friday 28 September 2007 00:42, Jarek Poplawski wrote:
> > On Thu, Sep 27, 2007 at 03:31:23PM +0200, Ingo Molnar wrote:
> > > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> >
> > ...
> >
> > > > OK, but let's forget about fixing
On Friday 28 September 2007 00:42, Jarek Poplawski wrote:
> On Thu, Sep 27, 2007 at 03:31:23PM +0200, Ingo Molnar wrote:
> > * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> ...
>
> > > OK, but let's forget about fixing iperf. Probably I got this wrong,
> > > but I've thought this "bad" iperf
On Thu, Sep 27, 2007 at 03:31:23PM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
...
> > OK, but let's forget about fixing iperf. Probably I got this wrong,
> > but I've thought this "bad" iperf patch was tested on a few nixes and
> > linux was the most different
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 27, 2007 at 11:46:03AM +0200, Ingo Molnar wrote:
[...]
> > What you missed is that there is no such thing as "predictable yield
> > behavior" for anything but SCHED_FIFO/RR tasks (for which tasks CFS does
> > keep the behavior). Please
On Thu, Sep 27, 2007 at 11:46:03AM +0200, Ingo Molnar wrote:
>
> * Jarek Poplawski <[EMAIL PROTECTED]> wrote:
>
> > > the (small) patch below fixes the iperf locking bug and removes the
> > > yield() use. There are numerous immediate benefits of this patch:
> > ...
> > >
> > > sched_yield() is
* Ingo Molnar <[EMAIL PROTECTED]> [2007-09-27 12:56]:
> i'm curious by how much does CPU go down, and what's the output of
> iperf? (does it saturate full 100mbit network bandwidth)
I get about 94-95 Mbits/sec and CPU drops from 99% to about 82% (this
is with a 600 MHz ARM CPU).
--
Martin
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> [2007-09-27 11:49]:
> > Martin, could you check the iperf patch below instead of the yield
> > patch - does it solve the iperf performance problem equally well,
> > and does CPU utilization drop for you too?
>
>
* Ingo Molnar <[EMAIL PROTECTED]> [2007-09-27 11:49]:
> Martin, could you check the iperf patch below instead of the yield
> patch - does it solve the iperf performance problem equally well,
> and does CPU utilization drop for you too?
Yes, it works and CPU goes down too.
--
Martin Michlmayr
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> > I think the real fix would be for iperf to use blocking network IO
> > though, or maybe to use a POSIX mutex or POSIX semaphores.
>
> So it's definitely not a bug in the kernel, only in iperf?
>
> (CCing Stephen Hemminger who wrote the iperf
* Jarek Poplawski <[EMAIL PROTECTED]> wrote:
> > the (small) patch below fixes the iperf locking bug and removes the
> > yield() use. There are numerous immediate benefits of this patch:
> ...
> >
> > sched_yield() is almost always the symptom of broken locking or other
> > bug. In that sense
On 26-09-2007 15:31, Ingo Molnar wrote:
> * David Schwartz <[EMAIL PROTECTED]> wrote:
>
I think the real fix would be for iperf to use blocking network IO
though, or maybe to use a POSIX mutex or POSIX semaphores.
>>> So it's definitely not a bug in the kernel, only in iperf?
>>
Here is the combined fixes from iperf-users list.
Begin forwarded message:
Date: Thu, 30 Aug 2007 15:55:22 -0400
From: "Andrew Gallatin" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: [PATCH] performance fixes for non-linux
Hi,
I've attached a patch which gives iperf similar performance
On Wed, 26 Sep 2007 15:31:38 +0200
Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * David Schwartz <[EMAIL PROTECTED]> wrote:
>
> > > > I think the real fix would be for iperf to use blocking network
> > > > IO though, or maybe to use a POSIX mutex or POSIX semaphores.
> > >
> > > So it's
* David Schwartz <[EMAIL PROTECTED]> wrote:
> > > I think the real fix would be for iperf to use blocking network IO
> > > though, or maybe to use a POSIX mutex or POSIX semaphores.
> >
> > So it's definitely not a bug in the kernel, only in iperf?
>
> Martin:
>
> Actually, in this case I think iperf is doing the right thing (though not
> the best thing)
* Ingo Molnar <[EMAIL PROTECTED]> [2007-09-26 13:21]:
> > > I noticed on the iperf website a patch which contains sched_yield().
> > > http://dast.nlanr.net/Projects/Iperf2.0/patch-iperf-linux-2.6.21.txt
>
> great! Could you try this too:
>echo 1 > /proc/sys/kernel/sched_compat_yield
>
>
* Martin Michlmayr <[EMAIL PROTECTED]> wrote:
> * Mike Galbraith <[EMAIL PROTECTED]> [2007-09-26 12:23]:
> > I noticed on the iperf website a patch which contains sched_yield().
> > http://dast.nlanr.net/Projects/Iperf2.0/patch-iperf-linux-2.6.21.txt
> >
> > Do you have that patch applied by