Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-21 Thread Peter Williams
Con Kolivas wrote:
> Marc E. Fiuczynski wrote:
> > Paraphrasing Jens Axboe:
> > > I don't think you can compare [plugsched with the plugio framework].
> > > Yes they are both schedulers, but that's about where the 'similarity'
> > > stops. The CPU scheduler must be really fast, overhead must be kept
> > > to a minimum. For a disk scheduler, we can afford to burn cpu cycles
> > > to increase the io performance. The extra abstraction required to
> > > fully modularize the cpu scheduler would come at a non-zero cost as
> > > well, but I bet it would have a larger impact there. I doubt you
> > > could measure the difference in the disk scheduler.
> > 
> > Modularization usually is done through a level of indirection (function
> > pointers).  I have a can of "indirection be gone" almost ready to spray
> > over the plugsched framework that would reduce the overhead to zero at
> > runtime.  I'd be happy to finish that work if it makes it more palatable
> > to integrate a plugsched framework into the kernel?
> 
> The indirection was a minor point. It was suggested by wli that, on
> modern cpus, this would not be a demonstrable hit in performance. Having
> said that, I'm sure Peter would be happy to have another developer. I
> know how tiring and lonely it can feel maintaining such a monster.

Indeed, the more hands the lighter the load.

Another issue (other than indirection) that I think needs to be
addressed at some stage is freeing up the memory occupied by the code of
the schedulers that were unlucky not to be picked.  Something like what
__init offers, only more selective.
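
A minimal sketch of what I mean (purely illustrative -- the section
name, the __sched_code() macro and free_unused_scheds() below are
hypothetical, not part of the plugsched patches):

/* Tag each scheduler's functions into a per-scheduler section,
 * the way __init tags boot-only code into .init.text. */
#define __sched_code(name) \
        __attribute__((__section__(".sched.text." #name)))

static int __sched_code(staircase) staircase_tick(void) { return 0; }
static int __sched_code(zaphod)    zaphod_tick(void)    { return 0; }

/* Once the boot-time "cpusched=" choice is made, a hypothetical
 * free_unused_scheds() could return every .sched.text.* region except
 * the winner's to the page allocator, just as free_initmem() does for
 * .init.text -- the difference being that what gets discarded is
 * decided at run time rather than unconditionally. */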

And the option of allowing more than one CPU per run queue is another
direction that needs addressing.  This could allow a better balance
between the good scheduling fairness that is obtained by using a single
run queue and the better scalability obtained by using separate run
queues.
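
Again, just to make the direction concrete (the struct and field names
are hypothetical, not from the patches): with a CPU mask per run queue,
the per-CPU and single-global-queue designs become the two extremes of
one knob.

/* A run queue serving a set of CPUs rather than exactly one.
 * One bit set per queue  -> today's scalable per-CPU layout;
 * all bits in one queue  -> the fair single-queue layout;
 * anything in between trades one property against the other. */
struct shared_runqueue {
        spinlock_t       lock;        /* contended by all CPUs in 'cpus' */
        cpumask_t        cpus;        /* CPUs that draw tasks from here */
        unsigned long    nr_running;  /* runnable tasks on this queue */
        struct list_head queue;       /* e.g. one list per priority level */
};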

Peter
--
Peter Williams   [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce


Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-21 Thread Con Kolivas
Marc E. Fiuczynski wrote:
> Paraphrasing Jens Axboe:
> > I don't think you can compare [plugsched with the plugio framework].
> > Yes they are both schedulers, but that's about where the 'similarity'
> > stops. The CPU scheduler must be really fast, overhead must be kept
> > to a minimum. For a disk scheduler, we can afford to burn cpu cycles
> > to increase the io performance. The extra abstraction required to
> > fully modularize the cpu scheduler would come at a non-zero cost as
> > well, but I bet it would have a larger impact there. I doubt you
> > could measure the difference in the disk scheduler.
> 
> Modularization usually is done through a level of indirection (function
> pointers).  I have a can of "indirection be gone" almost ready to spray over
> the plugsched framework that would reduce the overhead to zero at runtime.
> I'd be happy to finish that work if it makes it more palatable to integrate a
> plugsched framework into the kernel?

The indirection was a minor point. It was suggested by wli that, on
modern cpus, this would not be a demonstrable hit in performance. Having
said that, I'm sure Peter would be happy to have another developer. I
know how tiring and lonely it can feel maintaining such a monster.

Cheers,
Con




RE: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-21 Thread Marc E. Fiuczynski
Paraphrasing Jens Axboe:
> I don't think you can compare [plugsched with the plugio framework].
> Yes they are both schedulers, but that's about where the 'similarity'
> stops. The CPU scheduler must be really fast, overhead must be kept
> to a minimum. For a disk scheduler, we can afford to burn cpu cycles
> to increase the io performance. The extra abstraction required to
> fully modularize the cpu scheduler would come at a non-zero cost as
> well, but I bet it would have a larger impact there. I doubt you
> could measure the difference in the disk scheduler.

Modularization usually is done through a level of indirection (function
pointers).  I have a can of "indirection be gone" almost ready to spray over
the plugsched framework that would reduce the overhead to zero at runtime.
I'd be happy to finish that work if it makes it more palatable to integrate a
plugsched framework into the kernel?
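
To make that concrete, here is a minimal sketch assuming an ops table
like plugsched's (struct sched_ops, CONFIG_CPUSCHED_MODULAR and
ingo_enqueue() are names I am making up for illustration, not the
actual interface):

/* Modular form: every hot-path call goes through a function pointer. */
struct sched_ops {
        void (*enqueue)(struct task_struct *p);
        struct task_struct *(*pick_next)(void);
};
static struct sched_ops *sched;   /* selected by "cpusched=" at boot */

#ifdef CONFIG_CPUSCHED_MODULAR
#define sched_enqueue(p)  sched->enqueue(p)   /* indirect call */
#else
#define sched_enqueue(p)  ingo_enqueue(p)     /* direct call, no overhead */
#endif

The "indirection be gone" pass would make the second form the common
case, so that a kernel configured with a single scheduler pays nothing
at run time for the abstraction.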

Marc



Re: [ckrm-tech] RE: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-21 Thread Shailabh Nagar
Marc E. Fiuczynski wrote:
> Hi Peter,
> 
> > I'm hoping that the CKRM folks will send me a patch to add their
> > scheduler to plugsched :-)
> 
> They are planning to release a patch against 2.6.10.  But their patch won't
> stand alone against 2.6.10 and so it might be difficult for you to integrate
> their code into a scheduler for plugsched.

That's true. The current CKRM CPU scheduler is not a standalone
component... if it were made one, it would need a non-CKRM interface to
define classes, set their shares, etc.
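
Roughly this shape, as a purely hypothetical sketch (none of these
functions exist; they only illustrate the minimal non-CKRM surface such
a scheduler would need):

struct cpu_class;                  /* an opaque share class */
struct cpu_class *cpu_class_create(const char *name);
int cpu_class_set_share(struct cpu_class *cls, unsigned int share);
int cpu_class_attach(struct cpu_class *cls, struct task_struct *p);

plus whatever /proc or sysfs front end is used to drive it.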

However, we have not investigated the possibility of making our CPU
scheduler a pluggable one that could be loaded into a kernel equipped
with the plugsched patches AND the CKRM framework. This should be
possible, but it is not a high priority until there is more consensus
for having CPU schedulers pluggable at all (we have more basic stuff to
fix in our scheduler, such as load balancing).

Of course, we're more than happy to work with someone willing to chip
in and make our scheduler pluggable.

-- Shailabh
> Also, the CKRM scheduler only modifies Ingo's O(1) scheduler.  It certainly
> would be interesting to have CKRM variants of the other schedulers.  This
> points to a whole new level of 'plugsched' in that general O(1) schedulers
> need to support fair share plugins.
> 
> Marc



Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-21 Thread Jens Axboe
On Thu, Jan 20 2005, [EMAIL PROTECTED] wrote:
> On Thu, 20 Jan 2005 11:14:48 EST, "Marc E. Fiuczynski" said:
> > Peter, thank you for maintaining Con's plugsched code in light of Linus' and
> > Ingo's prior objections to this idea.  On the one hand, I partially agree
> > with Linus's prior view that when there is only one scheduler, the
> > rest of the world + dog will focus on making it better. On the other hand,
> > having a clean framework that lets developers plug in new schedulers
> > in a clean way is quite useful.
> > 
> > Linus & Ingo, it would be good to have an in-depth discussion on this topic.
> > I'd argue that the Linux kernel NEEDS a clean pluggable scheduling
> > framework.
> 
> Is this something that would benefit from several trips around the -mm
> series?
> 
> ISTR that we started with one disk elevator, and now we have 3 or 4
> that are selectable on the fly after some banging around in -mm.  (And
> yes, I realize that the only reason we can change the elevator on the
> fly is because it can switch from the current to the 'stupid FIFO
> none' elevator and thence to the new one, which wouldn't really work
> for the CPU scheduler)

I don't think you can compare the two. Yes they are both schedulers, but
that's about where the 'similarity' stops. The CPU scheduler must be
really fast, overhead must be kept to a minimum. For a disk scheduler,
we can afford to burn cpu cycles to increase the io performance. The
extra abstraction required to fully modularize the cpu scheduler would
come at a non-zero cost as well, but I bet it would have a larger impact
there. I doubt you could measure the difference in the disk scheduler.

There are vast differences between io storage devices; that is why we
have different io schedulers. I made those modular so that the desktop
user didn't have to incur the cost of having 4 schedulers when he only
really needs one.
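
(For reference -- command lines for illustration only; device names and
the scheduler list will vary with your config: the boot-time default
comes from the "elevator=" parameter, and on kernels with the on-the-fly
switch each queue can be changed through sysfs:

    elevator=deadline                        # boot parameter: default for all queues

    $ cat /sys/block/hda/queue/scheduler     # brackets mark the active one
    noop [anticipatory] deadline cfq
    $ echo cfq > /sys/block/hda/queue/scheduler

Only the schedulers you configured in show up, which is the point.)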

> All the arguments that support having more than one elevator apply
> equally well to the CPU scheduler.

Not at all, imho. It's two completely different problems.

-- 
Jens Axboe

RE: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-20 Thread Marc E. Fiuczynski
Hi Peter,

> I'm hoping that the CKRM folks will send me a patch to add their
> scheduler to plugsched :-)

They are planning to release a patch against 2.6.10.  But their patch won't
stand alone against 2.6.10 and so it might be difficult for you to integrate
their code into a scheduler for plugsched.

Also, the CKRM scheduler only modifies Ingo's O(1) scheduler.  It certainly
would be interesting to have CKRM variants of the other schedulers.  This
points to a whole new level of 'plugsched' in that general O(1) schedulers
need to support fair share plugins.

Marc



Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-20 Thread Peter Williams
Marc E. Fiuczynski wrote:
> Peter, thank you for maintaining Con's plugsched code in light of Linus' and
> Ingo's prior objections to this idea.  On the one hand, I partially agree
> with Linus's prior view that when there is only one scheduler, the
> rest of the world + dog will focus on making it better. On the other hand,
> having a clean framework that lets developers plug in new schedulers
> in a clean way is quite useful.
> 
> Linus & Ingo, it would be good to have an in-depth discussion on this topic.
> I'd argue that the Linux kernel NEEDS a clean pluggable scheduling
> framework.
> 
> Let me make a case for this NEED by example.  Ingo's scheduler belongs to
> the egalitarian regime of schedulers that do a poor job of isolating
> workloads from each other in multiprogrammed environments such as those
> found on Enterprise servers and in my case on PlanetLab (www.planet-lab.org)
> nodes.  This has been rectified by HP-UX, Solaris, and AIX through the use
> of fair share schedulers that use O(1) schedulers within a share.  Currently
> PlanetLab uses a CKRM modified version of Ingo's scheduler.

I'm hoping that the CKRM folks will send me a patch to add their 
scheduler to plugsched :-)

> Similarly, the linux-vserver project also modifies Ingo's scheduler to
> construct an entitlement based scheduling regime. These are not just
> variants of O(1) schedulers in the sense of Con's staircase O(1). Nor is
> it clear what the best type of scheduler is for these environments
> (i.e., HP-UX, Solaris and AIX don't have it fully solved yet either).
> The ability to dynamically swap out schedulers on a production system
> like PlanetLab would help in determining what type of scheduler is the
> most appropriate.  This is because it is non-trivial, if not impossible,
> to recreate in a lab the multiprogrammed workloads that we see.
> 
> For these reasons, it would be useful for plugsched (or something like it)
> to make its way into the mainline kernel as a framework to plug in different
> schedulers.  Alternatively, it would be useful to consider in what way
> Ingo's scheduler needs to support plugins such as the CKRM and Vserver types
> of changes.
> 
> Best regards,
> Marc

--
Peter Williams   [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce


Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-20 Thread Valdis Kletnieks
On Thu, 20 Jan 2005 11:14:48 EST, "Marc E. Fiuczynski" said:
> Peter, thank you for maintaining Con's plugsched code in light of Linus' and
> Ingo's prior objections to this idea.  On the one hand, I partially agree
> with Linus's prior view that when there is only one scheduler, the
> rest of the world + dog will focus on making it better. On the other hand,
> having a clean framework that lets developers plug in new schedulers
> in a clean way is quite useful.
> 
> Linus & Ingo, it would be good to have an in-depth discussion on this topic.
> I'd argue that the Linux kernel NEEDS a clean pluggable scheduling
> framework.

Is this something that would benefit from several trips around the -mm series?

ISTR that we started with one disk elevator, and now we have 3 or 4 that are
selectable on the fly after some banging around in -mm.  (And yes, I realize
that the only reason we can change the elevator on the fly is because it can
switch from the current to the 'stupid FIFO none' elevator and thence to the
new one, which wouldn't really work for the CPU scheduler)

All the arguments that support having more than one elevator apply equally
well to the CPU scheduler.




RE: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-20 Thread Marc E. Fiuczynski
Peter, thank you for maintaining Con's plugsched code in light of Linus' and
Ingo's prior objections to this idea.  On the one hand, I partially agree
with Linus's prior view that when there is only one scheduler, the
rest of the world + dog will focus on making it better. On the other hand,
having a clean framework that lets developers plug in new schedulers
in a clean way is quite useful.

Linus & Ingo, it would be good to have an in-depth discussion on this topic.
I'd argue that the Linux kernel NEEDS a clean pluggable scheduling
framework.

Let me make a case for this NEED by example.  Ingo's scheduler belongs to
the egalitarian regime of schedulers that do a poor job of isolating
workloads from each other in multiprogrammed environments such as those
found on Enterprise servers and in my case on PlanetLab (www.planet-lab.org)
nodes.  This has been rectified by HP-UX, Solaris, and AIX through the use
of fair share schedulers that use O(1) schedulers within a share.  Currently
PlanetLab uses a CKRM modified version of Ingo's scheduler.  Similarly, the
linux-vserver project also modifies Ingo's scheduler to construct an
entitlement based scheduling regime. These are not just variants of O(1)
schedulers in the sense of Con's staircase O(1). Nor is it clear what the
best type of scheduler is for these environments (i.e., HP-UX, Solaris and
AIX don't have it fully solved yet either). The ability to dynamically swap
out schedulers on a production system like PlanetLab would help in
determining what type of scheduler is the most appropriate.  This is because
it is non-trivial, if not impossible, to recreate in a lab the
multiprogrammed workloads that we see.

For these reasons, it would be useful for plugsched (or something like it)
to make its way into the mainline kernel as a framework to plug in different
schedulers.  Alternatively, it would be useful to consider in what way
Ingo's scheduler needs to support plugins such as the CKRM and Vserver types
of changes.

Best regards,
Marc

Re: [ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-19 Thread Kasper Sandberg
It's nice to see that this project is not dead after all :DD

On Thu, 2005-01-20 at 12:23 +1100, Peter Williams wrote:
> ... are now available from:
> 
> http://prdownloads.sourceforge.net/cpuse/plugsched-2.0-for-2.6.10.patch?download
> 
> as a single patch to linux-2.6.10 and at:
> 
> http://prdownloads.sourceforge.net/cpuse/plugsched-2.0-for-2.6.10.patchset.tar.gz?download
> 
> <snip>
> 
> Peter



[ANNOUNCE][RFC] plugsched-2.0 patches ...

2005-01-19 Thread Peter Williams
... are now available from:

http://prdownloads.sourceforge.net/cpuse/plugsched-2.0-for-2.6.10.patch?download

as a single patch to linux-2.6.10 and at:

http://prdownloads.sourceforge.net/cpuse/plugsched-2.0-for-2.6.10.patchset.tar.gz?download

as a (gzipped and tarred) patch set including a "series" file which 
nominates the order of application of the patches.

This is an update of the earlier version of plugsched (previously 
released by Con Kolivas) and has a considerably modified scheduler 
interface that is intended to reduce the amount of code duplication 
required when adding a new scheduler.  It also contains a sysfs 
interface based on work submitted by Chris Han.

This version of plugsched contains 4 schedulers:
1. "ingosched" which is the standard active/expired array O(1) scheduler 
created by Ingo Molnar,
2. "staircase" which is Con Kolivas's version 10.5 O(1) staircase scheduler,
3. "spa_no_frills" which is a single priority array O(1) scheduler 
without any interactive response enhancements, etc., and
4. "zaphod" which is a single priority array O(1) scheduler with 
interactive response bonuses, throughput bonuses and a choice of 
priority based or entitlement based interpretation of "nice".

Schedulers 3 and 4 also offer unprivileged real time tasks and hard/soft 
per task CPU rate caps.

The required scheduler can be selected at boot time by supplying a 
string of the form "cpusched=<name>" where <name> is one of the names 
listed above.

The default scheduler (that will be used in the absence of a "cpusched" 
boot argument) can be configured at build time and is set to "ingosched" 
by default.

The file /proc/scheduler contains a string describing the current scheduler.
The directory /sys/cpusched/<current scheduler name>/ contains any 
scheduler configuration control files that may apply to the current 
scheduler.
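
For example (illustrative command lines; "zaphod" here assumes that 
scheduler was configured in):

    cpusched=zaphod               # appended to the kernel boot line

    $ cat /proc/scheduler         # confirm which scheduler is running
    $ ls /sys/cpusched/zaphod/    # its configuration control files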

Peter
--
Peter Williams   [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce