On Wed, May 30, 2007 at 09:09:26PM -0700, William Lee Irwin III wrote:
It's not all that tricky.
On Thu, May 31, 2007 at 11:18:28AM +0530, Srivatsa Vaddagiri wrote:
Hmm ..the fact that each task runs for a minimum of 1 tick seems to
complicate matters for me (when doing group fairness
On Thu, May 31, 2007 at 02:03:53PM +0530, Srivatsa Vaddagiri wrote:
Its -wait_runtime will drop less significantly, which lets it be
inserted in the rb-tree well to the left of those 1000 tasks (and which
indirectly lets it gain back its fair share during subsequent
schedule cycles).
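The ordering being described can be sketched roughly as follows. This is not the kernel's actual data structure (CFS uses an rb-tree of sched entities); it is a minimal Python stand-in where tasks are sorted by a key derived from -wait_runtime, so the task owed the most CPU time sorts leftmost and is picked next:

```python
# Hypothetical sketch, not kernel code: a priority queue keyed by
# -wait_runtime stands in for the CFS rb-tree.  A task that slept while
# 1000 CPU hogs ran keeps a large wait_runtime, so its key is small and
# it sorts well to the left of (ahead of) all of them.
import heapq

def make_task(name, wait_runtime):
    # key = -wait_runtime: the more CPU time a task is owed,
    # the further left (smaller key) it lands in the ordering
    return (-wait_runtime, name)

runqueue = []
for i in range(1000):
    heapq.heappush(runqueue, make_task(f"hog{i}", wait_runtime=1))

# the sleeper is owed far more CPU time than any of the hogs
heapq.heappush(runqueue, make_task("sleeper", wait_runtime=500))

key, name = heapq.heappop(runqueue)  # leftmost task runs next
print(name)  # -> sleeper
```

Picking the leftmost entry each time is how the sleeper regains its fair share over subsequent schedule cycles.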
Hmm ..is
On Wed, May 30, 2007 at 12:14:55AM -0700, Andrew Morton wrote:
So how do we do this?
Is there any sneaky way in which we can modify the kernel so that this new
code gets exercised more? Obviously, tossing init into some default
system-wide container would be a start. But I wonder if we can
On Sat, May 26, 2007 at 08:41:12AM -0700, William Lee Irwin III wrote:
The smpnice affair is better phrased in terms of task weighting. It's
simple to honor nice in such an arrangement. First unravel the
grouping hierarchy, then weight by nice. This looks like
[...]
conveniently collapse to 1
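The "unravel the hierarchy, then weight by nice" idea can be sketched as a product of per-level fractions. All names and the ~1.25x-per-nice-level weight ratio below are illustrative assumptions (mainline CFS uses a similar ratio), not the actual formulation from the mail:

```python
# Hypothetical sketch: a task's global CPU share is the product of its
# group's fraction at every level of the hierarchy, with the leaf-level
# fraction computed from nice-derived weights.  With a single
# all-encompassing group every hierarchy factor is 1, and the whole
# expression collapses to plain per-task nice weighting.

def nice_to_weight(nice):
    # assumption: weight shrinks ~1.25x per nice level, nice 0 -> 1024
    return 1024 / (1.25 ** nice)

def global_share(path_fractions, nice, peer_weights):
    """path_fractions: this task's group's share at each enclosing level;
    peer_weights: nice-weights of the other tasks in the leaf group."""
    w = nice_to_weight(nice)
    share = w / (w + sum(peer_weights))
    for f in path_fractions:
        share *= f
    return share

# trivial hierarchy: one group containing two equal nice-0 tasks,
# so the hierarchy factors collapse to 1 and each task gets 0.5
flat = global_share([1.0], nice=0, peer_weights=[nice_to_weight(0)])
print(flat)  # -> 0.5
```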
On Wed, May 30, 2007 at 01:13:59PM -0700, William Lee Irwin III wrote:
The step beyond was to show how nice numbers can be done with all that
hierarchical task grouping so they have global effects instead of
effects limited to the scope of the narrowest grouping hierarchy
containing the task
William Lee Irwin III wrote:
Lag is the deviation of a task's allocated CPU time from the CPU time
it would be granted by the ideal fair scheduling algorithm (generalized
processor sharing; take the limit of RR with per-task timeslices
proportional to load weight as the scale factor approaches
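The lag definition above can be written out directly: under generalized processor sharing, each runnable task receives weight_i / total_weight of the CPU over any interval, and lag is that ideal service minus the time actually received. The numbers below are illustrative:

```python
# Sketch of the lag definition: ideal (GPS) service minus actual service.

def ideal_service(weight, total_weight, elapsed):
    # share of the interval the ideal fair scheduler would grant
    return elapsed * weight / total_weight

def lag(weight, total_weight, elapsed, received):
    return ideal_service(weight, total_weight, elapsed) - received

# two equal-weight tasks over 10ms: each is ideally owed 5ms.
# A task that actually ran 3ms has lag +2ms (it is owed time);
# the task that ran 7ms has lag -2ms.  Lags sum to zero.
l1 = lag(1, 2, 10.0, received=3.0)
l2 = lag(1, 2, 10.0, received=7.0)
print(l1, l2)  # -> 2.0 -2.0
```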
On Mon, May 28, 2007 at 10:09:19PM +0530, Srivatsa Vaddagiri wrote:
What do these task weights control? Timeslice primarily? If so, I am not
sure how well it can co-exist with cfs then (unless you are planning to
replace cfs with an equally good interactive/fair scheduler :)
I would be very
* William Lee Irwin III [EMAIL PROTECTED] wrote:
[...] sched_yield() semantics are yet another twist.
On Wed, May 23, 2007 at 08:40:35PM +0200, Ingo Molnar wrote:
that's nonsense, sched_yield() semantics are totally uninteresting. It
is a fundamentally broken interface.
They're not totally
* William Lee Irwin III [EMAIL PROTECTED] wrote:
[...] As an interface it may be poor and worse yet poorly specified,
[...]
On Wed, May 23, 2007 at 09:26:54PM +0200, Ingo Molnar wrote:
that's the only thing that matters to fundamental design questions like
this.
I'm not sure where it comes
* William Lee Irwin III [EMAIL PROTECTED] wrote:
I'm not sure where it comes in as a design question. [...]
On Wed, May 23, 2007 at 09:55:28PM +0200, Ingo Molnar wrote:
uhm, that's my whole point: it does _not_ come in as a design question
at all. You raised it, and i simply stated the fact
On Fri, 2007-05-18 at 09:37 +0530, Balbir Singh wrote:
oops! I wonder if AIM7 creates too many processes and exhausts all
memory. I've seen a case where during an upgrade of my tetex on my
laptop, the setup process failed and continued to fork processes
filling up 4GB of swap.
On Mon, May 21,
On Fri, Apr 06, 2007 at 04:32:27PM -0700, [EMAIL PROTECTED] wrote:
This patch implements the BeanCounter resource control abstraction
over generic process containers. It contains the beancounter core
code, plus the numfiles resource counter. It doesn't currently contain
any of the memory
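The charge/limit accounting a beancounter-style controller performs can be sketched as follows. The real patch is kernel C over generic process containers; this Python class, with invented names, only illustrates the charge/uncharge-against-a-limit pattern for a countable resource such as numfiles:

```python
# Hypothetical sketch of a beancounter-style resource counter: charges
# succeed until a limit is hit, failures are counted, and a high-water
# mark is tracked.  Field names are illustrative, not from the patch.

class BeanCounter:
    def __init__(self, limit):
        self.limit = limit
        self.held = 0       # currently charged units
        self.maxheld = 0    # high-water mark
        self.failcnt = 0    # charges rejected at the limit

    def charge(self, amount):
        if self.held + amount > self.limit:
            self.failcnt += 1
            return False    # charge denied: would exceed the limit
        self.held += amount
        self.maxheld = max(self.maxheld, self.held)
        return True

    def uncharge(self, amount):
        self.held -= amount

files = BeanCounter(limit=3)
opened = [files.charge(1) for _ in range(5)]  # only 3 fit under the limit
print(opened, files.failcnt)
```

The high-water mark and failure count give an administrator the usual feedback for tuning the limit.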