* William Lee Irwin III <[EMAIL PROTECTED]> wrote:

> [...] Also rest assured that the tone of the critique is not hostile,
> and wasn't meant to sound that way.
ok :) (And i guess i was too touchy - sorry about coming out swinging.)

> Also, given the general comments it appears clear that some
> statistical metric of deviation from the intended behavior, further
> qualified by timescale, is necessary, so this appears to be headed
> toward a sort of performance metric as opposed to a pass/fail test
> anyway. However, to even measure this at all, some statement of
> intention is required. I'd prefer that there be a Linux-standard
> semantics for nice so results are more directly comparable, and so
> that users also get similar nice behavior from the scheduler as it
> varies over time and possibly implementations, if users should care
> to switch them out with some scheduler patch or other.

yeah. If you could come up with a sane definition that also translates
into low overhead on the algorithm side, that would be great!

The only good generic definition i could come up with (nice levels are
isolated buckets, with a constant maximum relative percentage of CPU
time available to every active bucket) resulted in having a
per-nice-level array of rbtree roots, which did not look worth the
hassle at first sight :-) (A rough sketch of that layout follows
below.)

Until now the main approach for nice levels in Linux has always been:
"implement your main scheduling logic for nice 0, then look for some
low-overhead method that can be glued onto it and behaves like nice
levels" (also sketched below). Feel free to turn that around into a
more natural approach, but the algorithm should remain fairly simple,
i think.

	Ingo
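A minimal sketch of the per-nice-level bucket layout described above,
not actual kernel code: it assumes the kernel's struct rb_root from
<linux/rbtree.h>, and the names nice_bucket, bucket_rq and
bucket_share() are hypothetical, with "weight" standing in for
whatever fixed per-level share one would choose.

	/*
	 * One bucket per nice level, each with its own rbtree of
	 * runnable tasks.  Every *active* (non-empty) bucket gets a
	 * constant maximum share of CPU time relative to the other
	 * active buckets.
	 */
	#include <linux/rbtree.h>

	#define NICE_LEVELS	40	/* nice -20 .. +19 */

	struct nice_bucket {
		struct rb_root	tasks;		/* runnable tasks at this level */
		unsigned int	nr_running;	/* active iff non-zero */
		unsigned int	weight;		/* fixed relative share */
	};

	struct bucket_rq {
		struct nice_bucket	buckets[NICE_LEVELS];
		unsigned int		total_active_weight; /* sum over active buckets */
	};

	/* maximum CPU share of one active bucket, in percent */
	static unsigned int bucket_share(struct bucket_rq *rq, int nice)
	{
		struct nice_bucket *b = &rq->buckets[nice + 20];

		if (!b->nr_running || !rq->total_active_weight)
			return 0;
		return 100 * b->weight / rq->total_active_weight;
	}

The per-level rbtree roots are what make this look not worth the
hassle: picking the next task can mean consulting up to 40 trees
instead of one.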
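And a stand-alone user-space illustration of the traditional "glue"
approach: the core logic only knows about nice-0 entities, and nice
levels are bolted on afterwards as a weight scaling. The
~1.25x-per-level factor and the nice_to_weight() helper are
assumptions for illustration, not taken from any particular kernel.

	#include <stdio.h>

	#define NICE_0_WEIGHT	1024

	/* each nice step changes a task's CPU weight by roughly 25% */
	static unsigned long nice_to_weight(int nice)
	{
		double w = NICE_0_WEIGHT;

		for (; nice > 0; nice--)
			w /= 1.25;
		for (; nice < 0; nice++)
			w *= 1.25;
		return (unsigned long)w;
	}

	int main(void)
	{
		/* two runnable tasks, one of them niced to +5 */
		unsigned long w0 = nice_to_weight(0);	/* 1024 */
		unsigned long w5 = nice_to_weight(5);	/* ~335 */

		printf("nice 0 task gets %.1f%% of the CPU\n",
		       100.0 * w0 / (w0 + w5));	/* ~75.3% */
		return 0;
	}

The nice-0 scheduler core never changes; only the weights fed into it
do, which is what keeps the glued-on method low-overhead.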