This is just for testing at the moment; the reason is the size of this patch.

In the interest of evolution, I've taken the RSDL cpu scheduler and increased 
the resolution of the task timekeeping to nanoseconds. This removes the need 
for the runqueue rotation component from RSDL entirely. The design is 
otherwise mostly unchanged, minus over 150 lines of code for the rotation, yet 
it should perform slightly better. It should be indistinguishable in usage 
from v0.33.
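
To illustrate the idea (this is only a rough sketch with made-up names like 
sd_task, SD_LEVELS and level_quota_ns, not the actual code), nanosecond 
accounting lets the exact runtime be charged straight against a per-level 
quota, so there is nothing left for the rotation machinery to do:

/*
 * Purely illustrative sketch -- hypothetical names, not the actual patch.
 * With jiffy-resolution quotas RSDL expired and rotated whole priority
 * arrays each epoch; with nanosecond accounting the exact runtime is
 * charged against the task's per-level quota directly, and the task just
 * steps down a priority level when that quota runs out, so no runqueue
 * rotation is needed.
 */
#define SD_LEVELS	40	/* hypothetical number of priority levels */

struct sd_task {
	int prio;			/* current level, 0 = best */
	unsigned long long time_slice;	/* remaining quota at this level, ns */
};

static unsigned long long level_quota_ns(int prio)
{
	return 6000000ULL;	/* a flat 6ms per level in this sketch */
}

static void charge_runtime(struct sd_task *p, unsigned long long ran_ns)
{
	while (ran_ns >= p->time_slice && p->prio < SD_LEVELS - 1) {
		ran_ns -= p->time_slice;	/* quota at this level used up */
		p->prio++;			/* drop to the next level... */
		p->time_slice = level_quota_ns(p->prio);	/* ...and refill */
	}
	if (ran_ns < p->time_slice)
		p->time_slice -= ran_ns;
	else
		p->time_slice = 0;	/* pinned at the lowest level */
}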

Other changes from v0.33:
-fix the rr interval not being properly scaled with HZ (see the sketch below)
-fix a possible race when checking task_queued in task_running_tick
-scale down the rr interval for niced tasks if HZ can tolerate it
-cull list_splice_tail
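
To illustrate the two rr interval items above (again only a rough sketch with 
a made-up default and helper name, not the actual code):

/*
 * The rr interval is defined in milliseconds and has to be converted to
 * jiffies using the running HZ value, never rounding down to zero; for
 * niced tasks it is scaled down only when HZ is high enough that the
 * result is still at least one jiffy.
 */
#define RR_INTERVAL_MS	6	/* hypothetical default rr interval */

static unsigned int rr_interval_jiffies(int nice, unsigned int hz)
{
	unsigned int rr = (RR_INTERVAL_MS * hz) / 1000;

	if (rr < 1)
		rr = 1;		/* scale with HZ, but never below one jiffy */

	if (nice > 0 && rr >= 2)
		rr /= 2;	/* shrink for niced tasks only if HZ can tolerate it */

	return rr;
}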

What does this mean for the next version of RSDL?

Assuming all works as expected on these test patches, it will be cleanest to 
submit a new series of patches for -mm with the renamed Staircase-Deadline 
scheduler and new documentation (when it's done).


So, for testing, here are full rollups for 2.6.20.4 and 2.6.21-rc4:
http://ck.kolivas.org/patches/staircase-deadline/2.6.20.4-sd-0.34-test.patch
http://ck.kolivas.org/patches/staircase-deadline/2.6.21-rc4-sd-0.34-test.patch

The patches also include a rollup of "sched: accurate user accounting", as 
that code touches the same area and it is most convenient to include them 
together.

(incrementals in each subdir of staircase-deadline/ for those interested).

Thanks, Mike, for continuing to attempt to use the cluebat on me on this one. 
From the start I wasn't sure whether this was necessary, but it ends up being 
less code than RSDL.

While I'm still far from well, I am luckily in much better shape and was able 
to spend the time at the PC to get this done. Thanks to all those who 
expressed their concern.

-- 
-ck