Here's V2 of the token-based CPU controller I have been working on. Changes since last version (posted at http://lkml.org/lkml/2006/8/20/115):
- Task load was not changed when a task moved between task-groups of
  different quota (bug hit by Mike Galbraith).

- SMP load balance seems to work -much- better now wrt its awareness
  of quota on each task-group. The trick was to go beyond the max_load
  difference in __move_tasks and instead use the load difference
  between two task-groups on the different cpus as the basis for
  pulling tasks (a rough sketch of this idea appears at the end of
  this mail).

- Better timeslice management, aimed at handling bursty workloads
  better. Patch 3/9 has documentation on timeslice management for
  various task-groups.

- Modified cpuset interface as per Paul Jackson's suggestions. Some of
  the changes are:

	- s/meter_cpu/cpu_meter_enabled
	- s/cpu_quota/cpu_meter_quota
	- s/FILE_METER_FLAG/FILE_CPU_METER_ENABLED
	- s/FILE_METER_QUOTA/FILE_CPU_METER_QUOTA
	- Don't allow cpu_meter_enabled to be turned on for an "in-use"
	  cpuset (one which has tasks attached to it)
	- Don't allow cpu_meter_quota to be changed for an "in-use"
	  cpuset (one which has tasks attached to it)

  The last two are temporary limitations until we figure out how to
  get to a cpuset's task-list more easily (a sketch of the check
  appears at the end of this mail).

Still on my todo list:

- Improved surplus-cycle management. If groups A, B and C have been
  given 50%, 30% and 20% quota respectively and group B is idle, B's
  quota has to be divided between A and C in 5:2 proportion (see the
  sketch at the end of this mail).

- Although load balance seems to be working nicely for the testcases
  I have been running, I anticipate certain corner cases which are yet
  to be worked out. In particular, I need to make sure some of the
  HT/MC optimizations are not broken.

Ingo/Nick,
	IMHO the approach of virtualizing cpu-runqueues to meet the
controller need is not a good idea, since:

- retaining the existing load-balance optimizations for the MC/SMT
  case is going to be hard (it has to be done at schedule time now),

- because of virtualization, two virtual cpus could end up running on
  the same physical cpu, which would defeat the careful SMP
  optimizations put in place all over the kernel,

- not to mention that specialized apps which bind to CPUs for
  performance reasons may behave badly in such a virtualized
  environment.

Hence I have been pursuing simpler approaches like the one in this
patch. Your comments/views on this are highly appreciated.

-- 
Regards,
vatsa
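
To make the load-balance change concrete, here is a minimal sketch of
a group-aware pull decision. All names here (struct task_grp, the
per-cpu load[] array, can_pull_group) are made up for this mail and
are not the actual patch code:

struct task_grp {
	unsigned long quota;		/* tokens per epoch */
	unsigned long load[NR_CPUS];	/* load this group contributes
					 * on each cpu */
};

/*
 * Instead of looking only at the runqueue-wide max_load difference
 * in __move_tasks, compare the load a given task-group contributes
 * on the busiest cpu vs this cpu, and pull its tasks only when that
 * narrows the group's own cross-cpu imbalance.
 */
static int can_pull_group(struct task_grp *grp, int busiest_cpu,
			  int this_cpu)
{
	return grp->load[busiest_cpu] > grp->load[this_cpu];
}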
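
The two "in-use" restrictions on the cpuset interface boil down to a
check of the following shape. Again a sketch only, assuming a count of
attached tasks is available on the cpuset; the field names are
illustrative:

/*
 * Reject updates to cpu_meter_quota once tasks are attached to the
 * cpuset (the temporary limitation mentioned above).
 */
static int update_cpu_meter_quota(struct cpuset *cs, unsigned long quota)
{
	if (atomic_read(&cs->count))	/* tasks already attached? */
		return -EBUSY;

	cs->cpu_meter_quota = quota;
	return 0;
}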
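
And for the surplus-cycle item on the todo list, the intended
redistribution is just a pro-rata split of the idle groups' quota
among the busy groups. A sketch with made-up names, ignoring integer
rounding:

/*
 * quota[i] is group i's quota (in percent), busy[i] is nonzero if
 * group i has runnable tasks. Idle groups' quota is handed to busy
 * groups in proportion to the busy groups' own quota.
 */
static void redistribute_surplus(unsigned long *quota, int *busy,
				 int ngrps)
{
	unsigned long surplus = 0, busy_total = 0;
	int i;

	for (i = 0; i < ngrps; i++) {
		if (busy[i])
			busy_total += quota[i];
		else {
			surplus += quota[i];
			quota[i] = 0;
		}
	}

	for (i = 0; i < ngrps; i++)
		if (busy[i])
			quota[i] += surplus * quota[i] / busy_total;
}

With quota = {50, 30, 20} and B idle, this gives A and C roughly 71%
and 28% respectively, i.e. B's 30% split 5:2 modulo integer
truncation.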