Martin wrote:
> The way cpusets uses the current cpus_allowed mechanism is, to me, the most
> worrying thing about it. Frankly, the cpus_allowed thing is kind of tacked
> onto the existing scheduler, and not at all integrated into it, and doesn't
> work well if you use it heavily (eg bind all the processes to a few CPUs,
> and watch the rest of the system kill itself). 

True.  One detail I'm unclear on -- how would the rest of the system
kill itself?  Why wouldn't the unemployed CPUs just sit idle, waiting
for something to do?

As I recall, Ingo added task->cpus_allowed for the Tux in-kernel web
server a few years back, and I piggybacked the cpuset stuff on that to
keep my patch size small.

Likely your same concerns apply to the task->mems_allowed field that
I added, in the same fashion, in my recent cpuset patch.

We need a mechanism, respected by the cpuset apparatus, that maps each
CPU to a sched_domain -- exactly one sched_domain for any given CPU at
any point in time, regardless of which task the scheduler is considering
running at the moment.  Somewhat like dual-channeled disks, having more
than one sched_domain apply to a given CPU at the same time leads to
confusion best avoided unless desperately needed.  Unlike dual-channeled
disks, I don't see the desperate need here for multi-channel
sched_domains ;).

And of course, for the vast majority of normal systems in the world
not configured with cpusets, this has to collapse back to something
sensible "just like it is now."
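To make that concrete, here is a minimal sketch in plain C of the
invariant I have in mind -- illustrative only, not actual scheduler
code; cpu_to_domain, default_domain and attach_domain are made-up names,
not existing kernel symbols.  A plain per-CPU pointer gives each CPU
exactly one sched_domain by construction, and a box with no cpusets
configured falls back to a single system-wide default domain.

	#define NR_CPUS 64

	/* Stand-in for the real struct; only the CPU span matters here. */
	struct sched_domain {
		unsigned long span;	/* bitmask of CPUs this domain covers */
		/* ... balancing parameters, parent pointer, etc. ... */
	};

	/* One system-wide domain covering every CPU: the
	 * "just like it is now" case for boxes without cpusets. */
	static struct sched_domain default_domain = {
		.span = ~0UL,
	};

	/* Exactly one domain per CPU -- a plain pointer, so overlapping
	 * ("multi-channel") domains are impossible by construction. */
	static struct sched_domain *cpu_to_domain[NR_CPUS];

	/* Re-point each CPU in cpu_mask at the new domain, implicitly
	 * detaching it from whatever domain it had before. */
	static void attach_domain(struct sched_domain *sd, unsigned long cpu_mask)
	{
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			if (cpu_mask & (1UL << cpu))
				cpu_to_domain[cpu] = sd;
	}

	/* Boot-time setup: everything collapses to the default domain. */
	static void init_default_domains(void)
	{
		attach_domain(&default_domain, ~0UL);
	}

When a cpuset is later bound to some subset of CPUs, the apparatus would
call attach_domain() for that subset, and the one-domain-per-CPU
invariant holds throughout.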

-- 
                          I won't rest till it's the best ...
                          Programmer, Linux Scalability
                          Paul Jackson <[EMAIL PROTECTED]> 1.650.933.1373

