Toralf, for me the group scheduler offers superior interactivity on my 
laptop for a number of reasons. The biggest practical effect is that it 
splits CPU time between Xorg (root UID) and desktop apps. This helps 
particularly when there are compile jobs going on, etc. - Xorg still 
gets a guaranteed share of CPU time, which is a nice touch, and the 
mouse lags far less under load. It's not always possible to renice 
every aspect of my desktop.
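
( If you want to see that split directly: with CONFIG_FAIR_GROUP_SCHED=y
  each UID's group shows up under /sys/kernel/uids/, so a quick sketch
  like the one below lists the relative shares - note that the exact
  default share value can differ between kernel versions: )

   # list each UID's scheduling group and its relative CPU share;
   # UID 0 is the group that Xorg lands in on a typical desktop
   for d in /sys/kernel/uids/*; do
        echo "uid ${d##*/}: cpu_share `cat $d/cpu_share`"
   done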

I wasn't using dnetc myself, so I never triggered your particular issue - 
but I hit a similar issue with the distcc user. I think it's more robust 
in general to isolate the dnetc user a bit from the rest of the system - 
even at nice +19, dnetc can interfere with your desktop apps.

( In the long run, dnetc (and distcc, and all the other batch/clustering
  apps) would automatically set their UID's cpu_share to a lower value,
  so this manual tweaking would not be needed. )

So if you have some time to play with this, could you please try the 
following experiment? Put this line into your /etc/rc.d/rc.local file:

 echo 2 > /sys/kernel/uids/`grep -w dnetc /etc/passwd | cut -d: -f3`/cpu_share

with group scheduling (CONFIG_FAIR_GROUP_SCHED=y) enabled. Also apply 
the patch attached below, which fixes some interactivity problems with 
group scheduling.
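
( A slightly more defensive variant of that rc.local line - only a
  sketch, assuming the `id' utility is available and that dnetc is
  already running by the time rc.local executes, so that its
  /sys/kernel/uids/<uid>/ directory exists: )

   DNETC_UID=`id -u dnetc 2>/dev/null`
   if [ -n "$DNETC_UID" ] && [ -w /sys/kernel/uids/$DNETC_UID/cpu_share ]; then
        # a low value such as 2 gives dnetc's UID a tiny share
        # compared to the other UIDs' groups
        echo 2 > /sys/kernel/uids/$DNETC_UID/cpu_share
   fi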

Could you try that kernel, compare its interactivity against a 
FAIR_GROUP_SCHED-disabled kernel, and send us your observations?

The group scheduler needs tuning in your case, but in the end I believe 
it can offer even better interactivity than what we had before - so it 
would be nice if you could try it and compare.

If this still doesn't do the trick and the group scheduler is worse in 
your testing, then there's something else going on as well that we need 
to fix (even if you ultimately decide to disable the group scheduler). 
At minimum we should be able to reach a "works just as well as with 
group scheduling disabled" state. Thanks,

        Ingo

Index: linux/kernel/sched_fair.c
===================================================================
--- linux.orig/kernel/sched_fair.c
+++ linux/kernel/sched_fair.c
@@ -520,7 +520,7 @@ place_entity(struct cfs_rq *cfs_rq, stru
 
        if (!initial) {
                /* sleeps upto a single latency don't count. */
-               if (sched_feat(NEW_FAIR_SLEEPERS) && entity_is_task(se))
+               if (sched_feat(NEW_FAIR_SLEEPERS))
                        vruntime -= sysctl_sched_latency;
 
                /* ensure we never gain time by being placed backwards. */
@@ -1106,7 +1106,11 @@ static void check_preempt_wakeup(struct 
        }
 
        gran = sysctl_sched_wakeup_granularity;
-       if (unlikely(se->load.weight != NICE_0_LOAD))
+       /*
+        * More easily preempt - nice tasks, while not making
+        * it harder for + nice tasks.
+        */
+       if (unlikely(se->load.weight > NICE_0_LOAD))
                gran = calc_delta_fair(gran, &se->load);
 
        if (pse->vruntime + gran < se->vruntime)
--