On Mon, Jun 20, 2016 at 02:15:13PM +0200, Jiri Olsa wrote:
> Sched domains are defined at the start and can't be changed
> during runtime.

Not entirely true; you can change them using cpusets, although there are
strict constraints on how and you can only make the 'problem' worse.

> If user defines workload affinity settings
> unevenly with sched domains, he could get unbalanced state
> within his affinity group, like:
>
> Say we have following sched domains:
>   domain 0: (pairs)

s/pairs/smt siblings/

>   domain 1: 0-5,12-17 (group1) 6-11,18-23 (group2)

this would typically be cache groups

>   domain 2: 0-23

level NUMA

>
> User runs workload with affinity setup that takes
> one CPU from group1 (0) and the rest from group 2:
>   0,6,7,8,9,10,11,18,19,20,21,22

But who would do something like that? I'm really missing a problem
statement here. Who cares and why?

sched_setaffinity() is an interface that says I know what I'm doing, and
you seem to be solving a problem resulting from not actually knowing wth
you're doing.

I'm not saying we shouldn't look into it, but I really want more
justification for this.

> User will see idle CPUs within his affinity group,
> because load balancer will balance tasks based on load
> within group1 and group2, thus placing equal load
> of tasks on CPU 0 and on the rest of CPUs.

So afaict this thing only cares about idleness, and we should be able to
fix that differently. The real problem is maintaining fairness in the
overloaded case under such silly constraints. So why do you only care
about this specific issue.
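For reference, the skewed affinity setup quoted above is nothing more than a
plain sched_setaffinity() call from userspace; a minimal sketch, assuming the
example's 24-CPU topology (CPU numbers mirror the quoted mask, nothing else
is implied):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t mask;
	int cpu;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);		/* one CPU out of the first cache group */
	for (cpu = 6; cpu <= 11; cpu++)	/* the rest out of the second group */
		CPU_SET(cpu, &mask);
	for (cpu = 18; cpu <= 22; cpu++)
		CPU_SET(cpu, &mask);

	/* pid 0 == calling thread; children/execs inherit the mask */
	if (sched_setaffinity(0, sizeof(mask), &mask)) {
		perror("sched_setaffinity");
		exit(1);
	}

	/* exec or fork the actual workload from here */
	return 0;
}

Anything launched from the pinned task then runs with one CPU in group1 and
eleven in group2, which is the imbalanced situation the quoted text describes.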

