On 9/6/19 7:43 PM, Valentin Schneider wrote:
> On 06/09/2019 13:45, Parth Shah wrote:
>> I guess there is a use case in the presence of thermal throttling.
>> If a task is heating up the core, then in the ideal scenario POWER systems
>> throttle down to the rated frequency.
>> In such a case, if the task is latency sensitive (min latency nice), we can
>> move the task around the chip to heat the chip up uniformly, allowing us to
>> gain more performance with a sustained higher frequency.
>> With this, we will require help from the active load balancer and a
>> latency-nice classification on a per-task and/or per-group basis.
>>
>> Hopefully, this might be useful for other architectures as well, right?
>>
> 
> Most of the functionality is already there, we're only really missing thermal
> pressure awareness. There was [1] but it seems to have died.
> 
> 
> At least with CFS load balancing, if thermal throttling is correctly
> reflected as a CPU capacity reduction you will tend to move things away from
> that CPU, since load is balanced over capacities.
> 

Right, CPU capacity can solve the problem of indicating thermal throttling
to the scheduler.
AFAIU, the patch set from Thara changes CPU capacity to reflect the thermal
headroom of the CPU.
This is a nice mitigation, but:
1. Sometimes a single task is responsible for heating up the core, and
   reducing the CPU capacity of all the CPUs in that core is not optimal
   when just moving that one task to another core would keep us within the
   thermal headroom (see the sketch below). This is especially important
   on servers, where a core can have up to 8 hardware threads.
2. Given the implementation in the patches and its integration with EAS,
   it seems difficult to adapt to servers, where CPU capacity itself is
   in doubt.
   https://lkml.org/lkml/2019/5/15/1402
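
To make point 1 concrete, here is a rough sketch of the capacity-reduction
idea as I understand it from [1]. The names and the 1024 capacity scale
follow the kernel, but this is only an illustration I wrote, not the actual
patch:

/*
 * Illustrative only: thermal pressure subtracted from the capacity the
 * load balancer sees. The concern is that the pressure is per core, so
 * every SMT sibling of a hot core is penalized even if a single task
 * caused the heat-up.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE    1024
#define SMT_THREADS             8       /* e.g. SMT8 on POWER9 */

static unsigned long effective_capacity(unsigned long orig,
                                        unsigned long thermal_pressure)
{
        return thermal_pressure >= orig ? 1 : orig - thermal_pressure;
}

int main(void)
{
        unsigned long pressure = 256;   /* core is throttled */

        /* All hardware threads of the core lose capacity together... */
        for (int cpu = 0; cpu < SMT_THREADS; cpu++)
                printf("cpu%d capacity: %lu\n", cpu,
                       effective_capacity(SCHED_CAPACITY_SCALE, pressure));

        /*
         * ...whereas migrating the one hot task to a cool core would keep
         * the full capacity available on this core.
         */
        return 0;
}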

> 
> For active balance, we actually already have a condition that moves a task
> to a less capacity-pressured CPU (although it is somewhat specific). So if
> thermal pressure follows that task (e.g. it's doing tons of vector/float),
> it will be rotated around.

Agreed. But this can break in certain conditions, e.g. when a core has
multiple tasks with almost equal utilization, among which only one is doing
vector operations. If the capacity is reduced, the load balancer can pick
and move any of those tasks with equal probability.
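
A toy illustration of what I mean (user-space C I wrote for this mail, not
kernel code): a purely util/load based picker cannot tell which of the
equally utilized tasks is the one doing the vector work, so whether the
thermal culprit is the task that gets migrated is down to luck.

#include <stdio.h>

struct toy_task {
        const char *name;
        unsigned long util;     /* 0..1024 */
        int vector_heavy;       /* the thermal culprit */
};

int main(void)
{
        struct toy_task rq[] = {
                { "worker-0", 512, 0 },
                { "worker-1", 512, 1 }, /* heats up the core */
                { "worker-2", 512, 0 },
        };
        int pick = 0;

        /* Pick the "biggest" task; ties are broken arbitrarily. */
        for (int i = 1; i < 3; i++) {
                if (rq[i].util > rq[pick].util)
                        pick = i;
        }

        printf("migrating %s (vector heavy: %d)\n",
               rq[pick].name, rq[pick].vector_heavy);
        return 0;
}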

> 
> However there should be a point made on latency vs throughput. If you
> care about latency you probably do not want to active balance your task. If

Can you please elaborate on why active balancing should not be considered
for latency-sensitive tasks?
Sometimes finding a thermally cool core is beneficial, given that the turbo
frequency range is around 20% above the rated frequency.

> you care about throughput, it should be specified in some way (util-clamp
> says hello!).
> 

Yes, I care about both latency and throughput. :-)
But I'm wondering how uclamp can solve the throughput problem.
If I make the thermally hot task appear bigger than the other tasks, then
reducing the CPU capacity can let such a task move around the chip.
But this requires its utilization value to be relatively large compared to
the other tasks on the core, or the other tasks' uclamp.max to be lowered
so that the hot task gets rotated.
If I got that right, this would be a difficult uclamp use case from a user's
perspective, right?
I feel like I'm missing something here.
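
For completeness, my understanding of the per-task knob involved is
sched_setattr() with the SCHED_FLAG_UTIL_CLAMP_* flags (merged around
v5.3). The sketch below is only my reading of the uapi headers, so the
struct layout and flag values may need double checking; it is meant to show
the usage model, not to be a reference:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_FLAG_KEEP_POLICY
# define SCHED_FLAG_KEEP_POLICY         0x08
#endif
#ifndef SCHED_FLAG_KEEP_PARAMS
# define SCHED_FLAG_KEEP_PARAMS         0x10
#endif
#ifndef SCHED_FLAG_UTIL_CLAMP_MIN
# define SCHED_FLAG_UTIL_CLAMP_MIN      0x20
#endif
#ifndef SCHED_FLAG_UTIL_CLAMP_MAX
# define SCHED_FLAG_UTIL_CLAMP_MAX      0x40
#endif

/* Local copy of the uapi struct, as I read include/uapi/linux/sched*.h. */
struct my_sched_attr {
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
        uint32_t sched_util_min;        /* uclamp.min, 0..1024 */
        uint32_t sched_util_max;        /* uclamp.max, 0..1024 */
};

static int set_uclamp(pid_t pid, unsigned int util_min, unsigned int util_max)
{
        struct my_sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        /* Keep policy/params, only apply the clamps. */
        attr.sched_flags = SCHED_FLAG_KEEP_POLICY | SCHED_FLAG_KEEP_PARAMS |
                           SCHED_FLAG_UTIL_CLAMP_MIN | SCHED_FLAG_UTIL_CLAMP_MAX;
        attr.sched_util_min = util_min;
        attr.sched_util_max = util_max;

        return syscall(__NR_sched_setattr, pid, &attr, 0);
}

int main(void)
{
        /* Make the calling task look "big" to the scheduler. */
        if (set_uclamp(0, 768, 1024))
                perror("sched_setattr");
        return 0;
}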

> It sort of feels like you'd want an extension of misfit migration (salesman
> hat goes on from here) - misfit moves tasks that are CPU bound (IOW their
> util is >= 80% of the CPU capacity) to CPUs of higher capacity. It's only
> enabled for systems with asymmetric capacities, but could be enabled globally
> for "dynamically-created asymmetric capacities" (IOW RT/IRQ/thermal pressure
> on SMP systems).
> 
> On top of that, if we make misfit consider e.g. uclamp.min (I don't think
> that's already the case), then you have your throughput knob to have *some*
> designated tasks move away from (thermal & else) pressure.
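
For reference, my reading of the current fit check is roughly the following
(paraphrased from kernel/sched/fair.c around v5.3, so the exact details may
differ); capacity_margin = 1280 is where the ~80% threshold comes from:

/* Paraphrased sketch, not the verbatim kernel code. */
#define SCHED_CAPACITY_SCALE    1024

static const unsigned int capacity_margin = 1280;       /* ~20% headroom */

/* A task "fits" if util < capacity * 1024 / 1280, i.e. ~80% of capacity. */
static inline int task_fits_capacity(unsigned long task_util,
                                     unsigned long capacity)
{
        return capacity * SCHED_CAPACITY_SCALE > task_util * capacity_margin;
}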
> 
> 
> [1]: 
> https://lore.kernel.org/lkml/1555443521-579-1-git-send-email-thara.gopin...@linaro.org/
> 

Thanks,
Parth
