On 10/25/2018 8:43 AM, Vincent Guittot wrote:
> On Thu, 25 Oct 2018 at 13:29, Steven Sistare <steven.sist...@oracle.com> 
> wrote:
>>
>> On 10/25/2018 3:50 AM, Vincent Guittot wrote:
>>> Hi Steve,
>>>
>>> On Mon, 22 Oct 2018 at 17:10, Steve Sistare <steven.sist...@oracle.com> 
>>> wrote:
>>>>
>>>> When a CPU has no more CFS tasks to run, and idle_balance() fails to
>>>> find a task, then attempt to steal a task from an overloaded CPU in the
>>>> same LLC. Maintain and use a bitmap of overloaded CPUs to efficiently
>>>> identify candidates.  To minimize search time, steal the first migratable
>>>> task that is found when the bitmap is traversed.  For fairness, search
>>>> for migratable tasks on an overloaded CPU in order of next to run.
>>>>
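To illustrate the shape of the search described above, here is a rough
sketch.  The helper names (sparsemask_for_each(), overload_mask,
migratable(), detach_and_attach()) are hypothetical and used only for
illustration; the actual implementation is in the patches against
kernel/sched/fair.c:

/*
 * Rough sketch only, not the patch code.  sparsemask_for_each(),
 * overload_mask, migratable() and detach_and_attach() are
 * hypothetical names used for illustration.
 */
static int try_steal_sketch(struct rq *dst_rq)
{
	struct task_struct *p;
	int cpu;

	/* Visit only CPUs marked overloaded within this LLC. */
	sparsemask_for_each(dst_rq->overload_mask, cpu) {
		struct rq *src_rq = cpu_rq(cpu);

		/* Scan the source CPU's tasks in order of next to run. */
		list_for_each_entry(p, &src_rq->cfs_tasks, se.group_node) {
			if (!migratable(p, dst_rq->cpu))
				continue;
			/* Steal the first migratable task and stop. */
			detach_and_attach(p, src_rq, dst_rq);
			return 1;
		}
	}
	return 0;	/* nothing to steal; the CPU goes idle */
}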
>>>> This simple stealing yields a higher CPU utilization than idle_balance()
>>>> alone, because the search is cheap, so it may be called every time the CPU
>>>> is about to go idle.  idle_balance() does more work because it searches
>>>> widely for the busiest queue, so to limit its CPU consumption, it declines
>>>> to search if the system is too busy.  Simple stealing does not offload the
>>>> globally busiest queue, but it is much better than running nothing at all.
>>>>
>>>> The bitmap of overloaded CPUs is a new type of sparse bitmap, designed to
>>>> reduce cache contention vs the usual bitmap when many threads concurrently
>>>> set, clear, and visit elements.
>>>>
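As a rough illustration of the idea (the names and layout below are made
up for this sketch and are not the patch's actual definition):

/*
 * Rough sketch of a cache-friendly bitmap: rather than packing bits
 * into consecutive words, put one word's worth of bits in each cache
 * line, so concurrent set/clear operations on nearby CPU ids touch
 * different cache lines.  Uses <linux/cache.h> and <linux/bitops.h>.
 */
struct sparse_chunk {
	unsigned long word;
} ____cacheline_aligned;

struct sparsemask_sketch {
	int nelems;
	struct sparse_chunk chunk[];	/* one cache line per BITS_PER_LONG ids */
};

static inline void sparsemask_sketch_set(struct sparsemask_sketch *m, int e)
{
	set_bit(e % BITS_PER_LONG, &m->chunk[e / BITS_PER_LONG].word);
}

static inline void sparsemask_sketch_clear(struct sparsemask_sketch *m, int e)
{
	clear_bit(e % BITS_PER_LONG, &m->chunk[e / BITS_PER_LONG].word);
}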
>>>> Patch 1 defines the sparsemask type and its operations.
>>>>
>>>> Patches 2, 3, and 4 implement the bitmap of overloaded CPUs.
>>>>
>>>> Patches 5 and 6 refactor existing code for a cleaner merge of later
>>>>   patches.
>>>>
>>>> Patches 7 and 8 implement task stealing using the overloaded CPUs bitmap.
>>>>
>>>> Patch 9 disables stealing on systems with more than 2 NUMA nodes for the
>>>> time being because of performance regressions that are not due to stealing
>>>> per se.  See the patch description for details.
>>>>
>>>> Patch 10 adds schedstats for comparing the new behavior to the old, and
>>>>   is provided as a convenience for developers only, not for integration.
>>>>
>>>> The patch series is based on kernel 4.19.0-rc7.  It compiles, boots, and
>>>> runs with/without each of CONFIG_SCHED_SMT, CONFIG_SMP, CONFIG_SCHED_DEBUG,
>>>> and CONFIG_PREEMPT.  It runs without error with CONFIG_DEBUG_PREEMPT +
>>>> CONFIG_SLUB_DEBUG + CONFIG_DEBUG_PAGEALLOC + CONFIG_DEBUG_MUTEXES +
>>>> CONFIG_DEBUG_SPINLOCK + CONFIG_DEBUG_ATOMIC_SLEEP.  CPU hot plug and CPU
>>>> bandwidth control were tested.
>>>>
>>>> Stealing improves utilization with only a modest CPU overhead in scheduler
>>>> code.  In the following experiment, hackbench is run with varying numbers
>>>> of groups (40 tasks per group), and the delta in /proc/schedstat is shown
>>>> for each run, averaged per CPU, augmented with these non-standard stats:
>>>>
>>>>   %find - percent of time spent in old and new functions that search for
>>>>     idle CPUs and tasks to steal and set the overloaded CPUs bitmap.
>>>>
>>>>   steal - number of times a task is stolen from another CPU.
>>>>
>>>> X6-2: 1 socket * 10 cores * 2 hyperthreads = 20 CPUs
>>>> Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
>>>> hackbench <grps> process 100000
>>>> sched_wakeup_granularity_ns=15000000
>>>
>>> Why do you mention this sched_wakeup_granularity_ns value ?
>>> Is it something that you changed for your tests ?
>>> The comment for this tunable says that the default value is 1ms *
>>> ilog(ncpus) = 4ms for 20 CPUs
>>
>> I changed it for the test, and I explain why a few paragraphs later.
>> The value matches the one set by tuned.service, for those running it.
> 
> ok. I haven't noticed that later explanation.
> 
> You said " Note: for all hackbench runs, sched_wakeup_granularity_ns
> is set to 15 msec.  Otherwise, preemptions increase at higher loads and
> distort the comparison between baseline and new."
> 
> What do you mean exactly by distort ?

With the default value of sched_wakeup_granularity_ns and the load range I
tested, preemptions increase as load and CPU utilization increase, the average
timeslice decreases, and time per task goes up.  For a given task count,
stealing achieves higher utilization than the baseline, so it is hit harder by
the preemption effect.  Raising sched_wakeup_granularity_ns factors this out
of the comparison.
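
For reference, the gist of how the tunable gates wakeup preemption in CFS,
much simplified (the real check in wakeup_preempt_entity() in
kernel/sched/fair.c also converts the granularity to virtual time for the
waking task):

/*
 * Simplified gist of the wakeup preemption check.  A larger
 * granularity requires the waking task's vruntime to lag the current
 * task's by a larger margin before it may preempt, so fewer
 * preemptions occur at high load.
 */
static int preempts_sketch(u64 curr_vruntime, u64 waking_vruntime, u64 gran_ns)
{
	s64 vdiff = curr_vruntime - waking_vruntime;

	return vdiff > (s64)gran_ns;
}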

- Steve

>>>>   baseline
>>>>   grps  time  %busy  slice   sched   idle     wake %find  steal
>>>>   1    8.084  75.02   0.10  105476  46291    59183  0.31      0
>>>>   2   13.892  85.33   0.10  190225  70958   119264  0.45      0
>>>>   3   19.668  89.04   0.10  263896  87047   176850  0.49      0
>>>>   4   25.279  91.28   0.10  322171  94691   227474  0.51      0
>>>>   8   47.832  94.86   0.09  630636 144141   486322  0.56      0
>>>>
>>>>   new
>>>>   grps  time  %busy  slice   sched   idle     wake %find  steal  %speedup
>>>>   1    5.938  96.80   0.24   31255   7190    24061  0.63   7433  36.1
>>>>   2   11.491  99.23   0.16   74097   4578    69512  0.84  19463  20.9
>>>>   3   16.987  99.66   0.15  115824   1985   113826  0.77  24707  15.8
>>>>   4   22.504  99.80   0.14  167188   2385   164786  0.75  29353  12.3
>>>>   8   44.441  99.86   0.11  389153   1616   387401  0.67  38190   7.6
>>>>
>>>> Elapsed time improves by 8 to 36%, and CPU busy utilization is up
>>>> by 5 to 22%, hitting 99% for 2 or more groups (80 or more tasks).
>>>> The cost is at most 0.4% more find time.
>>>
>>>>