Hi Frederic,
Gave this series a spin on the same system as v1.

On 2/6/26 7:52 PM, Frederic Weisbecker wrote:
Hi,

After the issue reported here:

         https://lore.kernel.org/all/[email protected]/

It turns out that the idle cputime accounting is a big mess that is
spread across two concurrent statistics, each with its own
shortcomings:

* The accounting for online CPUs, which is based on the delta between
   tick_nohz_start_idle() and tick_nohz_stop_idle() (a rough sketch of
   this scheme follows the comparison below).

   Pros:
        - Works when the tick is off

        - Has nsecs granularity

   Cons:
        - Accounts idle steal time but doesn't subtract it from idle
          cputime.

        - Assumes CONFIG_IRQ_TIME_ACCOUNTING=y by not accounting IRQs,
          but the IRQ time is then simply ignored when
          CONFIG_IRQ_TIME_ACCOUNTING=n

        - The windows 1) between idle task scheduling and the first call
          to tick_nohz_start_idle() and 2) between the last
          tick_nohz_stop_idle() and the end of the idle task are blind
          spots wrt. cputime accounting (though a mostly insignificant
          amount)

        - Relies on private fields outside of kernel stats, with specific
          accessors.

* The accounting for offline CPUs which is based on ticks and the
   jiffies delta during which the tick was stopped.

   Pros:
        - Handles steal time correctly

        - Handles both CONFIG_IRQ_TIME_ACCOUNTING=y and
          CONFIG_IRQ_TIME_ACCOUNTING=n correctly.

        - Handles the whole idle task

        - Accounts directly to kernel stats, without midlayer accumulator.

   Cons:
        - Doesn't elapse when the tick is off, which makes it unsuitable
          for online CPUs.

        - Has TICK_NSEC granularity (jiffies)

        - Needs to track the dyntick-idle ticks that were accounted and
          subtract them from the total jiffies time spent while the tick
          was stopped. This is an ugly workaround.
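
To make the first (delta-based) scheme concrete, here is a rough
userspace sketch of the idea, with made-up names rather than the actual
kernel code: idle time only accumulates between an explicit start/stop
pair, which is what gives it nsec granularity and tick-off operation,
and also what creates the blind spots mentioned above.

    /* Illustrative sketch only: names are hypothetical, this is not
     * the kernel implementation. */
    #include <stdint.h>
    #include <stdio.h>

    struct idle_acct {
            uint64_t entrytime;     /* ns timestamp of the last idle entry */
            uint64_t sleeptime;     /* accumulated idle ns */
            int active;             /* currently inside a start/stop window? */
    };

    static void idle_start(struct idle_acct *ia, uint64_t now)
    {
            ia->entrytime = now;
            ia->active = 1;
    }

    static void idle_stop(struct idle_acct *ia, uint64_t now)
    {
            if (ia->active) {
                    ia->sleeptime += now - ia->entrytime;
                    ia->active = 0;
            }
    }

    int main(void)
    {
            struct idle_acct ia = { 0 };

            /* Pretend the CPU was idle from t=1000ns to t=6000ns; anything
             * the idle task does outside this window is never seen. */
            idle_start(&ia, 1000);
            idle_stop(&ia, 6000);
            printf("idle: %llu ns\n", (unsigned long long)ia.sleeptime);
            return 0;
    }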

Having two different accountings for a single context is not the only
problem: since those accountings are of different natures, it is
possible to observe the global idle time going backward after a CPU goes
offline, as reported by Xin Zhao.

Clean up the situation by introducing a hybrid approach that stays
coherent, fixes the backward jumps and works for both online and offline
CPUs (a rough sketch follows the list below):

* Tick-based or native vtime accounting operates before the tick is
   stopped and resumes once the tick is restarted.

* When the idle loop starts, switch to dynticks-idle accounting as is
   done currently, except that the statistics accumulate directly to the
   relevant kernel stat fields.

* Private dyntick cputime accounting fields are removed.

* Works for both the online and offline cases.

* Move most of the relevant code to the common sched/cputime subsystem

* Handles CONFIG_IRQ_TIME_ACCOUNTING=n correctly such that the
   dynticks-idle accounting still elapses while in IRQs.

* Correctly subtracts idle steal cputime from idle time
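
As a similarly rough sketch of that direction (again with hypothetical
names, not the actual patches): on dyntick-idle exit the elapsed delta
is pushed straight into per-CPU kernel stat buckets, with steal time
accounted separately and subtracted so it no longer inflates idle time.

    /* Illustrative sketch only: names are hypothetical, this is not
     * the actual series. */
    #include <stdint.h>
    #include <stdio.h>

    enum { STAT_IDLE, STAT_IOWAIT, STAT_STEAL, NR_STATS };

    static uint64_t cpustat[NR_STATS];  /* stand-in for per-CPU kernel stats */

    static void account_idle_exit(uint64_t delta, uint64_t steal, int iowait)
    {
            if (steal > delta)
                    steal = delta;

            cpustat[STAT_STEAL] += steal;
            delta -= steal;     /* steal time no longer inflates idle time */

            if (iowait)
                    cpustat[STAT_IOWAIT] += delta;
            else
                    cpustat[STAT_IDLE] += delta;
    }

    int main(void)
    {
            /* 5000 ns spent "idle", of which 700 ns were stolen by the host. */
            account_idle_exit(5000, 700, 0);
            printf("idle=%llu iowait=%llu steal=%llu\n",
                   (unsigned long long)cpustat[STAT_IDLE],
                   (unsigned long long)cpustat[STAT_IOWAIT],
                   (unsigned long long)cpustat[STAT_STEAL]);
            return 0;
    }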

Changes since v1:

- Fix deadlock involving double seq count lock on idle

- Fix build breakage on powerpc

- Fix build breakage on s390 (Heiko)

- Fix broken sysfs s390 idle time file (Heiko)

- Convert most ktime usage here into u64 (Peterz)

- Add missing (or too implicit) <linux/sched/clock.h> (Peterz)

- Fix whole idle time accounting breakage due to missing TS_FLAG_ set
   on idle entry (Shrikanth Hegde)

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
        timers/core-v2

HEAD: 21458b98c80a0567d48131240317b7b73ba34c3c
Thanks,
        Frederic

Idle and runtime utilization reported by mpstat while running stress-ng
look correct now.

However, when running hackbench I am seeing the data below; hackbench
shows severe regressions.

base: tip/master at 9c61ebbdb587a3950072700ab74a9310afe3ad73.
(nit: patch 7 is already part of tip, so I skipped applying it)
+-----------------------------------------------+-------+---------+-----------+
| Test                                          | base  | +series | % Diff    |
+-----------------------------------------------+-------+---------+-----------+
| HackBench Process 10 groups                   |  2.23 |  3.05   |   -36.77%  |
| HackBench Process 20 groups                   |  4.17 |  5.82   |   -39.57%  |
| HackBench Process 30 groups                   |  6.04 |  8.49   |   -40.56%  |
| HackBench Process 40 groups                   |  7.90 | 11.10   |   -40.51%  |
| HackBench thread 10                           |  2.44 |  3.36   |   -37.70%  |
| HackBench thread 20                           |  4.57 |  6.35   |   -38.95%  |
| HackBench Process(Pipe) 10                    |  1.76 |  2.29   |   -30.11%  |
| HackBench Process(Pipe) 20                    |  3.49 |  4.76   |   -36.39%  |
| HackBench Process(Pipe) 30                    |  5.21 |  7.13   |   -36.85%  |
| HackBench Process(Pipe) 40                    |  6.89 |  9.31   |   -35.12%  |
| HackBench thread(Pipe) 10                     |  1.91 |  2.50   |   -30.89%  |
| HackBench thread(Pipe) 20                     |  3.74 |  5.16   |   -37.97%  |
+-----------------------------------------------+-------+---------+-----------+

I have the following in .config, and I don't have nohz_full or isolated CPUs.

CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y

# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_SCHED_AVG_IRQ=y

I did a git bisect and below is what it says.

git bisect start
# status: waiting for both good and bad commits
# bad: [6821315886a3b5267ea31d29dba26fd34647fbbc] sched/cputime: Handle dyntick-idle steal time correctly
git bisect bad 6821315886a3b5267ea31d29dba26fd34647fbbc
# status: waiting for good commit(s), bad commit known
# good: [9c61ebbdb587a3950072700ab74a9310afe3ad73] Merge branch into tip/master: 'x86/sev'
git bisect good 9c61ebbdb587a3950072700ab74a9310afe3ad73
# good: [dc8bb3c84d162f7d9aa6becf9f8392474f92655a] tick/sched: Remove nohz disabled special case in cputime fetch
git bisect good dc8bb3c84d162f7d9aa6becf9f8392474f92655a
# good: [5070a778a581cd668f5d717f85fb22b078d8c20c] tick/sched: Account tickless idle cputime only when tick is stopped
git bisect good 5070a778a581cd668f5d717f85fb22b078d8c20c
# bad: [1e0ccc25a9a74b188b239c4de716fde279adbf8e] sched/cputime: Provide get_cpu_[idle|iowait]_time_us() off-case
git bisect bad 1e0ccc25a9a74b188b239c4de716fde279adbf8e
# bad: [ee7c735b76071000d401869fc2883c451ee3fa61] tick/sched: Consolidate idle time fetching APIs
git bisect bad ee7c735b76071000d401869fc2883c451ee3fa61
# first bad commit: [ee7c735b76071000d401869fc2883c451ee3fa61] tick/sched: Consolidate idle time fetching APIs


I did a perf diff between the two (collected with perf record -a while
running hackbench 60 process 10000 loops).

perf diff base series:
# Baseline  Delta Abs  Shared Object                Symbol
# ........  .........  ...........................  ................................................
#
               +5.43%  [kernel.kallsyms]            [k] __update_freelist_slow
     0.00%     +4.55%  [kernel.kallsyms]            [k] _raw_spin_lock
               +3.35%  [kernel.kallsyms]            [k] __memcg_slab_free_hook
     0.55%     +2.58%  [kernel.kallsyms]            [k] sock_wfree
               +2.51%  [kernel.kallsyms]            [k] __account_obj_stock
     2.29%     -2.29%  [kernel.kallsyms]            [k] _raw_write_lock_irq
               +2.25%  [kernel.kallsyms]            [k] _copy_from_iter
               +1.96%  [kernel.kallsyms]            [k] fdget_pos
               +1.87%  [kernel.kallsyms]            [k] _copy_to_iter
               +1.69%  [kernel.kallsyms]            [k] sock_def_readable
     2.32%     -1.68%  [kernel.kallsyms]            [k] mod_memcg_lruvec_state
     0.82%     +1.67%  [kernel.kallsyms]            [k] skb_set_owner_w
     0.08%     +1.65%  [kernel.kallsyms]            [k] vfs_read
     0.42%     +1.57%  [kernel.kallsyms]            [k] kmem_cache_alloc_node_noprof
     1.53%     -1.53%  [kernel.kallsyms]            [k] kmem_cache_alloc_lru_noprof
     1.56%     -1.41%  [kernel.kallsyms]            [k] simple_copy_to_iter
     0.27%     +1.32%  [kernel.kallsyms]            [k] kfree
     0.01%     +1.25%  [kernel.kallsyms]            [k] __slab_free
     0.19%     +1.24%  [kernel.kallsyms]            [k] kmem_cache_free
     1.23%     -1.23%  [kernel.kallsyms]            [k] __pcs_replace_full_main
     0.35%     +1.21%  [kernel.kallsyms]            [k] __skb_datagram_iter
     0.21%     +1.13%  [kernel.kallsyms]            [k] sock_alloc_send_pskb
               +1.09%  [kernel.kallsyms]            [k] mutex_lock
               +0.98%  [kernel.kallsyms].head.text  [k] 0x0000000000013004


I haven't gone through the series yet; I'm trying to go through it in
the meantime. Maybe a different allocation scheme, or more
allocation/free every time instead of pre-allocated percpu variables?

I wanted to report it first. Let me know if you need any additional data.
