Hello,

kernel test robot noticed a 23.3% regression of stress-ng.unlink.unlink_calls_per_sec on:


commit: 1af3331764b9356fadc4652af77bbbc97f3d7f78 ("super: add filesystem freezing helpers for suspend and hibernate")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master

[test failed on linux-next/master 8566fc3b96539e3235909d6bdda198e1282beaed]

testcase: stress-ng
config: x86_64-rhel-9.4
compiler: gcc-12
test machine: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
parameters:

        nr_threads: 100%
        testtime: 60s
        test: unlink
        cpufreq_governor: performance




If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.s...@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202505191143.59950d28-...@intel.com


Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20250519/202505191143.59950d28-...@intel.com

=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
  gcc-12/performance/x86_64-rhel-9.4/100%/debian-12-x86_64-20240206.cgz/lkp-icl-2sp8/unlink/stress-ng/60s

commit: 
  62a2175ddf ("gfs2: pass through holder from the VFS for freeze/thaw")
  1af3331764 ("super: add filesystem freezing helpers for suspend and hibernate")

62a2175ddf7e7294 1af3331764b9356fadc4652af77 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     22349 ±  2%      +6.5%      23809        vmstat.system.cs
    159467            +2.1%     162851        vmstat.system.in
     67494           +12.8%      76141 ± 20%  proc-vmstat.nr_shmem
   1367488            -2.8%    1329138        proc-vmstat.nr_slab_reclaimable
    541527            -1.8%     531651        proc-vmstat.nr_slab_unreclaimable
    316736           +17.4%     371828        stress-ng.time.voluntary_context_switches
     47192            +1.4%      47854        stress-ng.unlink.ops
    712.42            +1.8%     725.38        stress-ng.unlink.ops_per_sec
     12343           -23.3%       9464        stress-ng.unlink.unlink_calls_per_sec
 1.376e+10            -1.5%  1.355e+10        perf-stat.i.branch-instructions
     50.73            -1.6       49.16        perf-stat.i.cache-miss-rate%
 2.702e+08            +3.7%  2.802e+08        perf-stat.i.cache-references
     23174 ±  2%      +6.2%      24600        perf-stat.i.context-switches
      1565           +16.8%       1828        perf-stat.i.cpu-migrations
 6.418e+10            -1.5%  6.321e+10        perf-stat.i.instructions
      0.29            -1.5%       0.29        perf-stat.i.ipc
      2.11            +1.9%       2.15        perf-stat.overall.MPKI
     50.08            -1.6       48.47        perf-stat.overall.cache-miss-rate%
      3.52            +1.5%       3.58        perf-stat.overall.cpi
      0.28            -1.5%       0.28        perf-stat.overall.ipc
 1.356e+10            -1.5%  1.336e+10        perf-stat.ps.branch-instructions
 2.664e+08            +3.7%  2.763e+08        perf-stat.ps.cache-references
     22843 ±  2%      +6.2%      24252        perf-stat.ps.context-switches
      1544           +16.8%       1803        perf-stat.ps.cpu-migrations
 6.328e+10            -1.5%  6.233e+10        perf-stat.ps.instructions
 4.344e+12            -1.8%  4.268e+12        perf-stat.total.instructions
      7.93 ±  3%     -20.9%       6.27 ±  8%  perf-sched.sch_delay.avg.ms.__cond_resched.__dentry_kill.dput.lookup_one_qstr_excl.do_unlinkat
      5.39 ±  2%     -24.0%       4.10 ±  3%  perf-sched.sch_delay.avg.ms.__cond_resched.dput.do_unlinkat.__x64_sys_unlink.do_syscall_64
      7.22 ±  5%     -14.8%       6.15 ±  3%  perf-sched.sch_delay.avg.ms.__cond_resched.kmem_cache_alloc_lru_noprof.__d_alloc.d_alloc.lookup_one_qstr_excl
      7.73 ±  3%      -8.9%       7.04        perf-sched.sch_delay.avg.ms.__cond_resched.mnt_want_write.do_unlinkat.__x64_sys_unlink.do_syscall_64
      0.73 ±  5%     -22.1%       0.57 ±  8%  perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      6.43 ±  2%     -14.6%       5.49 ±  3%  perf-sched.sch_delay.avg.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
     15.68 ± 18%     +25.0%      19.60 ±  9%  perf-sched.sch_delay.max.ms.__cond_resched.down_read.unmap_mapping_range.simple_setattr.notify_change
     24.13 ± 10%     +44.0%      34.75 ± 23%  perf-sched.sch_delay.max.ms.__cond_resched.down_write.do_truncate.do_open.path_openat
     11.91 ±  2%     -22.2%       9.26 ±  3%  perf-sched.wait_and_delay.avg.ms.__cond_resched.dput.do_unlinkat.__x64_sys_unlink.do_syscall_64
     71.98 ± 14%     -30.4%      50.09 ± 15%  perf-sched.wait_and_delay.avg.ms.anon_pipe_read.fifo_pipe_read.vfs_read.ksys_read
     14.94 ±  5%     -14.2%      12.82 ±  3%  perf-sched.wait_and_delay.avg.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
     27151           -18.5%      22138        perf-sched.wait_and_delay.count.__cond_resched.__dentry_kill.dput.__fput.__x64_sys_close
      1001 ±  2%     +77.4%       1776 ±  2%  perf-sched.wait_and_delay.count.__cond_resched.down_write.vfs_unlink.do_unlinkat.__x64_sys_unlink
      2584 ±  2%     +19.3%       3082 ±  3%  perf-sched.wait_and_delay.count.__cond_resched.dput.do_unlinkat.__x64_sys_unlink.do_syscall_64
      1046 ±  2%     -21.0%     826.67 ±  3%  perf-sched.wait_and_delay.count.__cond_resched.dput.lookup_one_qstr_excl.do_unlinkat.__x64_sys_unlink
     30647           -14.9%      26094        perf-sched.wait_and_delay.count.__cond_resched.dput.open_last_lookups.path_openat.do_filp_open
    802.83 ±  3%     +94.0%       1557 ±  3%  perf-sched.wait_and_delay.count.__cond_resched.dput.simple_unlink.vfs_unlink.do_unlinkat
      2993           +83.0%       5479 ±  2%  perf-sched.wait_and_delay.count.__cond_resched.dput.terminate_walk.path_openat.do_filp_open
      1952 ±  2%     +19.9%       2341        perf-sched.wait_and_delay.count.__cond_resched.kmem_cache_alloc_lru_noprof.alloc_inode.new_inode.ramfs_get_inode
      1026 ±  2%     +59.1%       1632 ±  2%  perf-sched.wait_and_delay.count.__cond_resched.kmem_cache_alloc_noprof.security_inode_alloc.inode_init_always_gfp.alloc_inode
      1831 ±  2%     +70.4%       3120        perf-sched.wait_and_delay.count.__cond_resched.mnt_want_write.do_unlinkat.__x64_sys_unlink.do_syscall_64
      1932 ±  2%     +38.5%       2677 ±  2%  perf-sched.wait_and_delay.count.__cond_resched.mnt_want_write.open_last_lookups.path_openat.do_filp_open
    612.17 ± 15%     +45.3%     889.33 ± 19%  perf-sched.wait_and_delay.count.anon_pipe_read.fifo_pipe_read.vfs_read.ksys_read
      7589 ±  2%     +21.7%       9237 ±  2%  perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.do_unlinkat
      3569 ±  2%     +20.4%       4299 ±  2%  perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.filename_create
      6.52 ±  2%     -20.8%       5.16 ±  3%  perf-sched.wait_time.avg.ms.__cond_resched.dput.do_unlinkat.__x64_sys_unlink.do_syscall_64
     71.91 ± 14%     -30.4%      50.04 ± 15%  perf-sched.wait_time.avg.ms.anon_pipe_read.fifo_pipe_read.vfs_read.ksys_read
      8.52 ±  7%     -13.9%       7.33 ±  3%  perf-sched.wait_time.avg.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
     23.62 ±  9%     +26.8%      29.96 ±  8%  perf-sched.wait_time.max.ms.__cond_resched.down_read.unmap_mapping_range.truncate_pagecache.simple_setattr
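For readers less familiar with these reports: the %change column is the relative delta of the second commit's value against the first (parent) commit's value. A minimal sketch reproducing the headline number from the table above (the pct_change helper is hypothetical, for illustration only, and not part of lkp-tests):

```python
def pct_change(parent: float, current: float) -> float:
    """Relative change of `current` vs `parent`, in percent."""
    return (current - parent) / parent * 100

# Headline regression: stress-ng.unlink.unlink_calls_per_sec
# went from 12343 (parent 62a2175ddf) to 9464 (1af3331764).
print(round(pct_change(12343, 9464), 1))  # -23.3
```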




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel
