The idea behind this change is to move from the existing spatial vCPU
handling approach, which requires costly modifications to the
scheduling logic to enforce the requested CPU count (a 10%+
performance drop in some tests, see below), to temporal isolation
provided by the cgroup2 cpu.max interface.
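
As a minimal sketch of the intended temporal isolation (the cgroup
path and values below are illustrative, not taken from this series):

```shell
#!/bin/sh
# Express an N-vCPU limit as a cgroup2 cpu.max value "QUOTA PERIOD"
# (both in microseconds). With QUOTA = N * PERIOD, the group may
# consume at most N CPUs' worth of time per period.
NR_CPUS=2          # requested vCPU count (assumed for illustration)
PERIOD=100000      # cgroup2 default period, 100ms in microseconds
QUOTA=$((NR_CPUS * PERIOD))
CPU_MAX="$QUOTA $PERIOD"
echo "$CPU_MAX"
# The value would then be written to the container's cgroup, e.g.:
#   echo "$CPU_MAX" > /sys/fs/cgroup/<ct>/cpu.max
```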

Reference test results:

1. Clean setup, no vCPU related modifications:
~/at_process_ctxswitch_pipe -w -p 2 -t 15
rate_total: 856509.625000, avg: 428254.812500

2. vCPU related modifications (present state):
~/at_process_ctxswitch_pipe -w -p 2 -t 15
rate_total: 735626.812500, avg: 367813.406250

3. Cleaned-up vCPU handling:
~/at_process_ctxswitch_pipe -w -p 2 -t 15
rate_total: 840074.750000, avg: 420037.375000

Changes in v3:
 - Remove more dead code
 - Make nr_cpus unavailable on cgroup root
 - If nr_cpus has been set, don't allow cpu_max values to exceed nr_cpus
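
The last v3 item can be sketched as follows (function name and values
are hypothetical, not the actual kernel code): a cpu.max pair
"QUOTA PERIOD" grants QUOTA/PERIOD CPUs' worth of time, so with
nr_cpus set, a write where QUOTA exceeds nr_cpus * PERIOD is refused.

```shell
#!/bin/sh
# Hypothetical sketch of the v3 cpu.max-vs-nr_cpus constraint.
check_cpu_max() {
    nr_cpus=$1 quota=$2 period=$3
    # nr_cpus > 0 means the limit is set; quota/period must not
    # grant more than nr_cpus full CPUs of bandwidth.
    if [ "$nr_cpus" -gt 0 ] && [ "$quota" -gt $((nr_cpus * period)) ]; then
        echo "rejected"
    else
        echo "accepted"
    fi
}
check_cpu_max 2 200000 100000   # exactly 2 CPUs' worth of bandwidth
check_cpu_max 2 300000 100000   # 3 CPUs' worth, exceeds nr_cpus=2
```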


Dmitry Sepp (2):
  sched: Clean up vCPU handling logic
  sched: Support nr_cpus in cgroup2 as well

 include/linux/sched.h          |   6 -
 include/linux/sched/topology.h |   5 -
 kernel/sched/core.c            | 106 ++-------
 kernel/sched/fair.c            | 408 ---------------------------------
 kernel/sched/sched.h           |  10 -
 5 files changed, 20 insertions(+), 515 deletions(-)


base-commit: 9dc72d17b6abbe7353f01f6ac44551f75299a560
-- 
2.47.1

_______________________________________________
Devel mailing list
[email protected]
https://lists.openvz.org/mailman/listinfo/devel