[tip:irq/core] softirq: Use __this_cpu_write() in takeover_tasklets()
Commit-ID:  8afecaa68df1e94a9d634f1f961533a925f239fc
Gitweb:     https://git.kernel.org/tip/8afecaa68df1e94a9d634f1f961533a925f239fc
Author:     Muchun Song
AuthorDate: Tue, 18 Jun 2019 22:33:05 +0800
Committer:  Thomas Gleixner
CommitDate: Sun, 23 Jun 2019 18:14:27 +0200

softirq: Use __this_cpu_write() in takeover_tasklets()

The code is executed with interrupts disabled, so it's safe to use
__this_cpu_write().

[ tglx: Massaged changelog ]

Signed-off-by: Muchun Song
Signed-off-by: Thomas Gleixner
Cc: j...@joelfernandes.org
Cc: rost...@goodmis.org
Cc: frede...@kernel.org
Cc: paul...@linux.vnet.ibm.com
Cc: alexander.le...@verizon.com
Cc: pet...@infradead.org
Link: https://lkml.kernel.org/r/20190618143305.2038-1-smuc...@gmail.com
---
 kernel/softirq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 2c3382378d94..eaf3bdf7c749 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -650,7 +650,7 @@ static int takeover_tasklets(unsigned int cpu)
 	/* Find end, append list for that CPU. */
 	if (&per_cpu(tasklet_vec, cpu).head != per_cpu(tasklet_vec, cpu).tail) {
 		*__this_cpu_read(tasklet_vec.tail) = per_cpu(tasklet_vec, cpu).head;
-		this_cpu_write(tasklet_vec.tail, per_cpu(tasklet_vec, cpu).tail);
+		__this_cpu_write(tasklet_vec.tail, per_cpu(tasklet_vec, cpu).tail);
 		per_cpu(tasklet_vec, cpu).head = NULL;
 		per_cpu(tasklet_vec, cpu).tail = &per_cpu(tasklet_vec, cpu).head;
 	}
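As background (not part of the patch itself): this_cpu_write() has to protect itself against preemption and interrupts, while __this_cpu_write() relies on the caller having already excluded them -- which takeover_tasklets(), a CPU-hotplug callback running with interrupts off, does. A minimal sketch of the pattern, using a hypothetical per-CPU variable rather than tasklet_vec:

    #include <linux/percpu.h>
    #include <linux/irqflags.h>

    static DEFINE_PER_CPU(unsigned long, demo_count);	/* hypothetical */

    static void demo_update(unsigned long val)
    {
    	unsigned long flags;

    	local_irq_save(flags);
    	/*
    	 * With interrupts off this CPU can neither be preempted nor
    	 * migrated, so the cheaper, unchecked __this_cpu_write() is
    	 * sufficient; this_cpu_write() would add protection that is
    	 * not needed here.
    	 */
    	__this_cpu_write(demo_count, val);
    	local_irq_restore(flags);
    }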
[tip:sched/core] sched/fair: Make some variables static
Commit-ID:  ed8885a14433aec04067463493051eaaeef3255f
Gitweb:     https://git.kernel.org/tip/ed8885a14433aec04067463493051eaaeef3255f
Author:     Muchun Song
AuthorDate: Sat, 10 Nov 2018 15:52:02 +0800
Committer:  Ingo Molnar
CommitDate: Mon, 12 Nov 2018 06:18:15 +0100

sched/fair: Make some variables static

The variables are local to the source and do not need to be in global
scope, so make them static.

Signed-off-by: Muchun Song
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20181110075202.61172-1-smuc...@gmail.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d1f91e6efe51..e30dea59d215 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -38,7 +38,7 @@
  * (default: 6ms * (1 + ilog(ncpus)), units: nanoseconds)
  */
 unsigned int sysctl_sched_latency			= 6000000ULL;
-unsigned int normalized_sysctl_sched_latency		= 6000000ULL;
+static unsigned int normalized_sysctl_sched_latency	= 6000000ULL;
 
 /*
  * The initial- and re-scaling of tunables is configurable
@@ -58,8 +58,8 @@ enum sched_tunable_scaling sysctl_sched_tunable_scaling = SCHED_TUNABLESCALING_L
  *
  * (default: 0.75 msec * (1 + ilog(ncpus)), units: nanoseconds)
  */
-unsigned int sysctl_sched_min_granularity		= 750000ULL;
-unsigned int normalized_sysctl_sched_min_granularity	= 750000ULL;
+unsigned int sysctl_sched_min_granularity			= 750000ULL;
+static unsigned int normalized_sysctl_sched_min_granularity	= 750000ULL;
 
 /*
  * This value is kept at sysctl_sched_latency/sysctl_sched_min_granularity
@@ -81,8 +81,8 @@ unsigned int sysctl_sched_child_runs_first __read_mostly;
  *
  * (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds)
  */
-unsigned int sysctl_sched_wakeup_granularity		= 1000000UL;
-unsigned int normalized_sysctl_sched_wakeup_granularity	= 1000000UL;
+unsigned int sysctl_sched_wakeup_granularity			= 1000000UL;
+static unsigned int normalized_sysctl_sched_wakeup_granularity	= 1000000UL;
 
 const_debug unsigned int sysctl_sched_migration_cost	= 500000UL;
 
@@ -116,7 +116,7 @@ unsigned int sysctl_sched_cfs_bandwidth_slice	= 5000UL;
  *
  * (default: ~20%)
  */
-unsigned int capacity_margin				= 1280;
+static unsigned int capacity_margin			= 1280;
 
 static inline void update_load_add(struct load_weight *lw, unsigned long inc)
 {
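The rule of thumb behind the patch, shown with made-up names rather than lines from fair.c: a symbol that other translation units reference (such as a sysctl knob wired up in a sysctl table elsewhere) must stay global, while its file-local companion should be static so it gets internal linkage and the compiler can warn if it ever becomes unused:

    /* Referenced from another file (e.g. a sysctl table): must stay global. */
    unsigned int sysctl_demo_latency = 6000000;

    /* Only read and written inside this .c file: give it internal linkage. */
    static unsigned int normalized_sysctl_demo_latency = 6000000;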
[tip:sched/core] sched/core: Introduce set_next_task() helper for better code readability
Commit-ID:  ff1cdc94de4d336be45336d70709dfcf3d682514
Gitweb:     https://git.kernel.org/tip/ff1cdc94de4d336be45336d70709dfcf3d682514
Author:     Muchun Song
AuthorDate: Fri, 26 Oct 2018 21:17:43 +0800
Committer:  Ingo Molnar
CommitDate: Sun, 4 Nov 2018 00:59:24 +0100

sched/core: Introduce set_next_task() helper for better code readability

When we pick the next task, we do the following for it:

 1) p->se.exec_start = rq_clock_task(rq);
 2) dequeue_pushable(_dl)_task(rq, p);

set_curr_task() needs to do the same. In rt.c, step 1) currently lives in
_pick_next_task_rt() and step 2) in pick_next_task_rt(); it is better to
keep both operations in one place.

So introduce a new helper, set_next_task(), which is responsible for both.
With it, pick_next_task() no longer calls dequeue_pushable(_dl)_task()
directly (it calls set_next_task() instead), which improves readability
and reuse, and set_curr_task_rt() can call set_next_task() as well. We end
up with:

static struct task_struct *
pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
	/* do something else ... */

	put_prev_task(rq, prev);

	/* pick next task p */

	set_next_task(rq, p);

	/* do something else ... */
}

put_prev_task() now pairs with set_next_task(), which makes the code more
readable.

Signed-off-by: Muchun Song
Signed-off-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20181026131743.21786-1-smuc...@gmail.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/deadline.c | 19 ++-
 kernel/sched/rt.c       | 24 +++-
 2 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 91e4202b0634..470ba6b464fe 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1695,6 +1695,14 @@ static void start_hrtick_dl(struct rq *rq, struct task_struct *p)
 }
 #endif
 
+static inline void set_next_task(struct rq *rq, struct task_struct *p)
+{
+	p->se.exec_start = rq_clock_task(rq);
+
+	/* You can't push away the running task */
+	dequeue_pushable_dl_task(rq, p);
+}
+
 static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
 						   struct dl_rq *dl_rq)
 {
@@ -1750,10 +1758,8 @@ pick_next_task_dl(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	BUG_ON(!dl_se);
 
 	p = dl_task_of(dl_se);
-	p->se.exec_start = rq_clock_task(rq);
 
-	/* Running task will never be pushed. */
-	dequeue_pushable_dl_task(rq, p);
+	set_next_task(rq, p);
 
 	if (hrtick_enabled(rq))
 		start_hrtick_dl(rq, p);
@@ -1808,12 +1814,7 @@ static void task_fork_dl(struct task_struct *p)
 
 static void set_curr_task_dl(struct rq *rq)
 {
-	struct task_struct *p = rq->curr;
-
-	p->se.exec_start = rq_clock_task(rq);
-
-	/* You can't push away the running task */
-	dequeue_pushable_dl_task(rq, p);
+	set_next_task(rq, rq->curr);
 }
 
 #ifdef CONFIG_SMP
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index a21ea6021929..9aa3287ce301 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1498,6 +1498,14 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flag
 #endif
 }
 
+static inline void set_next_task(struct rq *rq, struct task_struct *p)
+{
+	p->se.exec_start = rq_clock_task(rq);
+
+	/* The running task is never eligible for pushing */
+	dequeue_pushable_task(rq, p);
+}
+
 static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 						   struct rt_rq *rt_rq)
 {
@@ -1518,7 +1526,6 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
 static struct task_struct *_pick_next_task_rt(struct rq *rq)
 {
 	struct sched_rt_entity *rt_se;
-	struct task_struct *p;
 	struct rt_rq *rt_rq = &rq->rt;
 
 	do {
@@ -1527,10 +1534,7 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
 
-	p = rt_task_of(rt_se);
-	p->se.exec_start = rq_clock_task(rq);
-
-	return p;
+	return rt_task_of(rt_se);
 }
 
 static struct task_struct *
@@ -1573,8 +1577,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 	p = _pick_next_task_rt(rq);
 
-	/* The running task is never eligible for pushing */
-	dequeue_pushable_task(rq, p);
+	set_next_task(rq, p);
 
 	rt_queue_push_tasks(rq);
 
@@ -2355,12 +2358,7 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
 
 static void
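A self-contained toy (plain C, not kernel code) showing why funnelling the shared bookkeeping through one helper pays off: both the "pick next" path and the "set current" path stamp the start time and clear the pushable flag through the same function, so the two call sites can never drift apart:

    #include <stdio.h>
    #include <time.h>

    struct toy_task {
    	long exec_start;	/* when the task last went on CPU        */
    	int  pushable;		/* 1 while it sits on the pushable list  */
    };

    /* Plays the role of set_next_task(): one place for the shared setup. */
    static void toy_set_next_task(struct toy_task *p, long now)
    {
    	p->exec_start = now;	/* 1) stamp the start time             */
    	p->pushable   = 0;	/* 2) the running task is not pushable  */
    }

    static void toy_pick_next_task(struct toy_task *p, long now)
    {
    	/* ... class-specific selection would happen here ... */
    	toy_set_next_task(p, now);
    }

    static void toy_set_curr_task(struct toy_task *p, long now)
    {
    	toy_set_next_task(p, now);
    }

    int main(void)
    {
    	struct toy_task t = { .exec_start = 0, .pushable = 1 };

    	toy_pick_next_task(&t, (long)time(NULL));
    	printf("start=%ld pushable=%d\n", t.exec_start, t.pushable);
    	return 0;
    }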
[tip:sched/urgent] sched/rt: Update comment in pick_next_task_rt()
Commit-ID:  a68d75081aeccfb169575bea6f452a5a12b9f49b
Gitweb:     https://git.kernel.org/tip/a68d75081aeccfb169575bea6f452a5a12b9f49b
Author:     Muchun Song
AuthorDate: Sat, 27 Oct 2018 11:05:17 +0800
Committer:  Ingo Molnar
CommitDate: Mon, 29 Oct 2018 07:18:04 +0100

sched/rt: Update comment in pick_next_task_rt()

Commit:

  f4ebcbc0d7e0 ("sched/rt: Substract number of tasks of throttled queues from rq->nr_running")

added a new rt_rq->rt_queued field, which is used to indicate the status
of rq->rt enqueue or dequeue. So, the ->rt_nr_running check was removed
and we now check ->rt_queued instead.

Fix the comment in pick_next_task_rt() as well, which was still
referencing the old logic.

Signed-off-by: Muchun Song
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20181027030517.23292-1-smuc...@gmail.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/rt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 2e2955a8cf8f..a21ea6021929 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1561,7 +1561,7 @@ pick_next_task_rt(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 
 	/*
 	 * We may dequeue prev's rt_rq in put_prev_task().
-	 * So, we update time before rt_nr_running check.
+	 * So, we update time before rt_queued check.
 	 */
 	if (prev->sched_class == &rt_sched_class)
 		update_curr_rt(rq);
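To make the comment's point concrete, the surrounding flow in pick_next_task_rt() looks roughly like the abridged sketch below (a paraphrase of that era's rt.c, not a verbatim quote): prev's runtime is accounted first, because put_prev_task() may dequeue prev's rt_rq and clear rt_queued, and only then is rt_queued consulted to decide whether the rt class has anything runnable:

    /* Abridged shape of pick_next_task_rt(); names match rt.c, body simplified. */
    if (prev->sched_class == &rt_sched_class)
    	update_curr_rt(rq);		/* charge prev before its rt_rq may go away */

    if (!rt_rq->rt_queued)		/* set/cleared on rq->rt enqueue/dequeue    */
    	return NULL;			/* nothing runnable in the rt class         */

    put_prev_task(rq, prev);		/* may dequeue prev's (now empty) rt_rq     */
    p = _pick_next_task_rt(rq);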