On 24/01/17(Tue) 13:35, Martin Pieuchot wrote:
> Userland threads are preempt()'d when hogging a CPU or when processing
> an AST. Currently, when such a thread is preempted, the scheduler looks
> for an idle CPU and puts the thread on that CPU's run queue. That means
> an involuntary context switch often results in a migration.
>
> This is not a problem per se, and one could argue that if another CPU
> is idle it makes sense to move. However, with the KERNEL_LOCK(), moving
> to another CPU won't necessarily allow the preempt()'d thread to run.
> Worse, it increases contention.
>
> If you add to this behavior the fact that sched_choosecpu() prefers idle
> CPUs in a linear order, meaning CPU0 > CPU1 > .. > CPUN, you'll
> understand that the set of idle CPUs will change every time preempt() is
> called.
>
> I believe this behavior affects kernel threads as a side effect, since
> the set of idle CPUs changes every time a thread is preempted. With
> this diff the 'softnet' thread didn't move on a 2-CPU machine during
> simple benchmarks. Without it, the thread plays ping-pong between CPUs.
>
> The goal of this diff is to reduce the number of migrations. You
> can compare the values of 'sched_nomigrations' and 'sched_nmigrations'
> with and without it.
>
> As usual, I'd like to know the impact of this diff on your favorite
> benchmark. Please test and report back.
I only got positive test results, so I'd like to commit the diff below.
ok?
Index: kern/sched_bsd.c
===================================================================
RCS file: /cvs/src/sys/kern/sched_bsd.c,v
retrieving revision 1.44
diff -u -p -r1.44 sched_bsd.c
--- kern/sched_bsd.c 25 Jan 2017 06:15:50 -0000 1.44
+++ kern/sched_bsd.c 6 Feb 2017 14:47:28 -0000
@@ -329,7 +329,6 @@ preempt(struct proc *newp)
SCHED_LOCK(s);
p->p_priority = p->p_usrpri;
p->p_stat = SRUN;
- p->p_cpu = sched_choosecpu(p);
setrunqueue(p);
p->p_ru.ru_nivcsw++;
mi_switch();