On Mon, Feb 12, 2018 at 05:11:31PM +, Mel Gorman wrote:
> +static void
> +update_wa_numa_placement(struct task_struct *p, int prev_cpu, int target)
> +{
> + unsigned long interval;
> +
> + if (!static_branch_likely(&sched_numa_balancing))
> + return;
> +
> + /* If balanc
On Mon, Feb 12, 2018 at 05:11:31PM +, Mel Gorman wrote:
> However, the benefit in other cases is large. This is the result for NAS
> with the D class sizing on a 4-socket machine
>
> 4.15.0 4.15.0
> sdnuma-v1r23 delayretry-v1
If wake_affine pulls a task to another node for any reason and that node is
no longer preferred, then temporarily stop automatic NUMA balancing from
pulling the task back. Otherwise, tasks with a strong waker/wakee relationship
may constantly fight automatic NUMA balancing over where a task should
be placed.