On Wed, Nov 25, 2020 at 12:15:41PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 17, 2020 at 06:19:52PM -0500, Joel Fernandes (Google) wrote:
> 
> > +/*
> > + * Ensure that the task has been requeued. The stopper ensures that the
> > + * task cannot be migrated to a different CPU while its core scheduler
> > + * queue state is being updated. It also makes sure to requeue a task if
> > + * it was running actively on another CPU.
> > + */
> > +static int sched_core_task_join_stopper(void *data)
> > +{
> > +	struct sched_core_task_write_tag *tag = (struct sched_core_task_write_tag *)data;
> > +	int i;
> > +
> > +	for (i = 0; i < 2; i++)
> > +		sched_core_tag_requeue(tag->tasks[i], tag->cookies[i], false /* !group */);
> > +
> > +	return 0;
> > +}
> > +
> > +static int sched_core_share_tasks(struct task_struct *t1, struct task_struct *t2)
> > +{
> > +{
> 
> > +   stop_machine(sched_core_task_join_stopper, (void *)&wr, NULL);
> 
> > +}
> 
> This is *REALLY* terrible...

I pulled this bit from your original patch. Are you concerned about the
stop_machine? Sharing a core is a slow path for our use cases (and, as far as
I know, for everyone else's). We can probably do something different if that
requirement changes.
