On Thu, Nov 17, 2022 at 02:45:29PM +0000, Valentin Schneider wrote:

> > +   if (trace_ipi_send_cpumask_enabled()) {
> > +           call_single_data_t *csd;
> > +           smp_call_func_t func;
> > +
> > +           csd = container_of(node, call_single_data_t, node.llist);
> > +
> > +           func = sched_ttwu_pending;
> > +           if (CSD_TYPE(csd) != CSD_TYPE_TTWU)
> > +                   func = csd->func;
> > +
> > +           if (raw_smp_call_single_queue(cpu, node))
> > +                   trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
> 
> So I went with the tracepoint being placed *before* the actual IPI gets
> sent to have a somewhat sane ordering between trace_ipi_send_cpumask() and
> e.g. trace_call_function_single_entry().
> 
> Packaging the call_single_queue logic makes the code less horrible, but it
> does mix up the event ordering...

Keeps em sharp ;-)

> > +           return;
> > +   }
> > +
> > +   raw_smp_call_single_queue(cpu, node);
> >  }
> >
> >  /*
> > @@ -983,10 +1017,13 @@ static void smp_call_function_many_cond(
> >                * number of CPUs might be zero due to concurrent changes to
> >                * the provided mask.
> >                */
> > -           if (nr_cpus == 1)
> > +           if (nr_cpus == 1) {
> > +                   trace_ipi_send_cpumask(cpumask_of(last_cpu), _RET_IP_, func);
> >                       send_call_function_single_ipi(last_cpu);
> 
> This'll yield an IPI event even if no IPI is sent due to the idle task
> polling, no?

Oh, right..
