On Tue, Mar 07, 2023 at 02:35:58PM +0000, Valentin Schneider wrote:

> @@ -477,6 +490,25 @@ static __always_inline void csd_unlock(struct __call_single_data *csd)
>       smp_store_release(&csd->node.u_flags, 0);
>  }
>  
> +static __always_inline void
> +raw_smp_call_single_queue(int cpu, struct llist_node *node, smp_call_func_t func)
> +{
> +     /*
> +      * The list addition should be visible to the target CPU when it pops
> +      * the head of the list to pull the entry off it in the IPI handler
> +      * because of normal cache coherency rules implied by the underlying
> +      * llist ops.
> +      *
> +      * If IPIs can go out of order to the cache coherency protocol
> +      * in an architecture, sufficient synchronisation should be added
> +      * to arch code to make it appear to obey cache coherency WRT
> +      * locking and barrier primitives. Generic code isn't really
> +      * equipped to do the right thing...
> +      */
> +     if (llist_add(node, &per_cpu(call_single_queue, cpu)))
> +             send_call_function_single_ipi(cpu, func);
> +}
> +
>  static DEFINE_PER_CPU_SHARED_ALIGNED(call_single_data_t, csd_data);
>  
>  void __smp_call_single_queue(int cpu, struct llist_node *node)
> @@ -493,21 +525,25 @@ void __smp_call_single_queue(int cpu, struct llist_node *node)
>               }
>       }
>  #endif
>       /*
> +      * We have to check the type of the CSD before queueing it, because
> +      * once queued it can have its flags cleared by
> +      *   flush_smp_call_function_queue()
> +      * even if we haven't sent the smp_call IPI yet (e.g. the stopper
> +      * executes migration_cpu_stop() on the remote CPU).
>        */
> +     if (trace_ipi_send_cpumask_enabled()) {
> +             call_single_data_t *csd;
> +             smp_call_func_t func;
> +
> +             csd = container_of(node, call_single_data_t, node.llist);
> +             func = CSD_TYPE(csd) == CSD_TYPE_TTWU ?
> +                     sched_ttwu_pending : csd->func;
> +
> +             raw_smp_call_single_queue(cpu, node, func);
> +     } else {
> +             raw_smp_call_single_queue(cpu, node, NULL);
> +     }
>  }

Hurmph... so we only really consume @func when we send the IPI. Would it
not be more useful to trace this thing for *every* csd enqueued?

_______________________________________________
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc
