Re: [PATCH v1 1/1] treewide: Align match_string() with sysfs_match_string()

2024-06-04 Thread Daniel Bristot de Oliveira
On 6/2/24 17:57, Andy Shevchenko wrote:
> diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
> index a8e28f9b9271..7bed499effd3 100644
> --- a/kernel/trace/trace_osnoise.c
> +++ b/kernel/trace/trace_osnoise.c
> @@ -2230,9 +2230,9 @@ static ssize_t osnoise_options_write(struct file *filp, const char __user *ubuf,
>  		enable = false;
>  	}
>  
> -	option = match_string(osnoise_options_str, OSN_MAX, option_str);
> +	option = match_string(osnoise_options_str, option_str);
>  	if (option < 0)
> -		return -EINVAL;
> +		return option;
>  
>  	/*
>  	 * trace_types_lock is taken to avoid concurrency on start/stop.
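
For reference, a minimal user-space sketch of the calling convention the
two-argument form implies, mirroring the ARRAY_SIZE() trick that
sysfs_match_string() uses; the helper names and option strings below are
illustrative, not the kernel implementation:

#include <errno.h>
#include <stdio.h>
#include <string.h>

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

/* Counted helper: returns the matching index or a negative errno. */
static int __match_string(const char * const *array, size_t n,
			  const char *string)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (array[i] && !strcmp(array[i], string))
			return i;

	return -EINVAL;
}

/* Two-argument form: the element count is derived at the call site. */
#define match_string(_a, _s)	__match_string(_a, ARRAY_SIZE(_a), _s)

int main(void)
{
	/* Illustrative option strings, not the real osnoise_options_str. */
	static const char * const opts[] = { "DEFAULTS", "OSNOISE_WORKLOAD" };
	int option = match_string(opts, "OSNOISE_WORKLOAD");

	if (option < 0) {
		/* Propagate the negative errno, as the hunk above now does. */
		fprintf(stderr, "no match: %d\n", option);
		return 1;
	}

	printf("matched option %d\n", option);
	return 0;
}

With that shape, dropping OSN_MAX at the call site and returning option
directly both fall out naturally.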

Acked-by: Daniel Bristot de Oliveira 

Thanks!
-- Daniel


Re: [RFC PATCH v2 8/8] sched, smp: Trace smp callback causing an IPI

2022-11-18 Thread Daniel Bristot de Oliveira
On 11/18/22 10:12, Peter Zijlstra wrote:
> On Thu, Nov 17, 2022 at 02:45:29PM +, Valentin Schneider wrote:
> 
>>> +	if (trace_ipi_send_cpumask_enabled()) {
>>> +		call_single_data_t *csd;
>>> +		smp_call_func_t func;
>>> +
>>> +		csd = container_of(node, call_single_data_t, node.llist);
>>> +
>>> +		func = sched_ttwu_pending;
>>> +		if (CSD_TYPE(csd) != CSD_TYPE_TTWU)
>>> +			func = csd->func;
>>> +
>>> +		if (raw_smp_call_single_queue(cpu, node))
>>> +			trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
>> So I went with the tracepoint being placed *before* the actual IPI gets
>> sent to have a somewhat sane ordering between trace_ipi_send_cpumask() and
>> e.g. trace_call_function_single_entry().
>>
>> Packaging the call_single_queue logic makes the code less horrible, but it
>> does mix up the event ordering...
> Keeps em sharp ;-)
> 

Having the trace before the IPI avoids the (non-ideal) case where the trace
stops because of an IPI being executed before we have a trace of who sent
it... :-(.
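
To make the preferred ordering explicit, a rough kernel-context sketch
(not standalone): it reuses the names from the hunk above and assumes,
purely for illustration, that raw_smp_call_single_queue() only queues the
CSD and returns true when an IPI is needed, with
arch_send_call_function_single_ipi() being what actually raises it:

	if (raw_smp_call_single_queue(cpu, node)) {
		/* Record the sender and the callback first... */
		trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, func);
		/*
		 * ...then raise the IPI, so the "who sent it" event is
		 * already in the buffer even if tracing stops inside the
		 * IPI handler on the target CPU.
		 */
		arch_send_call_function_single_ipi(cpu);
	}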

-- Daniel



Re: [RFC PATCH 0/5] Generic IPI sending tracepoint

2022-10-11 Thread Daniel Bristot de Oliveira
On 10/11/22 18:17, Valentin Schneider wrote:
> Thinking out loud, it makes way more sense to record a cpumask in the
> tracepoint, but perhaps we could have a postprocessing step to transform
> those into N events each targeting a single CPU?

My approach on the tracers/rtla side is to do the simple thing in the
kernel and beautify things in user-space.

You could keep the tracepoint as a mask and then make it pretty, like
cpus=3-5,8, in user-space; for example, with a trace-cmd/perf loadable
plugin or a libtracefs helper.
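
Something along these lines in the post-processing tool would do it; a
plain C sketch, with no trace-cmd/libtracefs API involved (a real plugin
would read the mask from the event field instead of the hard-coded value
below):

#include <stdio.h>

/* Render a CPU mask as a range list, e.g. "cpus=3-5,8". */
static void print_cpumask_ranges(unsigned long mask, int nr_cpus)
{
	int cpu, start = -1;
	const char *sep = "";

	printf("cpus=");
	for (cpu = 0; cpu <= nr_cpus; cpu++) {
		int set = cpu < nr_cpus && (mask & (1UL << cpu));

		if (set && start < 0) {
			start = cpu;
		} else if (!set && start >= 0) {
			if (cpu - 1 == start)
				printf("%s%d", sep, start);
			else
				printf("%s%d-%d", sep, start, cpu - 1);
			sep = ",";
			start = -1;
		}
	}
	printf("\n");
}

int main(void)
{
	/* Bits 3, 4, 5 and 8 set -> prints "cpus=3-5,8". */
	print_cpumask_ranges((1UL << 3) | (1UL << 4) | (1UL << 5) |
			     (1UL << 8), 16);
	return 0;
}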

For rtla I was thinking of making a new tool to parse them and make it
pretty there.

-- Daniel



Re: [patch RFC 00/15] mm/highmem: Provide a preemptible variant of kmap_atomic & friends

2020-09-24 Thread Daniel Bristot de Oliveira
On 9/24/20 10:27 AM, pet...@infradead.org wrote:
> So my current todo list is:
> 
>  - Change RT PULL
>  - Change DL PULL
>  - Add migrate_disable() tracer; exactly like preempt/irqoff, except
>measuring task-runtime instead of cpu-time.
>  - Add a mode that measures actual interference.
>  - Add a traceevent to detect preemption in migrate_disable().
> 
> 
> And then I suppose I should twist Daniel's arm to update his model to
> include these scenarios and numbers.

Challenge accepted :-)

-- Daniel