Replying to Sashiko's comments:

  
https://sashiko.dev/?list=org.kernel.vger.linux-trace-kernel#/patchset/20260513150007.3b280e87%40gandalf.local.home

> commit b3956023bf5f1ccf5838187c1c14f3dcf5e37c3d
> Author: Steven Rostedt <[email protected]>
> tracing: Allow perf to read synthetic events
> This commit adds support for perf to read synthetic events. It extracts the
> field sizing and entry writing logic into separate helper functions, and
> implements a new probe function to attach perf events.
> > diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
> > index 39ac4eba07024..e6871230bde96 100644
> > --- a/kernel/trace/trace_events_synth.c
> > +++ b/kernel/trace/trace_events_synth.c
> > @@ -499,28 +499,19 @@ static unsigned int trace_stack(struct synth_trace_event *entry,
> [ ... ]
> > +static __always_inline void write_synth_entry(struct synth_event *event,
> > +                                         struct synth_trace_event *entry,
> > +                                         u64 *var_ref_vals,
> > +                                         unsigned int *var_ref_idx)
> > +{
> > +   int data_size = 0;
> Since get_field_size() and write_synth_entry() are being refactored here, is
> there a risk of a time-of-check to time-of-use buffer overflow with
> dynamic string fields?
> get_field_size() computes the string length for allocation by calling
> fetch_store_strlen(). The ring buffer event slot is then allocated based on
> this length.
> write_synth_entry() populates the buffer by calling trace_string(), which
> calls fetch_store_strlen() a second time to recalculate the string's length,
> assigning it to data->as_dynamic.len.
> If a string from user space grows between the two fetch_store_strlen() calls,
> fetch_store_string() will blindly copy more bytes than the allocated buffer
> space. Can this cause out-of-bounds writes and corrupt kernel memory?
> This regression wasn't introduced by this patch, but pre-existed in the
> ftrace implementation.

The string cannot grow between the two calls. When a dynamic event is
enabled (as it is required to be here), deleting or modifying it is denied
with an -EBUSY return. That's the only way I'm aware of for user space to
change the size of the string.

> [ ... ]
> > +#ifdef CONFIG_PERF_EVENTS
> > +static void perf_event_raw_event_synth(void *__data,
> > +                                  u64 *var_ref_vals,
> > +                                  unsigned int *var_ref_idx)
> > +{
> [ ... ]
> > +   size = ALIGN(sizeof(*entry) + fields_size, 8);
> > +
> > +   entry = perf_trace_buf_alloc(size, &regs, &context);
> > +
> > +   if (unlikely(!entry))
> > +           return;
> > +
> > +   write_synth_entry(event, entry, var_ref_vals, var_ref_idx);
> > +
> > +   perf_fetch_caller_regs(regs);
> > +
> > +   perf_trace_buf_submit(entry, size, context,
> > +                         call->event.type, 1, regs,
> > +                         perf_head, NULL);
> > +}
> > +#endif
> Could this leak uninitialized per-CPU kernel memory to userspace?
> perf_trace_buf_alloc() and trace_event_buffer_reserve() allocate memory from
> per-CPU buffers without zero-initializing it (except for trailing alignment
> padding).

Both of these per-CPU memory locations are zeroed when created, and are only
ever populated with trace data. Yeah, they may leak previous trace data, but
that data should also have been visible to whoever created it in the first
place.

-- Steve


> write_synth_entry() populates an array of 8-byte union trace_synth_field.
> When a field is smaller than 8 bytes (e.g., as_u8), only those specific bytes
> are written, leaving the remaining padding bytes uninitialized. Similarly, for
> empty stacktraces, get_field_size() reserves 8 bytes, but trace_stack()
> terminates immediately and writes 0 bytes, leaving the 8-byte gap completely
> uninitialized.
> perf_trace_buf_submit() copies the entire requested buffer size to the perf
> ring buffer. Can userspace read this buffer, leaking uninitialized kernel
> memory from previous events or kernel operations?
> This regression also wasn't introduced by this patch, but pre-existed in the
> ftrace implementation.

