On Mon, Sep 08, 2025 at 07:36:05PM -0400, Steven Rostedt wrote:
> On Thu, 21 Aug 2025 09:13:51 +0100
> Vincent Donnefort <vdonnef...@google.com> wrote:
>
> > A trace remote relies on ring-buffer remotes to read and control
> > compatible tracing buffers, written by an entity such as firmware or a
> > hypervisor.
> >
> > Add a Tracefs directory remotes/ that contains all instances of trace
> > remotes. Each instance follows the same hierarchy as any other to ease
> > the support by existing user-space tools.
> >
> > This currently does not provide any event support, which will come
> > later.
> >
> > Signed-off-by: Vincent Donnefort <vdonnef...@google.com>
> >
> > diff --git a/include/linux/trace_remote.h b/include/linux/trace_remote.h
> > new file mode 100644
> > index 000000000000..de043a6f2fe0
> > --- /dev/null
> > +++ b/include/linux/trace_remote.h
> > @@ -0,0 +1,21 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +#ifndef _LINUX_TRACE_REMOTE_H
> > +#define _LINUX_TRACE_REMOTE_H
> > +
> > +#include <linux/ring_buffer.h>
> > +
> > +struct trace_remote_callbacks {
> > +	struct trace_buffer_desc *
> > +	(*load_trace_buffer)(unsigned long size, void *priv);
>
> I believe this is one of those cases where the 80 char limit is more of a
> guideline than a rule. It looks better to keep the above as one line.
>
> > +	void (*unload_trace_buffer)(struct trace_buffer_desc *desc, void *priv);
>
> Heck, this is already past 80 characters ;-)
>
> > +	int (*enable_tracing)(bool enable, void *priv);
> > +	int (*swap_reader_page)(unsigned int cpu, void *priv);
> > +};
> > +
> > +int trace_remote_register(const char *name, struct trace_remote_callbacks *cbs, void *priv);
> > +int trace_remote_alloc_buffer(struct trace_buffer_desc *desc, size_t size,
> > +			      const struct cpumask *cpumask);
> > +void trace_remote_free_buffer(struct trace_buffer_desc *desc);
> > +
> > +#endif
> > diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> > index d2c79da81e4f..99af56d39eaf 100644
> > --- a/kernel/trace/Kconfig
> > +++ b/kernel/trace/Kconfig
> > @@ -1238,4 +1238,7 @@ config HIST_TRIGGERS_DEBUG
> >
> >
> > +
> > +int trace_remote_register(const char *name, struct trace_remote_callbacks *cbs, void *priv)
> > +{
> > +	struct trace_remote *remote;
> > +
> > +	remote = kzalloc(sizeof(*remote), GFP_KERNEL);
> > +	if (!remote)
> > +		return -ENOMEM;
> > +
> > +	remote->cbs = cbs;
> > +	remote->priv = priv;
> > +	remote->trace_buffer_size = 7 << 10;
> > +	remote->poll_ms = 100;
>
> What's with the magic numbers?
The 7KiB value can be modified with the tracefs file buffer_size_kb. The 100ms
can't be modified. I could either make a tracefs file or just a
trace_remote_set_poll_ms() function so it can be modified after registration.
In all cases, I'll add a TRACE_REMOTE_DEFAULT_XXXX definition at the start of
the file.

>
> > +	mutex_init(&remote->lock);
> > +
> > +	if (trace_remote_init_tracefs(name, remote)) {
> > +		kfree(remote);
> > +		return -ENOMEM;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +void trace_remote_free_buffer(struct trace_buffer_desc *desc)
> > +{
> > +	struct ring_buffer_desc *rb_desc;
> > +	int cpu;
> > +
> > +	for_each_ring_buffer_desc(rb_desc, cpu, desc) {
> > +		unsigned int id;
> > +
> > +		free_page(rb_desc->meta_va);
> > +
> > +		for (id = 0; id < rb_desc->nr_page_va; id++)
> > +			free_page(rb_desc->page_va[id]);
> > +	}
> > +}
> > +
> > +int trace_remote_alloc_buffer(struct trace_buffer_desc *desc, size_t size,
> > +			      const struct cpumask *cpumask)
> > +{
> > +	int nr_pages = (PAGE_ALIGN(size) / PAGE_SIZE) + 1;
> > +	struct ring_buffer_desc *rb_desc;
> > +	int cpu;
> > +
> > +	desc->nr_cpus = 0;
> > +	desc->struct_len = offsetof(struct trace_buffer_desc, __data);
>
> The above is better as:
>
> 	desc->struct_len = struct_size(desc, __data, 0);
>
> As it also does some other checks, like making sure __data is a flexible
> array.
>
> > +
> > +	rb_desc = (struct ring_buffer_desc *)&desc->__data[0];
> > +
> > +	for_each_cpu(cpu, cpumask) {
> > +		unsigned int id;
> > +
> > +		rb_desc->cpu = cpu;
> > +		rb_desc->nr_page_va = 0;
> > +		rb_desc->meta_va = (unsigned long)__get_free_page(GFP_KERNEL);
> > +		if (!rb_desc->meta_va)
> > +			goto err;
> > +
> > +		for (id = 0; id < nr_pages; id++) {
> > +			rb_desc->page_va[id] = (unsigned long)__get_free_page(GFP_KERNEL);
> > +			if (!rb_desc->page_va[id])
>
> What exactly are these pages allocated for? Is this what the remote will
> use to write to? There should be more comments about how this is used.
Those are the actual ring-buffer data pages. I'll add a comment here.

>
> > +				goto err;
> > +
> > +			rb_desc->nr_page_va++;
> > +		}
> > +		desc->nr_cpus++;
> > +		desc->struct_len += offsetof(struct ring_buffer_desc, page_va);
> > +		desc->struct_len += sizeof(rb_desc->page_va[0]) * rb_desc->nr_page_va;
>
> Shouldn't the above be:
>
> 	desc->struct_len += struct_size(rb_desc, page_va, rb_desc->nr_page_va);
>
> ?

Yes, that'd look way better.

>
> > +		rb_desc = __next_ring_buffer_desc(rb_desc);
>
> Is there no check to make sure that the cpu mask matches what the rb_desc
> will have?

The function is filling rb_desc[], based on the cpumask input, so both will
match when returning from this function. It is then easy to handle the case
where some CPUs are not part of the cpumask. See remote_test_load(), where
for_each_ring_buffer_desc() iterates over all the CPUs from the trace_desc but
uses rb_desc->cpu. Is that what you meant?

>
> -- Steve
>
> > +	}
> > +
> > +	return 0;
> > +
> > +err:
> > +	trace_remote_free_buffer(desc);
> > +	return -ENOMEM;
> > +}
>