On Wed, 27 Mar 2024 16:50:57 +0800
Tio Zhang wrote:
> By doing this, we are able to filter tasks by tgid while we are
> tracing wakeup events by ebpf or other methods.
>
> For example, when we care about tracing a user space process (which has
> an uncertain number of LWPs, i.e., pids) to monitor
On Tue, 26 Mar 2024 09:16:33 -0700
Andrii Nakryiko wrote:
> > It's no different than lockdep. Test boxes should have it enabled, but
> > there's no reason to have this enabled in a production system.
> >
>
> I tend to agree with Steven here (which is why I sent this patch as it
> is), but I'm
On Tue, 26 Mar 2024 15:53:38 +0100
Arnd Bergmann wrote:
> -const char *
> +int
> ftrace_mod_address_lookup(unsigned long addr, unsigned long *size,
> unsigned long *off, char **modname, char *sym)
> {
> struct ftrace_mod_map *mod_map;
> - const char *ret = NULL;
> +
that it *does cause
overhead* with function tracing.
I believe we found pretty much all locations that were an issue, and we
should now just make it an option for developers.
It's no different than lockdep. Test boxes should have it enabled, but
there's no reason to have this enabled in a production
On Tue, 12 Mar 2024 13:42:28 +
Mark Rutland wrote:
> There are ways around that, but they're complicated and/or expensive, e.g.
>
> * Use a sequence of multiple patches, starting with replacing the JALR with an
> exception-generating instruction with a fixup handler, which is sort-of what
On Fri, 22 Mar 2024 00:28:05 +0900
Masami Hiramatsu (Google) wrote:
> On Fri, 22 Mar 2024 00:07:59 +0900
> Masami Hiramatsu (Google) wrote:
>
> > > What would be really useful is if we had a way to expose BTF here.
> > > Something like:
> > >
> > > "%pB::"
> > >
> > > The "%pB" would mean
On Fri, 22 Mar 2024 00:07:59 +0900
Masami Hiramatsu (Google) wrote:
> > What would be really useful is if we had a way to expose BTF here.
> > Something like:
> >
> > "%pB::"
> >
> > The "%pB" would mean to look up the struct/field offsets and types via BTF,
> > and create the appropriate
On Tue, 12 Mar 2024 13:42:28 +
Mark Rutland wrote:
> > It would be interesting to see how the per-call performance would
> > improve on x86 with CALL_OPS! ;-)
>
> Heh. ;)
But this would require adding -fpatchable-function-entry on x86, which
would increase the size of text, which could
On Wed, 20 Mar 2024 21:29:20 +0800
Ye Bin wrote:
> Support the print type '%pd' for printing a dentry's name.
>
The above is not a very detailed change log. A change log should state not
only what the change is doing, but also why.
Having examples of before and after would be useful in the change log.
On Thu, 21 Mar 2024 10:45:00 +0800
Jason Xing wrote:
> The format of the whole patch looks strange... Did you send this patch
> by using 'git send-email' instead of pasting the text and sending?
Yeah, it's uuencoded.
Subject:
On Wed, 20 Mar 2024 20:46:11 -0400
Waiman Long wrote:
> I have no objection to that. However, there are now 2 function call
> overheads in each iteration if either CONFIG_IRQSOFF_TRACER or
> CONFIG_PREEMPT_TRACER is on. Is it possible to do it with just one
> function call? IOW, make
On Wed, 20 Mar 2024 13:15:39 -0400
Mathieu Desnoyers wrote:
> > I would like to introduce restart_critical_timings() and place it in
> > locations that have this behavior.
>
> Is there any way you could move this to need_resched() rather than
> sprinkle those everywhere ?
Because
From: Steven Rostedt (Google)
I'm debugging some latency issues on a Chromebook and the preemptirqsoff
tracer hit this:
# tracer: preemptirqsoff
#
# preemptirqsoff latency trace v1.1.5 on 5.15.148-21853-g165fd2387469-dirty
On Wed, 20 Mar 2024 13:41:12 +0100
Daniel Bristot de Oliveira wrote:
> On 3/20/24 00:02, Steven Rostedt wrote:
> > On Mon, 18 Mar 2024 18:41:13 +0100
> > Daniel Bristot de Oliveira wrote:
> >
> >> Steven,
> >>
> >> Tracing tooling updates
On Wed, 20 Mar 2024 17:10:38 +0900
"Masami Hiramatsu (Google)" wrote:
> From: Masami Hiramatsu (Google)
>
> Fix to initialize 'val' local variable with zero.
> Dan reported that Smatch static code checker reports an error that a local
> 'val' variable needs to be initialized. Actually, the
Fixes: 25f00e40ce79 ("tracing/probes: Support $argN in return probe (kprobe
> and fprobe)")
> Signed-off-by: Masami Hiramatsu (Google)
> ---
Reviewed-by: Steven Rostedt (Google)
-- Steve
> kernel/trace/trace_probe.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
On Wed, 20 Mar 2024 12:44:23 +0900
Masami Hiramatsu (Google) wrote:
> > > kernel/trace/trace_probe.c
> > > 846 return;
> > > 847
> > > 848 for (i = 0; i < earg->size; i++) {
> > > 849 struct fetch_insn *code = &earg->code[i];
> > > 850
> >
On Sat, 9 Mar 2024 12:40:51 -0800
Kees Cook wrote:
> The part I'd like to get wired up sanely is having pstore find the
> nvdimm area automatically, but it never quite happened:
> https://lore.kernel.org/lkml/CAGXu5jLtmb3qinZnX3rScUJLUFdf+pRDVPjy=cs4kutw9tl...@mail.gmail.com/
The automatic
On Tue, 19 Mar 2024 17:30:41 -0700
Justin Stitt wrote:
> > diff --git a/include/trace/stages/stage6_event_callback.h
> > b/include/trace/stages/stage6_event_callback.h
> > index 83da83a0c14f..56a4eea5a48e 100644
> > --- a/include/trace/stages/stage6_event_callback.h
> > +++
On Mon, 18 Mar 2024 18:41:13 +0100
Daniel Bristot de Oliveira wrote:
> Steven,
>
> Tracing tooling updates for 6.9
>
> Tracing:
> - Update makefiles for latency-collector and RTLA,
> using tools/build/ makefiles like perf does, inheriting
> its benefits. For
On Tue, 19 Mar 2024 09:07:51 -0700
Nathan Chancellor wrote:
> Hi all,
>
> This series fully resolves the new instance of -Wstring-compare from
> within the __assign_str() macro. The first patch resolves a build
> failure with GCC that would be seen with just the second patch applied.
> The
From: "Steven Rostedt (Google)"
As __assign_str() no longer uses its "src" parameter, there's a check to
make sure nothing depends on it being different than what was passed to
__string(). It originally just compared the pointer passed to __string()
with the pointer pass
On Tue, 19 Mar 2024 20:13:52 +0800 (CST)
wrote:
> From: Peilin He
>
> Introduce a tracepoint for icmp_send, which can help users get more
> detailed information conveniently when abnormal ICMP events happen.
>
> 1. Giving a use case example:
> =
> When an
On Tue, 19 Mar 2024 10:19:09 +0300
Dan Carpenter wrote:
> Hello Masami Hiramatsu (Google),
>
> Commit 25f00e40ce79 ("tracing/probes: Support $argN in return probe
> (kprobe and fprobe)") from Mar 4, 2024 (linux-next), leads to the
> following Smatch static checker warning:
>
>
On Mon, 18 Mar 2024 16:43:07 +0100
Luca Ceresoli wrote:
> Indeed I was on an older version, apologies.
>
> I upgraded both libtraceevent and trace-cmd to master and applied your
> patch, now the %c is formatted correctly.
>
> However the arrows are still reversed.
>
> Is this what you were
On Fri, 15 Mar 2024 19:03:12 +0100
Luca Ceresoli wrote:
> > >
> > > I've come across an unexpected behaviour in the kernel tracing
> > > infrastructure that looks like a bug, or maybe two.
> > >
> > > Cc-ing ASoC maintainers as it appeared using ASoC traces, but it
> > > does not look
On Fri, 15 Mar 2024 17:49:00 +0100
Luca Ceresoli wrote:
> Hello Linux tracing maintainers,
Hi Luca!
>
> I've come across an unexpected behaviour in the kernel tracing
> infrastructure that looks like a bug, or maybe two.
>
> Cc-ing ASoC maintainers as it appeared using ASoC traces, but
From: "Steven Rostedt (Google)"
The default behavior of ring_buffer_wait() when passed a NULL "cond"
parameter is to exit the function the first time it is woken up. The
current implementation uses a counter that starts at zero and when it is
great
From: "Steven Rostedt (Google)"
The __string() helper macro of the TRACE_EVENT() macro is used to
determine how much of the ring buffer needs to be allocated to fit the
given source string. Some trace events have a string that is dependent on
another variable that could be NULL, an
On Thu, 14 Mar 2024 09:57:57 -0700
Alison Schofield wrote:
> On Fri, Feb 23, 2024 at 12:56:34PM -0500, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > [
> >This is a treewide change. I will likely re-create this patch again in
>
On Thu, 14 Mar 2024 15:39:28 +0100
Paolo Abeni wrote:
> On Wed, 2024-03-13 at 09:34 -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > [
> >Note, I need to take this patch through my tree, so I'm looking for
> > acks.
&g
On Wed, 13 Mar 2024 13:45:50 -0400
Steven Rostedt wrote:
> Let me test to make sure that when src is a string "like this" it does
> the strcmp(). Otherwise, we may have to always do the strcmp(), which I
> really would like to avoid.
I added the below patch and e
On Wed, 13 Mar 2024 09:59:03 -0700
Nathan Chancellor wrote:
> > Reported-by: kernel test robot
> > Closes:
> > https://lore.kernel.org/oe-kbuild-all/202402292111.kidexylu-...@intel.com/
> > Fixes: 433e1d88a3be ("tracing: Add warning if string in __assign_str() does
> > not match __string()")
From: "Steven Rostedt (Google)"
[
Note, I need to take this patch through my tree, so I'm looking for acks.
This causes the build to fail when I add the __assign_str() check, which
I was about to push to Linus, but it breaks allmodconfig due to this error.
]
Th
From: "Steven Rostedt (Google)"
While testing libtracefs on the mmapped ring buffer, the test that checks
if missed events are accounted for failed when using the mapped buffer.
This is because the mapped page does not update the missed events that
were dropped because the writer
From: "Steven Rostedt (Google)"
The rb_watermark_hit() checks if the amount of data in the ring buffer is
above the percentage level passed in by the "full" variable. If it is, it
returns true.
But it also sets the "shortest_full" field of the cpu_buffer that
On Wed, 13 Mar 2024 00:38:42 +0900
Masami Hiramatsu (Google) wrote:
> On Tue, 12 Mar 2024 09:19:21 -0400
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > The check for knowing if the poll should wait or not is basically the
>
On Wed, 13 Mar 2024 00:22:10 +0900
Masami Hiramatsu (Google) wrote:
> On Tue, 12 Mar 2024 09:19:20 -0400
> Steven Rostedt wrote:
>
> > From: "Steven Rostedt (Google)"
> >
> > If a reader of the ring buffer is doing a poll, and waiting for the ring
&
From: "Steven Rostedt (Google)"
The WARN_ON() check in __assign_str() to catch where the source variable
to the macro doesn't match the source variable to __string() gives an
error in clang:
>> include/trace/events/sunrpc.h:703:4: warning: result of comparison against a
&
From: "Steven Rostedt (Google)"
If a reader of the ring buffer is doing a poll, and waiting for the ring
buffer to hit a specific watermark, there could be a case where it gets
into an infinite ping-pong loop.
The poll code has:
rbwork->full_waiters_pending = true;
if
nd !full wakeups. But since poll uses the same logic for
full wakeups it can just call that function with full set.
Changes since v1:
https://lore.kernel.org/all/20240312115455.666920...@goodmis.org/
- Removed unused 'flags' in ring_buffer_poll_wait() as the spin_lock
is now in rb_watermark_hit().
Steve
From: "Steven Rostedt (Google)"
The check for knowing if the poll should wait or not is basically the
exact same logic as rb_watermark_hit(). The only difference is that
rb_watermark_hit() also handles the !full case. But for the full case, the
logic is the same. Just call th
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
https://lore.kernel.org/lkml/20240308183816.676883...@goodmis.org/
- My tests triggered a warning about calling a mutex_lock() after a
prepare_to_wait() that changed the task's state. Convert the affected
mutex over to a spinlock.
Steven Rostedt (Google) (2):
ring-buffer: Use wait_even
From: "Steven Rostedt (Google)"
Convert ring_buffer_wait() over to wait_event_interruptible(). The default
condition is to execute the wait loop inside __wait_event() just once.
This does not change the ring_buffer_wait() prototype yet, but
restructures the code so that it can ta
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
mutex_lock() after a
prepare_to_wait() that changed the task's state. Convert the affected
mutex over to a spinlock.
Steven Rostedt (Google) (2):
ring-buffer: Use wait_event_interruptible() in ring_buffer_wait()
tracing/ring-buffer: Fix wait_on_pipe() race
include/linux/
From: "Steven Rostedt (Google)"
Convert ring_buffer_wait() over to wait_event_interruptible(). The default
condition is to execute the wait loop inside __wait_event() just once.
This does not change the ring_buffer_wait() prototype yet, but
restructures the code so that it can ta
From: "Steven Rostedt (Google)"
The check for knowing if the poll should wait or not is basically the
exact same logic as rb_watermark_hit(). The only difference is that
rb_watermark_hit() also handles the !full case. But for the full case, the
logic is the same. Just call th
From: "Steven Rostedt (Google)"
If a reader of the ring buffer is doing a poll, and waiting for the ring
buffer to hit a specific watermark, there could be a case where it gets
into an infinite ping-pong loop.
The poll code has:
rbwork->full_waiters_pending = true;
if
nd !full wakeups. But since poll uses the same logic for
full wakeups it can just call that function with full set.
Steven Rostedt (Google) (2):
ring-buffer: Fix full_waiters_pending in poll
ring-buffer: Reuse rb_watermark_hit() for the poll logic
kernel/trace/ring_buffer.c | 30 +++---
1 file changed, 19 insertions(+), 11 deletions(-)
On Fri, 8 Mar 2024 13:41:59 -0800
Linus Torvalds wrote:
> On Fri, 8 Mar 2024 at 13:39, Linus Torvalds
> wrote:
> >
> > So the above "complexity" is *literally* just changing the
> >
> > (new = atomic_read_acquire(>seq)) != old
> >
> > condition to
> >
> >
On Sat, 9 Mar 2024 10:27:47 -0800
Kees Cook wrote:
> On Tue, Mar 05, 2024 at 08:59:10PM -0500, Steven Rostedt wrote:
> > This is a way to map a ring buffer instance across reboots.
>
> As mentioned on Fedi, check out the persistent storage subsystem
> (pstore)[1]. It alread
On Fri, 8 Mar 2024 12:39:10 -0800
Linus Torvalds wrote:
> On Fri, 8 Mar 2024 at 10:38, Steven Rostedt wrote:
> >
> > A patch was sent to "fix" the wait_index variable that is used to help with
> > waking of waiters on the ring buffer. The patch was reje
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
From: "Steven Rostedt (Google)"
The ring_buffer_wait() needs to be broken into three functions for proper
synchronization from the context of the callers:
ring_buffer_prepare_to_wait()
ring_buffer_wait()
ring_buffer_finish_wait()
To simplify the process, pull out the logic f
From: "Steven Rostedt (Google)"
When the tracing_pipe_raw file is closed, if there are readers still
blocked on it, they need to be woken up. Currently a wait_index is used.
When the readers need to be woken, the index is updated and they are all
woken up.
But there is a race where a
From: "Steven Rostedt (Google)"
The .release() function does not get called until all readers of a file
descriptor are finished.
If a thread is blocked on reading a file descriptor in ring_buffer_wait(),
and another thread closes the file descriptor, it will not wake up the
ot
From: "Steven Rostedt (Google)"
The "shortest_full" variable is used to keep track of the waiter that is
waiting for the smallest amount on the ring buffer before being woken up.
When a task waits on the ring buffer, it passes in a "full" value that is
a percentag
From: "Steven Rostedt (Google)"
A task can wait on a ring buffer for when it fills up to a specific
watermark. The writer will check the minimum watermark that waiters are
waiting for and if the ring buffer is past that, it will wake up all the
waiters.
The waiters are in a
r a
prepare_to_wait() that changed the task's state. Convert the affected
mutex over to a spinlock.
Steven Rostedt (Google) (6):
ring-buffer: Fix waking up ring buffer readers
ring-buffer: Fix resetting of shortest_full
tracing: Use .flush() call to wake up readers
tra
On Fri, 08 Mar 2024 13:38:20 -0500
Steven Rostedt wrote:
> +static DEFINE_MUTEX(wait_mutex);
> +
> +static bool wait_woken_prepare(struct trace_iterator *iter, int *wait_index)
> +{
> + bool woken = false;
> +
> + mutex_lock(&wait_mutex);
> + if (iter->waking)
&
From: "Steven Rostedt (Google)"
The ring_buffer_wait() needs to be broken into three functions for proper
synchronization from the context of the callers:
ring_buffer_prepare_to_wait()
ring_buffer_wait()
ring_buffer_finish_wait()
To simplify the process, pull out the logic f
From: "Steven Rostedt (Google)"
When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.
CPU 0
From: "Steven Rostedt (Google)"
The .release() function does not get called until all readers of a file
descriptor are finished.
If a thread is blocked on reading a file descriptor in ring_buffer_wait(),
and another thread closes the file descriptor, it will not wake up the
ot
From: "Steven Rostedt (Google)"
The "shortest_full" variable is used to keep track of the waiter that is
waiting for the smallest amount on the ring buffer before being woken up.
When a task waits on the ring buffer, it passes in a "full" value that is
a percentag
From: "Steven Rostedt (Google)"
When the tracing_pipe_raw file is closed, if there are readers still
blocked on it, they need to be woken up. Currently a wait_index is used.
When the readers need to be woken, the index is updated and they are all
woken up.
But there is a race where a
From: "Steven Rostedt (Google)"
A task can wait on a ring buffer for when it fills up to a specific
watermark. The writer will check the minimum watermark that waiters are
waiting for and if the ring buffer is past that, it will wake up all the
waiters.
The waiters are in a
if its own condition has been set (in this case: iter->waking)
and then sleep. Follows the same semantics as any other wait logic.
Steven Rostedt (Google) (6):
ring-buffer: Fix waking up ring buffer readers
ring-buffer: Fix resetting of shortest_full
tracing: Use .flush()
> Signed-off-by: Kassey Li
> ---
> Changelog:
> v1:
> https://lore.kernel.org/all/20240308010929.1955339-1-quic_yinga...@quicinc.com/
> v1->v2:
> - do not follow checkpatch in TRACE_EVENT() macros
> - add sample "workqueue_activate_work: work struct ff80413a78b
On Fri, 8 Mar 2024 09:09:29 +0800
Kassey Li wrote:
> The trace event "workqueue_activate_work" only prints the work struct.
> However, the function is the region of interest in a full sequence of work.
> Current workqueue_activate_work trace event output:
>
> workqueue_activate_work: work struct
On Wed, 6 Mar 2024 10:55:34 +0800
linke li wrote:
> Mark data races to work->wait_index as benign using READ_ONCE and WRITE_ONCE.
> These accesses are expected to be racy.
Are we now to the point that every single access of a variable (long size
or less) needs a READ_ONCE/WRITE_ONCE even with
I forgot to add [POC] to the topic.
All these patches are a proof of concept.
-- Steve
From: "Steven Rostedt (Google)"
Make sure all the events in each of the sub-buffers that were mapped in a
memory region are valid. This moves the code that walks the buffers for
time-stamp validation out of the CONFIG_RING_BUFFER_VALIDATE_TIME_DELTAS
ifdef block and is used t
From: "Steven Rostedt (Google)"
Add a test against the ring buffer memory range to see if it has valid
data. The ring_buffer_meta structure is given a new field called
"first_buffer" which holds the address of the first sub-buffer. This is
used to both determine if the ot
From: "Steven Rostedt (Google)"
Populate the ring_buffer_meta array. It holds the pointer to the
head_buffer (next to read), the commit_buffer (next to write) the size of
the sub-buffers, number of sub-buffers and an array that keeps track of
the order of the sub-buffers.
This i
From: "Steven Rostedt (Google)"
Add a buffer_meta per-cpu file for the trace instance that is mapped to
boot memory. This shows the current meta-data and can be used by user
space tools to record off the current mappings to help reconstruct the
ring buffer after a reboot.
It does not
From: "Steven Rostedt (Google)"
Add two global variables trace_buffer_start and trace_buffer_size. If they
are both set, then a "boot_mapped" instance will be created using the
memory specified by these variables as its ring buffer.
The instance will exist in:
/sys/kern
From: "Steven Rostedt (Google)"
Do not submit!
This is for testing purposes only. It hard codes an address that I was
using to store the ring buffer range. How the memory actually gets mapped
will be another project.
Signed-off-by: Steven Rostedt (Google)
---
arch/x86/kernel/se
From: "Steven Rostedt (Google)"
In preparation to allowing the trace ring buffer to be allocated in a
range of memory that is persistent across reboots, add
ring_buffer_alloc_range(). It takes a contiguous range of memory and will
split it up evening for the per CPU ring buffers.
trace
and it will have the trace.
I'm sure there's still some gotchas here, which is why this is currently
still just a POC.
Enjoy...
Steven Rostedt (Google) (8):
ring-buffer: Allow mapped field to be set without mapping
ring-buffer: Add ring_buffer_alloc_range()
traci
From: "Steven Rostedt (Google)"
In preparation for having the ring buffer mapped to a dedicated location,
which will have the same restrictions as user space memory mapped buffers,
allow it to use the "mapped" field of the ring_buffer_per_cpu structure
without having the
From: "Steven Rostedt (Google)"
Limit the max print event of trace_marker to just 4K string size. This must
also be less than the amount that can be held by a trace_seq along with
the text that is before the output (like the task name, PID, CPU, state,
etc). As trace_seq is made to ha
On Mon, 4 Mar 2024 21:48:44 -0500
Mathieu Desnoyers wrote:
> On 2024-03-04 21:37, Steven Rostedt wrote:
> > On Mon, 4 Mar 2024 21:35:38 -0500
> > Steven Rostedt wrote:
> >
> >>> And it's not for debugging, it's for validation of assumptions
> >>
On Mon, 4 Mar 2024 21:35:38 -0500
Steven Rostedt wrote:
> > And it's not for debugging, it's for validation of assumptions
> > made about an upper bound limit defined for a compile-time
> > check, so as the code evolves issues are caught early.
>
> validating is debug
On Mon, 4 Mar 2024 21:18:13 -0500
Mathieu Desnoyers wrote:
> On 2024-03-04 20:59, Steven Rostedt wrote:
> > On Mon, 4 Mar 2024 20:42:39 -0500
> > Mathieu Desnoyers wrote:
> >
> >> #define TRACE_OUTPUT_META_DATA_MAX_LEN 80
> >>
> >
On Mon, 4 Mar 2024 20:42:39 -0500
Mathieu Desnoyers wrote:
> #define TRACE_OUTPUT_META_DATA_MAX_LEN 80
>
> and a runtime check in the code generating this header.
>
> This would avoid adding an unchecked upper limit.
That would be a lot of complex code that is for debugging
On Mon, 4 Mar 2024 20:36:28 -0500
Mathieu Desnoyers wrote:
> > <...>-999 [001] . 2296.140373: tracing_mark_write:
> > hello
> > ^^^
> > This is the meta data that is added to trace_seq
>
> If this
On Mon, 4 Mar 2024 20:35:16 -0500
Steven Rostedt wrote:
> > BUILD_BUG_ON(TRACING_MARK_MAX_SIZE + sizeof(meta data stuff...) >
> > TRACE_SEQ_SIZE);
>
> That's not the meta size I'm worried about. The sizeof(meta data) is the
> raw event binary data, which is
On Mon, 4 Mar 2024 20:15:57 -0500
Mathieu Desnoyers wrote:
> On 2024-03-04 19:27, Steven Rostedt wrote:
> > From: "Steven Rostedt (Google)"
> >
> > Since the size of trace_seq's buffer is the max an event can output, have
> > the trace_marker be half of t
On Mon, 4 Mar 2024 16:43:46 -0800
Randy Dunlap wrote:
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 8198bfc54b58..d68544aef65f 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -7320,6 +7320,17 @@ tracing_mark_write(struct file *filp, const char
From: "Steven Rostedt (Google)"
Since the size of trace_seq's buffer is the max an event can output, have
the trace_marker be half of the entire TRACE_SEQ_SIZE, which is 4K. That
will keep writes that has meta data written from being dropped (but
reported), because the total output of
On Mon, 4 Mar 2024 18:55:00 -0500
Steven Rostedt wrote:
> On Mon, 4 Mar 2024 18:23:41 -0500
> Mathieu Desnoyers wrote:
>
> > It appears to currently be limited by
> >
> > #define TRACE_SEQ_BUFFER_SIZE (PAGE_SIZE * 2 - \
> > (sizeof(struct seq_buf) + sizeof(size_t) + sizeof(int)))
From: "Steven Rostedt (Google)"
The trace_seq buffer is used to print out entire events. It's typically
set to PAGE_SIZE * 2 as there's some events that can be quite large.
As a side effect, writes to trace_marker is limited by both the size of the
trace_seq buffer as well as the rin
On Mon, 4 Mar 2024 18:23:41 -0500
Mathieu Desnoyers wrote:
> It appears to currently be limited by
>
> #define TRACE_SEQ_BUFFER_SIZE (PAGE_SIZE * 2 - \
> (sizeof(struct seq_buf) + sizeof(size_t) + sizeof(int)))
>
> checked within tracing_mark_write().
Yeah, I can hard code this to
From: "Steven Rostedt (Google)"
This reverts 60be76eeabb3d ("tracing: Add size check when printing
trace_marker output"). The only reason the precision check was added
was because of a bug that miscalculated the write size of the string into
the ring buffer and it t
On Fri, 1 Mar 2024 12:25:10 -0800
"Paul E. McKenney" wrote:
> > That would work for me. If there are no objections, I will make this
> > change.
>
> But I did check the latency of synchronize_rcu_tasks_rude() (about 100ms)
> and synchronize_rcu() (about 20ms). This is on a
On Fri, 1 Mar 2024 11:37:54 -0500
Mathieu Desnoyers wrote:
> On 2024-03-01 10:49, Steven Rostedt wrote:
> > On Fri, 1 Mar 2024 13:37:18 +0800
> > linke wrote:
> >
> >>> So basically you are worried about read-tearing?
> >>>
> >>>
On Fri, 1 Mar 2024 13:37:18 +0800
linke wrote:
> > So basically you are worried about read-tearing?
> >
> > That wasn't mentioned in the change log.
>
> Yes. Sorry for making this confused, I am not very familiar with this and
> still learning.
No problem. We all have to learn this anyway.
On Wed, 31 Jan 2024 14:47:31 +
David Howells wrote:
> Hi Steven,
Hi David,
Sorry, I just noticed this email as it was buried in other unread emails :-p
>
> I have a tracepoint in AF_RXRPC that displays information about a timeout I'm
> going to set. I have the timeout in a ktime_t as an