On 2021-02-10 05:34, Steven Rostedt wrote:
On Tue, 9 Feb 2021 14:55:31 -0500
Steven Rostedt <rost...@goodmis.org> wrote:

> [for-next][PATCH 2/2] tracing: Use temp buffer when filtering events
> https://lore.kernel.org/lkml/f16b14066317f6a926b6636df6974...@codeaurora.org/

Note, that is only used when filtering happens, which doesn't appear to be
the case here.

I was basing this off of the original commands, but the stack dump says
otherwise. But it should still work.
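
To make the "only used when filtering" part concrete, here is a toy model of that decision (purely illustrative user-space code, not the kernel's; the flag name and bit value are placeholders): the per-CPU temp buffer is only picked when the event file is being filtered, otherwise space is reserved in the ring buffer directly.

#include <stdio.h>

/* Placeholder flag bit; the real EVENT_FILE_FL_* values live in the kernel. */
#define FL_FILTERED_EXAMPLE (1u << 3)

/* Toy model: filtered event files go through the per-CPU temp buffer first. */
static const char *pick_buffer(unsigned int file_flags)
{
        if (file_flags & FL_FILTERED_EXAMPLE)
                return "per-CPU temp buffer (one page)";
        return "direct ring buffer reservation";
}

int main(void)
{
        printf("unfiltered -> %s\n", pick_buffer(0x0));
        printf("filtered   -> %s\n", pick_buffer(FL_FILTERED_EXAMPLE));
        return 0;
}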


>
> It seems like we should still try to get that fixed somehow, even if
> the below change is fine on its own (it probably doesn't make sense to
> send such a large amount of data via tracepoints). It would be unfortunate
> for the next poor soul to hit the same issues, just because they wanted to
> dump a few KB.

Yeah, it was a design decision to cap the max size of events to just under
PAGE_SIZE. The ring buffer is broken up into pages (for zero copy
transfers to file systems and the network). Thus, no event is allowed to be
bigger than a page (in fact, it has to be a bit smaller than a page, since
page and event headers take up some of that space).
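
As a rough back-of-the-envelope sketch of that cap (illustrative only; the macro names are made up, and the 4 KiB page size and 64-byte header allowance are assumptions, not the kernel's real numbers):

#include <stdio.h>

#define EXAMPLE_PAGE_SIZE   4096u  /* assumed 4 KiB pages */
#define EXAMPLE_HEADER_ROOM   64u  /* made-up allowance for page/event headers */

/* Largest payload that could ever fit in one ring buffer page (illustrative). */
#define EXAMPLE_MAX_EVENT (EXAMPLE_PAGE_SIZE - EXAMPLE_HEADER_ROOM)

static int event_fits(unsigned int len)
{
        return len <= EXAMPLE_MAX_EVENT;
}

int main(void)
{
        printf("max event payload: %u bytes\n", EXAMPLE_MAX_EVENT);
        printf("46680-byte event fits? %s\n",
               event_fits(46680) ? "yes" : "no (recording fails)");
        return 0;
}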

That said, it shouldn't have crashed, it should have just failed to record.

I'll test it out and see why it crashed.

Looking at the original report, I see:

 CPU: 1 PID: 141 Comm: kworker/u16:1 Not tainted 4.19.139 #162

Does this still crash on the latest kernel?

-- Steve
Not tested with the latest kernel.
The reason is what I described in https://lore.kernel.org/lkml/b504b3d7e989cae108669a0cd3072...@codeaurora.org/:

The per-CPU buffer appears to be initialized in trace_buffered_event_enable(),
and it is only one page in size, as shown below:
void trace_buffered_event_enable(void)
{
...
        for_each_tracing_cpu(cpu) {
                /* order 0: a single page is allocated for each CPU */
                page = alloc_pages_node(cpu_to_node(cpu),
                                        GFP_KERNEL | __GFP_NORETRY, 0);
If the size of the buffer to trace is more than one page, such as 46680
bytes, then it triggers a kernel crash/panic in my case while running
trace-cmd.
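
To put numbers on that (a small illustrative sketch, assuming 4 KiB pages; an order-0 allocation is 2^0 = 1 page): a 46680-byte event is over eleven pages, so copying it into the single-page per-CPU buffer without a length check writes far past the end of the allocation.

#include <stdio.h>

int main(void)
{
        const unsigned int page_size = 4096;  /* assumed 4 KiB pages */
        const unsigned int order = 0;         /* as in the alloc_pages_node() call above */
        const unsigned int buf_size = page_size << order;  /* 2^order pages = 4096 bytes */
        const unsigned int event_len = 46680; /* the event size from this report */

        if (event_len > buf_size)
                printf("a %u-byte event overruns the %u-byte buffer by %u bytes\n",
                       event_len, buf_size, event_len - buf_size);
        return 0;
}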
After debugging, I found that trace_file->flags in
trace_event_buffer_lock_reserve() is 0x40b while running trace-cmd, and
0x403 while collecting the ftrace log.
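
For what it's worth, the only bit that differs between those two values is 0x8 (0x40b ^ 0x403 == 0x8). If that bit is the "filtered" flag (an assumption on my part; the exact EVENT_FILE_FL_* bit positions depend on the kernel version), it matches the earlier point that the temp buffer path is only taken when filtering is enabled. A tiny sketch of the arithmetic:

#include <stdio.h>

int main(void)
{
        const unsigned int flags_trace_cmd = 0x40b; /* observed while running trace-cmd */
        const unsigned int flags_ftrace    = 0x403; /* observed while collecting the ftrace log */
        const unsigned int diff = flags_trace_cmd ^ flags_ftrace;

        /* Assumption: bit 3 (0x8) is the "filtered" flag; verify against your kernel's headers. */
        printf("differing bits: 0x%x\n", diff);
        printf("trace-cmd run sets the assumed filtered bit: %s\n",
               (flags_trace_cmd & 0x8) ? "yes" : "no");
        return 0;
}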

