On Sat, Jul 22, 2017 at 4:09 PM, Namhyung Kim <namhy...@kernel.org> wrote:
> On Fri, Jul 21, 2017 at 01:02:50PM -0700, David Carrillo-Cisneros wrote:
>> On Fri, Jul 21, 2017 at 12:44 AM, Jiri Olsa <jo...@redhat.com> wrote:
>> > On Thu, Jul 20, 2017 at 10:11:57PM -0700, David Carrillo-Cisneros wrote:
>> >> Fixes a bug noted by Jiri in https://lkml.org/lkml/2017/6/13/755,
>> >> caused by commit d49dadea7862 ("perf tools: Make 'trace' or
>> >> 'trace_fields' sort key default for tracepoint events") not taking
>> >> into account that the evlist is empty in pipe mode.
>> >>
>> >> Before this patch, pipe mode would only show a bogus "100.00% N/A"
>> >> instead of the correct output, as follows:
>> >>
>> >> $ perf record -o - sleep 1 | perf report -i -
>> >> # To display the perf.data header info, please use
>> >> --header/--header-only options.
>> >> #
>> >> [ perf record: Woken up 1 times to write data ]
>> >> [ perf record: Captured and wrote 0.000 MB - ]
>> >> #
>> >> # Total Lost Samples: 0
>> >> #
>> >> # Samples: 8 of event 'cycles:ppH'
>> >> # Event count (approx.): 145658
>> >> #
>> >> # Overhead  Trace output
>> >> # ........  ............
>> >> #
>> >>   100.00%  N/A
>> >>
>> >> Correct output, after the patch:
>> >>
>> >> $ perf record -o - sleep 1 | perf report -i -
>> >> # To display the perf.data header info, please use
>> >> --header/--header-only options.
>> >> #
>> >> [ perf record: Woken up 1 times to write data ]
>> >> [ perf record: Captured and wrote 0.000 MB - ]
>> >> #
>> >> # Total Lost Samples: 0
>> >> #
>> >> # Samples: 8 of event 'cycles:ppH'
>> >> # Event count (approx.): 191331
>> >> #
>> >> # Overhead  Command  Shared Object      Symbol
>> >> # ........  .......  .................  .................................
>> >> #
>> >>    81.63%  sleep    libc-2.19.so       [.] _exit
>> >>    13.58%  sleep    ld-2.19.so         [.] do_lookup_x
>> >>     2.34%  sleep    [kernel.kallsyms]  [k] context_switch
>> >>     2.34%  sleep    libc-2.19.so       [.] __GI___libc_nanosleep
>> >>     0.11%  perf     [kernel.kallsyms]  [k] __intel_pmu_enable_a
>> >>
>> >
>> > I wonder if we could reinit the sort order once we know what
>> > events we have in the pipe, and recognize the tracepoint output
>> > properly:
>>
>> I see this as hard to do since, at any given point while traversing the
>> pipe's contents, the best we can do is guess that we've seen all event
>> types. Then we'd need to fall back and redo the output whenever a new
>> sample refutes our last guess.
>
> After reading the feature event, you could know the number of events, no?
True. The tool could also extract the events from the HEADER_EVENT_DESC
feature event and adjust the output accordingly. That would be a nice
feature to have.

Thanks,
David
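For illustration, the idea discussed above can be sketched as stream logic:
defer choosing the sort order until the in-stream event-description record
has been read, and fall back to generic sort keys if it never arrives. This
is a hypothetical Python sketch with invented record kinds (`EVENT_DESC`,
`SAMPLE`) and field names; it is not perf's actual C code or data layout.

```python
# Hypothetical sketch of deferred sort-order selection in pipe mode.
# Record kinds, payload shapes, and sort-key strings are invented here
# for illustration; they do not mirror perf's real internals.

def choose_sort_order(event_names):
    # Mirrors the intent of commit d49dadea7862: tracepoint events
    # default to 'trace' output, everything else to the usual keys.
    if event_names and all(n.startswith("tracepoint:") for n in event_names):
        return "trace"
    return "comm,dso,symbol"

def process_pipe(records):
    """Consume (kind, payload) records from a pipe-mode stream."""
    sort_order = None
    pending = []  # samples seen before the event list is known
    for kind, payload in records:
        if kind == "EVENT_DESC":
            # The feature event carries the event names, so the sort
            # order can be fixed up here instead of assumed up front.
            sort_order = choose_sort_order(payload)
        elif kind == "SAMPLE":
            pending.append(payload)
    if sort_order is None:
        # No event descriptions in the stream: use the generic keys
        # rather than the bogus tracepoint-style output.
        sort_order = "comm,dso,symbol"
    return sort_order, pending

stream = [
    ("EVENT_DESC", ["cycles:ppH"]),
    ("SAMPLE", {"comm": "sleep", "sym": "_exit"}),
]
order, samples = process_pipe(stream)
print(order)  # cycles:ppH is not a tracepoint, so generic sort keys
```

A real implementation would also have to replay buffered samples through the
newly chosen output setup, which is the fallback cost noted above.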