Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-21 Thread Arnaldo Carvalho de Melo
Em Wed, Jun 21, 2017 at 04:19:11PM +0200, Milian Wolff escreveu:
> On Mittwoch, 21. Juni 2017 14:48:29 CEST Arnaldo Carvalho de Melo wrote:
> > Em Wed, Jun 21, 2017 at 10:16:56AM +0200, Milian Wolff escreveu:
> > > On Mittwoch, 21. Juni 2017 03:07:39 CEST Arnaldo Carvalho de Melo wrote:
> > > > Hi Milian, can I take this as an Acked-by or Tested-by?
> > > 
> > > I have no access to any PowerPC hardware. In principle the code looks
> > > fine, but that's all I can say here.
> > 
> > Ok, that would count as an Acked-by, i.e. from
> > Documentation/process/submitting-patches.rst:
> > 
> > -
> > 
> > Acked-by: is not as formal as Signed-off-by:.  It is a record that the acker
> > has at least reviewed the patch and has indicated acceptance.  Hence patch
> > mergers will sometimes manually convert an acker's "yep, looks good to me"
> > into an Acked-by: (but note that it is usually better to ask for an
> > explicit ack).
> > 
> > -
> > 
> > If you had a ppc machine _and_ had applied and tested the patch, that
> > would allow us to use a Tested-by tag.
> 
> I see, I'm still unfamiliar with this process. But yes, do consider it an 
> `Acked-by` from my side then.

Right, then there is another tag there that is relevant to this
discussion:

Link: 
http://lkml.kernel.org/r/1496312681-20133-1-git-send-email-pbonz...@redhat.com

which has the Message-ID of the message with this patch embedded in a
URL that, when clicked, will bring you to the thread where the patch was
submitted and where the acks, tested-by, reviewed-by, etc. were
provided, so that we can go back and check the history of the patch.
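For illustration, the tag block of a merged commit carrying the tags
discussed in this thread might look like the following (a hypothetical
sketch; the names, addresses and Message-ID are placeholders, not the
actual ones for this patch):

```
    perf: libdw support for powerpc

    <patch description>

    Signed-off-by: Author Name <author@example.com>
    Acked-by: Reviewer Name <reviewer@example.com>
    Tested-by: Tester Name <tester@example.com>
    Link: http://lkml.kernel.org/r/<Message-ID-of-the-submission>
```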

- Arnaldo


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-21 Thread Milian Wolff
On Mittwoch, 21. Juni 2017 14:48:29 CEST Arnaldo Carvalho de Melo wrote:
> Em Wed, Jun 21, 2017 at 10:16:56AM +0200, Milian Wolff escreveu:
> > On Mittwoch, 21. Juni 2017 03:07:39 CEST Arnaldo Carvalho de Melo wrote:
> > > Hi Milian, can I take this as an Acked-by or Tested-by?
> > 
> > I have no access to any PowerPC hardware. In principle the code looks
> > fine, but that's all I can say here.
> 
> Ok, that would count as an Acked-by, i.e. from
> Documentation/process/submitting-patches.rst:
> 
> -
> 
> Acked-by: is not as formal as Signed-off-by:.  It is a record that the acker
> has at least reviewed the patch and has indicated acceptance.  Hence patch
> mergers will sometimes manually convert an acker's "yep, looks good to me"
> into an Acked-by: (but note that it is usually better to ask for an
> explicit ack).
> 
> -
> 
> If you had a ppc machine _and_ had applied and tested the patch, that
> would allow us to use a Tested-by tag.

I see, I'm still unfamiliar with this process. But yes, do consider it an 
`Acked-by` from my side then.

Cheers

-- 
Milian Wolff | milian.wo...@kdab.com | Senior Software Engineer
KDAB (Deutschland) GmbH KG, a KDAB Group company
Tel: +49-30-521325470
KDAB - The Qt Experts


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-21 Thread Arnaldo Carvalho de Melo
Em Wed, Jun 21, 2017 at 10:16:56AM +0200, Milian Wolff escreveu:
> On Mittwoch, 21. Juni 2017 03:07:39 CEST Arnaldo Carvalho de Melo wrote:
> > Hi Milian, can I take this as an Acked-by or Tested-by?
 
> I have no access to any PowerPC hardware. In principle the code looks
> fine, but that's all I can say here.

Ok, that would count as an Acked-by, i.e. from
Documentation/process/submitting-patches.rst:

-

Acked-by: is not as formal as Signed-off-by:.  It is a record that the acker
has at least reviewed the patch and has indicated acceptance.  Hence patch
mergers will sometimes manually convert an acker's "yep, looks good to me"
into an Acked-by: (but note that it is usually better to ask for an
explicit ack).

-

If you had a ppc machine _and_ had applied and tested the patch, that
would allow us to use a Tested-by tag.

Ok?

- Arnaldo


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-21 Thread Milian Wolff
On Mittwoch, 21. Juni 2017 03:07:39 CEST Arnaldo Carvalho de Melo wrote:
> Em Thu, Jun 15, 2017 at 10:46:16AM +0200, Milian Wolff escreveu:
> > On Tuesday, June 13, 2017 5:55:09 PM CEST Ravi Bangoria wrote:
> > Just a quick question: Have you guys applied my recent patch:
> > 
> > commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
> > Author: Milian Wolff 
> > Date:   Thu Jun 1 23:00:21 2017 +0200
> > 
> > perf report: Include partial stacks unwound with libdw
> > 
> > So far the whole stack was thrown away when any error occurred before
> > the maximum stack depth was unwound. This is actually a very common
> > scenario though. The stacks that got unwound so far are still
> > interesting. This removes a large chunk of differences when comparing
> > perf script output for libunwind and libdw perf unwinding.
> > 
> > If not, then this could explain the issue you are seeing.
> 
> Hi Milian, can I take this as an Acked-by or Tested-by?

I have no access to any PowerPC hardware. In principle the code looks fine, 
but that's all I can say here.

Cheers

-- 
Milian Wolff | milian.wo...@kdab.com | Senior Software Engineer
KDAB (Deutschland) GmbH KG, a KDAB Group company
Tel: +49-30-521325470
KDAB - The Qt Experts


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-21 Thread Mark Wielaard
On Tue, Jun 20, 2017 at 10:06:35PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Thu, Jun 15, 2017 at 01:16:32PM +0200, Mark Wielaard escreveu:
> > On Thu, 2017-06-15 at 10:46 +0200, Milian Wolff wrote:
> > > Just a quick question: Have you guys applied my recent patch:
> > > 
> > > commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
> > > Author: Milian Wolff 
> > > Date:   Thu Jun 1 23:00:21 2017 +0200
> > > 
> > > perf report: Include partial stacks unwound with libdw
> > > 
> > > So far the whole stack was thrown away when any error occurred before
> > > the maximum stack depth was unwound. This is actually a very common
> > > scenario though. The stacks that got unwound so far are still
> > > interesting. This removes a large chunk of differences when comparing
> > > perf script output for libunwind and libdw perf unwinding.
> > > 
> > > If not, then this could explain the issue you are seeing.
> > 
> > Thanks! No, I didn't have that patch (*) yet. It makes a huge
> > difference. With that, Paolo's patch and the elfutils libdw powerpc64
> > fallback unwinder patch, it looks like I get user stack traces for
> > everything now on ppc64le.
> 
> Can I take that as a Tested-by: you?

Sure.
Tested-by: Mark Wielaard 

Thanks,

Mark


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-21 Thread Arnaldo Carvalho de Melo
Em Thu, Jun 15, 2017 at 10:46:16AM +0200, Milian Wolff escreveu:
> On Tuesday, June 13, 2017 5:55:09 PM CEST Ravi Bangoria wrote:
> Just a quick question: Have you guys applied my recent patch:
 
> commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
> Author: Milian Wolff 
> Date:   Thu Jun 1 23:00:21 2017 +0200
 
> perf report: Include partial stacks unwound with libdw
 
> So far the whole stack was thrown away when any error occurred before
> the maximum stack depth was unwound. This is actually a very common
> scenario though. The stacks that got unwound so far are still
> interesting. This removes a large chunk of differences when comparing
> perf script output for libunwind and libdw perf unwinding.
> 
> If not, then this could explain the issue you are seeing.

Hi Milian, can I take this as an Acked-by or Tested-by?

- Arnaldo


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-21 Thread Arnaldo Carvalho de Melo
Em Thu, Jun 15, 2017 at 01:16:32PM +0200, Mark Wielaard escreveu:
> On Thu, 2017-06-15 at 10:46 +0200, Milian Wolff wrote:
> > Just a quick question: Have you guys applied my recent patch:
> > 
> > commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
> > Author: Milian Wolff 
> > Date:   Thu Jun 1 23:00:21 2017 +0200
> > 
> > perf report: Include partial stacks unwound with libdw
> > 
> > So far the whole stack was thrown away when any error occurred before
> > the maximum stack depth was unwound. This is actually a very common
> > scenario though. The stacks that got unwound so far are still
> > interesting. This removes a large chunk of differences when comparing
> > perf script output for libunwind and libdw perf unwinding.
> > 
> > If not, then this could explain the issue you are seeing.
> 
> Thanks! No, I didn't have that patch (*) yet. It makes a huge
> difference. With that, Paolo's patch and the elfutils libdw powerpc64
> fallback unwinder patch, it looks like I get user stack traces for
> everything now on ppc64le.

Can I take that as a Tested-by: you?

- Arnaldo
 
> Cheers,
> 
> Mark
> 
> (*) It's just this one-liner, but what a difference it makes:
> 
> --- a/tools/perf/util/unwind-libdw.c
> +++ b/tools/perf/util/unwind-libdw.c
> @@ -224,7 +224,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
>  
> err = dwfl_getthread_frames(ui->dwfl, thread->tid, frame_callback, ui);
>  
> -   if (err && !ui->max_stack)
> +   if (err && ui->max_stack != max_stack)
> err = 0;
>  
> /*


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-15 Thread Ravi Bangoria
Works like a charm with Milian's patch.

Acked-by: Ravi Bangoria 

Note:
I still see very minor differences between libunwind and libdw. Also, the
second-last function gets repeated twice in every callchain, but that can be
fixed later on.
Otherwise all looks good!

Thanks,
-Ravi

On Thursday 15 June 2017 04:46 PM, Mark Wielaard wrote:
> On Thu, 2017-06-15 at 10:46 +0200, Milian Wolff wrote:
>> Just a quick question: Have you guys applied my recent patch:
>>
>> commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
>> Author: Milian Wolff 
>> Date:   Thu Jun 1 23:00:21 2017 +0200
>>
>> perf report: Include partial stacks unwound with libdw
>> 
>> So far the whole stack was thrown away when any error occurred before
>> the maximum stack depth was unwound. This is actually a very common
>> scenario though. The stacks that got unwound so far are still
>> interesting. This removes a large chunk of differences when comparing
>> perf script output for libunwind and libdw perf unwinding.
>>
>> If not, then this could explain the issue you are seeing.
> Thanks! No, I didn't have that patch (*) yet. It makes a huge
> difference. With that, Paolo's patch and the elfutils libdw powerpc64
> fallback unwinder patch, it looks like I get user stack traces for
> everything now on ppc64le.
>
> Cheers,
>
> Mark
>
> (*) It's just this one-liner, but what a difference it makes:
>
> --- a/tools/perf/util/unwind-libdw.c
> +++ b/tools/perf/util/unwind-libdw.c
> @@ -224,7 +224,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
>  
> err = dwfl_getthread_frames(ui->dwfl, thread->tid, frame_callback, ui);
>  
> -   if (err && !ui->max_stack)
> +   if (err && ui->max_stack != max_stack)
> err = 0;
>  
> /*
>



Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-15 Thread Mark Wielaard
On Thu, 2017-06-15 at 10:46 +0200, Milian Wolff wrote:
> Just a quick question: Have you guys applied my recent patch:
> 
> commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
> Author: Milian Wolff 
> Date:   Thu Jun 1 23:00:21 2017 +0200
> 
> perf report: Include partial stacks unwound with libdw
> 
> So far the whole stack was thrown away when any error occurred before
> the maximum stack depth was unwound. This is actually a very common
> scenario though. The stacks that got unwound so far are still
> interesting. This removes a large chunk of differences when comparing
> perf script output for libunwind and libdw perf unwinding.
> 
> If not, then this could explain the issue you are seeing.

Thanks! No, I didn't have that patch (*) yet. It makes a huge
difference. With that, Paolo's patch and the elfutils libdw powerpc64
fallback unwinder patch, it looks like I get user stack traces for
everything now on ppc64le.

Cheers,

Mark

(*) It's just this one-liner, but what a difference it makes:

--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -224,7 +224,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
 
err = dwfl_getthread_frames(ui->dwfl, thread->tid, frame_callback, ui);
 
-   if (err && !ui->max_stack)
+   if (err && ui->max_stack != max_stack)
err = 0;
 
/*



Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-15 Thread Milian Wolff
On Tuesday, June 13, 2017 5:55:09 PM CEST Ravi Bangoria wrote:
> Hi Mark,
> 
> On Tuesday 13 June 2017 05:14 PM, Mark Wielaard wrote:
> > I see the same on very short runs. But when doing a slightly longer run,
> > even just using ls -lahR, which does some more work, then I do see user
> > backtraces. They are still missing for some of the early samples though.
> > It is as if there is a stack/memory address mismatch when the probe is
> > "too early" in ld.so.
> > 
> > Could you do a test run on some program that does some more work to see
> > if you never get any user stack traces, or if you only not get them for
> > some specific probes?
> 
> Thanks for checking. I tried a proper workload this time, but I still
> don't see any userspace callchain getting unwound.
> 
>   $ ./perf record --call-graph=dwarf -- zip -q -r temp.zip .
>   [ perf record: Woken up 2891 times to write data ]
>   [ perf record: Captured and wrote 723.290 MB perf.data (87934 samples) ]
> 
> 
> With libdw:
> 
>  $ LD_LIBRARY_PATH=/home/ravi/elfutils-git/usr/local/lib:\
> /home/ravi/elfutils-git/usr/local/lib/elfutils/:$LD_LIBRARY_PATH\
> ./perf script
> 
>   zip 16699  6857.354633:  37371 cycles:u:
>ecedc xmon_core (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>8c4fc __hash_page_64K (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>83450 hash_preload (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>7cc34 update_mmu_cache (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   330064 alloc_set_pte (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   330efc do_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   334580 __handle_mm_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   335040 handle_mm_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>7bf94 do_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>7bec4 do_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>7be78 do_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>1a4f8 handle_page_fault (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
> 
>   zip 16699  6857.354663: 300677 cycles:u:
> 
>   zip 16699  6857.354895: 584131 cycles:u:
> 
>   zip 16699  6857.355312: 589687 cycles:u:
> 
>   zip 16699  6857.355606: 560142 cycles:u:

Just a quick question: Have you guys applied my recent patch:

commit 5ea0416f51cc93436bbe497c62ab49fd9cb245b6
Author: Milian Wolff 
Date:   Thu Jun 1 23:00:21 2017 +0200

perf report: Include partial stacks unwound with libdw

So far the whole stack was thrown away when any error occurred before
the maximum stack depth was unwound. This is actually a very common
scenario though. The stacks that got unwound so far are still
interesting. This removes a large chunk of differences when comparing
perf script output for libunwind and libdw perf unwinding.

If not, then this could explain the issue you are seeing.

Cheers

-- 
Milian Wolff | milian.wo...@kdab.com | Software Engineer
KDAB (Deutschland) GmbH KG, a KDAB Group company
Tel: +49-30-521325470
KDAB - The Qt Experts



Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-13 Thread Ravi Bangoria
Hi Mark,

On Tuesday 13 June 2017 05:14 PM, Mark Wielaard wrote:
> I see the same on very short runs. But when doing a slightly longer run,
> even just using ls -lahR, which does some more work, then I do see user
> backtraces. They are still missing for some of the early samples though.
> It is as if there is a stack/memory address mismatch when the probe is
> "too early" in ld.so.
>
> Could you do a test run on some program that does some more work to see
> if you never get any user stack traces, or if you only not get them for
> some specific probes?

Thanks for checking. I tried a proper workload this time, but I still
don't see any userspace callchain getting unwound.

  $ ./perf record --call-graph=dwarf -- zip -q -r temp.zip .
  [ perf record: Woken up 2891 times to write data ]
  [ perf record: Captured and wrote 723.290 MB perf.data (87934 samples) ]


With libdw:

 $ LD_LIBRARY_PATH=/home/ravi/elfutils-git/usr/local/lib:\
/home/ravi/elfutils-git/usr/local/lib/elfutils/:$LD_LIBRARY_PATH\
./perf script

  zip 16699  6857.354633:  37371 cycles:u:
   ecedc xmon_core 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   8c4fc __hash_page_64K 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   83450 hash_preload 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   7cc34 update_mmu_cache 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  330064 alloc_set_pte 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  330efc do_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  334580 __handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  335040 handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   7bf94 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   7bec4 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   7be78 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   1a4f8 handle_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)

  zip 16699  6857.354663: 300677 cycles:u:

  zip 16699  6857.354895: 584131 cycles:u:

  zip 16699  6857.355312: 589687 cycles:u:

  zip 16699  6857.355606: 560142 cycles:u:


With libunwind:

$ ./perf script

  zip 16699  6857.354633:  37371 cycles:u:
 ecedc xmon_core 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
 8c4fc __hash_page_64K 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
 83450 hash_preload 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
 7cc34 update_mmu_cache 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
330064 alloc_set_pte 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
330efc do_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
334580 __handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
335040 handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
 7bf94 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
 7bec4 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
 7be78 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
 1a4f8 handle_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  1920 _start (/usr/lib64/ld-2.17.so)

  zip 16699  6857.354663: 300677 cycles:u:
  fa38 _dl_new_object (/usr/lib64/ld-2.17.so)
  3073 dl_main (/usr/lib64/ld-2.17.so)
 2045b _dl_sysdep_start (/usr/lib64/ld-2.17.so)
  1c7f _dl_start_final (/usr/lib64/ld-2.17.so)
  5ce7 _dl_start (/usr/lib64/ld-2.17.so)
  1937 _start (/usr/lib64/ld-2.17.so)

  zip 16699  6857.354895: 584131 cycles:u:
 103d0 _dl_relocate_object (/usr/lib64/ld-2.17.so)

  zip 16699  6857.355312: 589687 cycles:u:
  df68 do_lookup_x (/usr/lib64/ld-2.17.so)
  e8d7 _dl_lookup_symbol_x (/usr/lib64/ld-2.17.so)
 14bb3 _dl_fixup (/usr/lib64/ld-2.17.so)
 1ef37 _dl_runtime_resolve (/usr/lib64/ld-2.17.so)
 20bf7 copy_args (/usr/bin/zip)
  286f main (/usr/bin/zip)
 2497f generic_start_main.isra.0 (/usr/lib64/libc-2.17.so)
 24b73 __libc_start_main (/usr/lib64/libc-2.17.so)

  zip 16699  6857.355606: 560142 cycles:u:
 84764 _IO_getc 

Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-13 Thread Mark Wielaard
Hi Ravi,

On Mon, 2017-06-12 at 17:28 +0530, Ravi Bangoria wrote:
> So, I tested this patch along with Mark's patch [1] on elfutils, and it
> looks like it's not working. Here is what I did:
> 
> After applying Mark's patch on upstream elfutils:
> 
>   $ aclocal
>   $ autoheader
>   $ autoconf
>   $ automake --add-missing
>   $ ./configure
>   $ make
>   $ make install DESTDIR=/home/ravi/elfutils-git
> 
> After applying your patch on upstream perf:
> 
>   $ make
>   $ ./perf record --call-graph=dwarf ls
>   $ LD_LIBRARY_PATH=/home/ravi/elfutils-git/usr/local/lib:\
> /home/ravi/elfutils-git/usr/local/lib/elfutils/:$LD_LIBRARY_PATH \
> ./perf script
> 
> ls 44159  1800.878468: 191408 cycles:u:
> 
> ls 44159  1800.878673: 419356 cycles:u:
>8a97c hpte_need_flush 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>835f4 flush_hash_page 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>8acec hpte_need_flush 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   3468f4 ptep_clear_flush 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   328b10 wp_page_copy 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   32ebe4 do_wp_page 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   33434c __handle_mm_fault 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>   335040 handle_mm_fault 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>7bf94 do_page_fault 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
>1a4f8 handle_page_fault 
> (/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
> 
> ls 44159  1800.878961: 430876 cycles:u:
> 
> ls 44159  1800.879195: 423785 cycles:u:
> 
> ls 44159  1800.879360: 427359 cycles:u:
> 
> Here I don't see the userspace callchain getting unwound. Please let me
> know if I'm doing anything wrong.

I see the same on very short runs. But when doing a slightly longer run,
even just using ls -lahR, which does some more work, then I do see user
backtraces. They are still missing for some of the early samples though.
It is as if there is a stack/memory address mismatch when the probe is
"too early" in ld.so.

Could you do a test run on some program that does some more work to see
if you never get any user stack traces, or if you only not get them for
some specific probes?

Thanks,

Mark


Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-12 Thread Ravi Bangoria
Hi Paolo,

Thanks for the patch and really sorry for being late. I was quite busy
with a few other things.

On Friday 09 June 2017 06:00 PM, Paolo Bonzini wrote:
>
> On 01/06/2017 12:24, Paolo Bonzini wrote:
>> Porting PPC to libdw only needs an architecture-specific hook to move
>> the register state from perf to libdw.
>>
>> The ARM and x86 architectures already use libdw, and it is useful to
>> have as much common code for the unwinder as possible.  Mark Wielaard
>> has contributed a frame-based unwinder to libdw, so that unwinding works
>> even for binaries that do not have CFI information.  In addition,
>> libunwind is always preferred to libdw by the build machinery so this
>> cannot introduce regressions on machines that have both libunwind and
>> libdw installed.
>>
>> Cc: a...@kernel.org
>> Cc: Naveen N. Rao 
>> Cc: Ravi Bangoria 
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Signed-off-by: Paolo Bonzini 
>> ---
>>  v1->v2: fix for 4.11->4.12 changes
> Ravi, Naveen, any reviews?

So, I tested this patch along with Mark's patch [1] on elfutils, and it
looks like it's not working. Here is what I did:

After applying Mark's patch on upstream elfutils:

  $ aclocal
  $ autoheader
  $ autoconf
  $ automake --add-missing
  $ ./configure
  $ make
  $ make install DESTDIR=/home/ravi/elfutils-git

After applying your patch on upstream perf:

  $ make
  $ ./perf record --call-graph=dwarf ls
  $ LD_LIBRARY_PATH=/home/ravi/elfutils-git/usr/local/lib:\
/home/ravi/elfutils-git/usr/local/lib/elfutils/:$LD_LIBRARY_PATH \
./perf script

ls 44159  1800.878468: 191408 cycles:u:

ls 44159  1800.878673: 419356 cycles:u:
   8a97c hpte_need_flush 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   835f4 flush_hash_page 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   8acec hpte_need_flush 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  3468f4 ptep_clear_flush 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  328b10 wp_page_copy 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  32ebe4 do_wp_page 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  33434c __handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  335040 handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   7bf94 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   1a4f8 handle_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)

ls 44159  1800.878961: 430876 cycles:u:

ls 44159  1800.879195: 423785 cycles:u:

ls 44159  1800.879360: 427359 cycles:u:

Here I don't see the userspace callchain getting unwound. Please let me know
if I'm doing anything wrong. The same perf.data with libunwind:

ls 44159  1800.878468: 191408 cycles:u:
   20380 _dl_sysdep_start (/usr/lib64/ld-2.17.so)
1c7f _dl_start_final (/usr/lib64/ld-2.17.so)
5ce7 _dl_start (/usr/lib64/ld-2.17.so)
1937 _start (/usr/lib64/ld-2.17.so)

ls 44159  1800.878673: 419356 cycles:u:
   8a97c hpte_need_flush 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   835f4 flush_hash_page 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   8acec hpte_need_flush 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  3468f4 ptep_clear_flush 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  328b10 wp_page_copy 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  32ebe4 do_wp_page 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  33434c __handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
  335040 handle_mm_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   7bf94 do_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
   1a4f8 handle_page_fault 
(/usr/lib/debug/lib/modules/4.11.0-3.el7.ppc64le/vmlinux)
7cd4 _dl_map_object_from_fd (/usr/lib64/ld-2.17.so)
b24b _dl_map_object (/usr/lib64/ld-2.17.so)
   12b3b openaux (/usr/lib64/ld-2.17.so)
   159bf _dl_catch_error (/usr/lib64/ld-2.17.so)
   13323 _dl_map_object_deps (/usr/lib64/ld-2.17.so)
3feb dl_main (/usr/lib64/ld-2.17.so)
   2045b _dl_sysdep_start (/usr/lib64/ld-2.17.so)
1c7f _dl_start_final (/usr/lib64/ld-2.17.so)
  

Re: [PATCH v2] perf: libdw support for powerpc [ping]

2017-06-09 Thread Paolo Bonzini


On 01/06/2017 12:24, Paolo Bonzini wrote:
> Porting PPC to libdw only needs an architecture-specific hook to move
> the register state from perf to libdw.
> 
> The ARM and x86 architectures already use libdw, and it is useful to
> have as much common code for the unwinder as possible.  Mark Wielaard
> has contributed a frame-based unwinder to libdw, so that unwinding works
> even for binaries that do not have CFI information.  In addition,
> libunwind is always preferred to libdw by the build machinery so this
> cannot introduce regressions on machines that have both libunwind and
> libdw installed.
> 
> Cc: a...@kernel.org
> Cc: Naveen N. Rao 
> Cc: Ravi Bangoria 
> Cc: linuxppc-dev@lists.ozlabs.org
> Signed-off-by: Paolo Bonzini 
> ---
>   v1->v2: fix for 4.11->4.12 changes

Ravi, Naveen, any reviews?

Thanks,

Paolo

> 
>  tools/perf/Makefile.config  |  2 +-
>  tools/perf/arch/powerpc/util/Build  |  2 +
>  tools/perf/arch/powerpc/util/unwind-libdw.c | 73 +
>  3 files changed, 76 insertions(+), 1 deletion(-)
>  create mode 100644 tools/perf/arch/powerpc/util/unwind-libdw.c
> 
> diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
> index 8354d04b392f..e7b04a729417 100644
> --- a/tools/perf/Makefile.config
> +++ b/tools/perf/Makefile.config
> @@ -61,7 +61,7 @@ endif
>  # Disable it on all other architectures in case libdw unwind
>  # support is detected in system. Add supported architectures
>  # to the check.
> -ifneq ($(ARCH),$(filter $(ARCH),x86 arm))
> +ifneq ($(ARCH),$(filter $(ARCH),x86 arm powerpc))
>NO_LIBDW_DWARF_UNWIND := 1
>  endif
>  
> diff --git a/tools/perf/arch/powerpc/util/Build b/tools/perf/arch/powerpc/util/Build
> index 90ad64b231cd..2e6595310420 100644
> --- a/tools/perf/arch/powerpc/util/Build
> +++ b/tools/perf/arch/powerpc/util/Build
> @@ -5,4 +5,6 @@ libperf-y += perf_regs.o
>  
>  libperf-$(CONFIG_DWARF) += dwarf-regs.o
>  libperf-$(CONFIG_DWARF) += skip-callchain-idx.o
> +
>  libperf-$(CONFIG_LIBUNWIND) += unwind-libunwind.o
> +libperf-$(CONFIG_LIBDW_DWARF_UNWIND) += unwind-libdw.o
> diff --git a/tools/perf/arch/powerpc/util/unwind-libdw.c b/tools/perf/arch/powerpc/util/unwind-libdw.c
> new file mode 100644
> index 000000000000..3a24b3c43273
> --- /dev/null
> +++ b/tools/perf/arch/powerpc/util/unwind-libdw.c
> @@ -0,0 +1,73 @@
> +#include 
> +#include "../../util/unwind-libdw.h"
> +#include "../../util/perf_regs.h"
> +#include "../../util/event.h"
> +
> +/* See backends/ppc_initreg.c and backends/ppc_regs.c in elfutils.  */
> +static const int special_regs[3][2] = {
> + { 65, PERF_REG_POWERPC_LINK },
> + { 101, PERF_REG_POWERPC_XER },
> + { 109, PERF_REG_POWERPC_CTR },
> +};
> +
> +bool libdw__arch_set_initial_registers(Dwfl_Thread *thread, void *arg)
> +{
> + struct unwind_info *ui = arg;
> + struct regs_dump *user_regs = &ui->sample->user_regs;
> + Dwarf_Word dwarf_regs[32], dwarf_nip;
> + size_t i;
> +
> +#define REG(r) ({\
> + Dwarf_Word val = 0; \
> + perf_reg_value(&val, user_regs, PERF_REG_POWERPC_##r);  \
> + val;\
> +})
> +
> + dwarf_regs[0]  = REG(R0);
> + dwarf_regs[1]  = REG(R1);
> + dwarf_regs[2]  = REG(R2);
> + dwarf_regs[3]  = REG(R3);
> + dwarf_regs[4]  = REG(R4);
> + dwarf_regs[5]  = REG(R5);
> + dwarf_regs[6]  = REG(R6);
> + dwarf_regs[7]  = REG(R7);
> + dwarf_regs[8]  = REG(R8);
> + dwarf_regs[9]  = REG(R9);
> + dwarf_regs[10] = REG(R10);
> + dwarf_regs[11] = REG(R11);
> + dwarf_regs[12] = REG(R12);
> + dwarf_regs[13] = REG(R13);
> + dwarf_regs[14] = REG(R14);
> + dwarf_regs[15] = REG(R15);
> + dwarf_regs[16] = REG(R16);
> + dwarf_regs[17] = REG(R17);
> + dwarf_regs[18] = REG(R18);
> + dwarf_regs[19] = REG(R19);
> + dwarf_regs[20] = REG(R20);
> + dwarf_regs[21] = REG(R21);
> + dwarf_regs[22] = REG(R22);
> + dwarf_regs[23] = REG(R23);
> + dwarf_regs[24] = REG(R24);
> + dwarf_regs[25] = REG(R25);
> + dwarf_regs[26] = REG(R26);
> + dwarf_regs[27] = REG(R27);
> + dwarf_regs[28] = REG(R28);
> + dwarf_regs[29] = REG(R29);
> + dwarf_regs[30] = REG(R30);
> + dwarf_regs[31] = REG(R31);
> + if (!dwfl_thread_state_registers(thread, 0, 32, dwarf_regs))
> + return false;
> +
> + dwarf_nip = REG(NIP);
> + dwfl_thread_state_register_pc(thread, dwarf_nip);
> + for (i = 0; i < ARRAY_SIZE(special_regs); i++) {
> + Dwarf_Word val = 0;
> + perf_reg_value(&val, user_regs, special_regs[i][1]);
> + if (!dwfl_thread_state_registers(thread,
> +  special_regs[i][0], 1,
> +  &val))
> +