On Thu, Jun 02, 2016 at 09:55:15AM +, He Kuang wrote:
SNIP
> @@ -680,3 +680,52 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
>
> 	return get_entries(&ui, cb, arg, max_stack);
> }
> +
> +static struct unwind_libunwind_ops
> +_unwind_libunwind_ops = {
> + .prepare_access
Currently, libunwind operations are fixed, and they are chosen
according to the host architecture. This leads to a problem: if a
thread runs as x86_32 on an x86_64 machine, perf will use the x86_64
libunwind methods to parse its callchain and get a wrong result.
This patch changes the fixed