on x86, and to help prevent bugs caused
by this surprising difference (and simplify callers, which mostly want
to know if the number of uncopied bytes is nonzero).
Signed-off-by: Jed Davis j...@mozilla.com
---
arch/x86/kernel/cpu/perf_event.c | 8 ++--
arch/x86/kernel/cpu/perf_event_intel_ds.c | 6 ++
of uncopied bytes is nonzero).
Signed-off-by: Jed Davis j...@mozilla.com
---
arch/x86/include/asm/perf_event.h | 9 +
arch/x86/kernel/cpu/perf_event.c | 8 ++--
arch/x86/kernel/cpu/perf_event_intel_ds.c | 6 ++
arch/x86/kernel/cpu/perf_event_intel_lbr.c | 4 +---
arch/x86/lib
adds a second wrapper to
re-reverse it for perf; the next patch in this series will clean it up.
Signed-off-by: Jed Davis j...@mozilla.com
---
arch/x86/include/asm/perf_event.h | 9 -
kernel/events/internal.h | 11 ++-
2 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch
On Mon, Jul 22, 2013 at 07:52:39PM +0100, Dave Martin wrote:
> On Sun, Jul 21, 2013 at 10:37:53PM +0100, Will Deacon wrote:
> > Ok, I think I'm with you now. I also think that a better solution would be
> > to try and limit the r7/fp confusion to one place, perhaps behind something
> > like:
> >
adds a second wrapper to
re-reverse it for perf; the next patch in this series will clean it up.
Signed-off-by: Jed Davis j...@mozilla.com
---
arch/x86/include/asm/perf_event.h | 9 -
kernel/events/internal.h | 11 ++-
2 files changed, 18 insertions(+), 2 deletions
On Mon, Jul 15, 2013 at 02:54:20PM +0100, Will Deacon wrote:
> On Sat, Jul 13, 2013 at 04:18:20AM +0100, Jed Davis wrote:
[...]
> > Effects of this are probably limited to failure of EHABI unwinding when
> > starting from a function that uses r7 to restore its stack pointer, but
> > the possibility for further breakage (which would be invisible on
> > non-Thumb kernels) is worrying.
On Mon, Jul 15, 2013 at 02:53:42PM +0100, Will Deacon wrote:
> On Sat, Jul 13, 2013 at 04:17:14AM +0100, Jed Davis wrote:
[...]
> > +#ifdef CONFIG_THUMB2_KERNEL
> > +#define perf_arch_fetch_caller_regs(regs, ip) \
> > + do
Effects of this are probably limited to failure of EHABI unwinding when
starting from a function that uses r7 to restore its stack pointer, but
the possibility for further breakage (which would be invisible on
non-Thumb kernels) is worrying.
With this change, it is hoped, r7 is consistently referred to as "r7",
and "fp" always means r11; this costs a few extra ifdefs, but it should
help prevent future issues.
at which perf_arch_fetch_caller_regs
is expanded, instead of that function activation's call site, because we
need SP and PC to be consistent for EHABI unwinding; hopefully nothing
will be inconvenienced by the extra stack frame.
Signed-off-by: Jed Davis j...@mozilla.com
---
arch/arm/include/asm/perf_event.h | 43
On Tue, Jun 18, 2013 at 02:13:19PM +0100, Will Deacon wrote:
> On Fri, Jun 14, 2013 at 12:21:11AM +0100, Jed Davis wrote:
> > With this change, we no longer lose the innermost entry in the user-mode
> > part of the call chain. See also the x86 port, which includes the ip.
With this change, we no longer lose the innermost entry in the user-mode
part of the call chain. See also the x86 port, which includes the ip,
and the corresponding change in arch/arm.
Signed-off-by: Jed Davis j...@mozilla.com
---
arch/arm64/kernel/perf_event.c |1 +
1 file changed, 1 insertion(+)
diff
in the kernel when the sample was taken.
Signed-off-by: Jed Davis j...@mozilla.com
---
arch/arm/kernel/perf_event.c |1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 8c3094d..d9f5cd4 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm