trace_kprobe: Could not probe notrace function update_sd_lb_stats.constprop.0
Signed-off-by: Naveen N Rao
---
v4: Use printk format specifier %ps with probe address to lookup the
symbol, as suggested by Masami.
kernel/trace/trace_kprobe.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
On Thu, Dec 14, 2023 at 08:02:10AM +0900, Masami Hiramatsu wrote:
> On Wed, 13 Dec 2023 20:09:14 +0530
> Naveen N Rao wrote:
>
> > Trying to probe update_sd_lb_stats() using perf results in the below
> > message in the kernel log:
> > trace_kprobe: Could not probe notrace function
> > update_sd_lb_stats.constprop.0
trace_kprobe: Could not probe notrace function update_sd_lb_stats.constprop.0
Signed-off-by: Naveen N Rao
---
v3: Remove tk parameter from within_notrace_func() as suggested by
Masami
kernel/trace/trace_kprobe.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/kernel/trace/trace_kprobe.c
Christophe Leroy wrote:
For that, create a 32-bit version of patch_imm64_load_insns()
and create a patch_imm_load_insns() which calls
patch_imm32_load_insns() on PPC32 and patch_imm64_load_insns()
on PPC64.
Adapt optprobes_head.S for PPC32. Use PPC_LL/PPC_STL macros instead
of raw ld/std, opt
er maps a
> > relevant page via mmap(), instruction is replaced via mmap() code
> > path. But because Uprobe is invalid, entire mmap() operation can
> > not be stopped. In this case just print an error and continue.
> >
> > Signed-off-by: Ravi Bangoria
> > Acked-by: N
cy on
CONFIG_PPC64, and I don't think we need to confirm if we're running on an
ISA V3.1 for the below check.
With that:
Acked-by: Naveen N. Rao
> +
> + if (ppc_inst_prefixed(auprobe->insn) && (addr & 0x3F) == 0x3C) {
> + pr_info_ratelimited("Cannot register a uprobe on 64 byte
> unaligned prefixed instruction\n");
> + return -EINVAL;
> + }
> +
- Naveen
On 2021/02/04 06:38PM, Naveen N. Rao wrote:
> On 2021/02/04 04:17PM, Ravi Bangoria wrote:
> > Don't allow Uprobe on 2nd word of a prefixed instruction. As per
> > ISA 3.1, prefixed instruction should not cross 64-byte boundary.
> > So don't allow Uprobe on such prefixe
On 2021/02/04 04:19PM, Ravi Bangoria wrote:
>
>
> On 2/4/21 4:17 PM, Ravi Bangoria wrote:
> > Don't allow Uprobe on 2nd word of a prefixed instruction. As per
> > ISA 3.1, prefixed instruction should not cross 64-byte boundary.
> > So don't allow Uprobe on such prefixed instruction as well.
> >
On 2021/02/04 04:17PM, Ravi Bangoria wrote:
> Don't allow Uprobe on 2nd word of a prefixed instruction. As per
> ISA 3.1, prefixed instruction should not cross 64-byte boundary.
> So don't allow Uprobe on such prefixed instruction as well.
>
> There are two ways probed instruction is changed in
> 1 file changed, 8 insertions(+), 5 deletions(-)
Suggested-by: Naveen N. Rao
Acked-by: Naveen N. Rao
Thanks,
Naveen
Masami Hiramatsu wrote:
On Tue, 5 Jan 2021 19:01:56 +0900
Masami Hiramatsu wrote:
On Tue, 5 Jan 2021 12:27:30 +0530
"Naveen N. Rao" wrote:
> Not all symbols are blacklisted on powerpc. Disable multiple_kprobes
> test until that is sorted, so that rest of ftrace and kprobe
Masami Hiramatsu wrote:
On Tue, 5 Jan 2021 12:27:30 +0530
"Naveen N. Rao" wrote:
Not all symbols are blacklisted on powerpc. Disable multiple_kprobes
test until that is sorted, so that rest of ftrace and kprobe selftests
can be run.
This looks good to me, but could you t
Not all symbols are blacklisted on powerpc. Disable multiple_kprobes
test until that is sorted, so that rest of ftrace and kprobe selftests
can be run.
Signed-off-by: Naveen N. Rao
---
.../testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc | 2 +-
1 file changed, 1 insertion(+), 1
Arnaldo Carvalho de Melo wrote:
Em Fri, Dec 18, 2020 at 08:08:56PM +0530, Naveen N. Rao escreveu:
Hi Arnaldo,
Arnaldo Carvalho de Melo wrote:
> Em Fri, Dec 18, 2020 at 08:26:59AM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Fri, Dec 18, 2020 at 03:59:23PM +0800, Tiezhu Yang
erpc/g :-)
notified when the copy drifts, so that we can see if it still continues
working and we can get new syscalls to be supported in things like 'perf
trace'?
Yes, this looks good to me:
Reviewed-by: Naveen N. Rao
FWIW, I had posted a similar patch back in April, but glad to have this
go in ;)
Steven Rostedt wrote:
On Thu, 26 Nov 2020 23:38:38 +0530
"Naveen N. Rao" wrote:
On powerpc, kprobe-direct.tc triggered FTRACE_WARN_ON() in
ftrace_get_addr_new() followed by the below message:
Bad trampoline accounting at: 4222522f (wake_up_process+0xc/0x20)
(f001
.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 75 +-
1 file changed, 33 insertions(+), 42 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c
b/arch/powerpc/kernel/trace/ftrace.c
index 14b39f7797d455..7ddb6e4b527c39 100644
--- a/arch
, and it isn't evident that the graph caller has too
deep a call stack to cause issues.
Signed-off-by: Naveen N. Rao
---
.../powerpc/kernel/trace/ftrace_64_mprofile.S | 28 +--
1 file changed, 7 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/trace
Add a simple powerpc trampoline to demonstrate use of ftrace direct on
powerpc.
Signed-off-by: Naveen N. Rao
---
samples/Kconfig | 2 +-
samples/ftrace/ftrace-direct-modify.c | 58 +++
samples/ftrace/ftrace-direct-too.c | 48
We currently assume that ftrace locations are patched to go to either
ftrace_caller or ftrace_regs_caller. Drop this assumption in preparation
for supporting ftrace direct calls.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 107 +++--
1 file
.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ftrace.h | 14 ++
arch/powerpc/kernel/trace/ftrace.c | 140 +-
.../powerpc/kernel/trace/ftrace_64_mprofile.S | 40 -
4 files changed, 182
, this is not required. Drop it.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index bbe871b47ade58..c5602e9b07faa3
Use FTRACE_REGS_ADDR instead of keying off
CONFIG_DYNAMIC_FTRACE_WITH_REGS to identify the proper ftrace trampoline
address to use.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc
ftrace_plt_tramps[] was intended to speed up skipping plt branches, but
the code wasn't completed. It is also not significantly better than
reading and decoding the instruction. Remove the same.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 8
1 file changed, 8
Add register_get_kernel_argument() for a rudimentary way to access
kernel function arguments.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/ptrace.h | 31 +++
2 files changed, 32 insertions(+)
diff --git a/arch
We need to remove hash entry if register_ftrace_function() fails.
Consolidate the cleanup to be done after register_ftrace_function() at
the end.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/trace/ftrace.c b
DYNAMIC_FTRACE_WITH_DIRECT_CALLS should depend on
DYNAMIC_FTRACE_WITH_REGS since we need ftrace_regs_caller().
Fixes: 763e34e74bb7d5c ("ftrace: Add register_ftrace_direct()")
Signed-off-by: Naveen N. Rao
---
kernel/trace/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Architectures may want to do some validation (such as to ensure that the
trampoline code is reachable from the provided ftrace location) before
accepting ftrace direct registration. Add helpers for the same.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 2 ++
kernel/trace/ftrace.c
ture all trampolines.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 ---
kernel/trace/ftrace.c | 84 ++
2 files changed, 4 insertions(+), 85 deletions(-)
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 1bd3a0356ae478..4
ect module is going away. This
happens because we are checking if any ftrace_ops has the
FTRACE_FL_TRAMP flag set _before_ updating the filter hash.
The fix for this is to look for any _other_ ftrace_ops that also needs
FTRACE_FL_TRAMP.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c |
upstream issue since I am able to reproduce the lockup without these
patches. I will be looking into that to see if I can figure out the
cause of those lockups.
In the meantime, I would appreciate a review of these patches.
- Naveen
Naveen N. Rao (14):
ftrace: Fix updating FTRACE_FL_TRAMP
Hi Steven,
Steven Rostedt wrote:
From: "Steven Rostedt (VMware)"
In preparation to have arguments of a function passed to callbacks attached
to functions as default, change the default callback prototype to receive a
struct ftrace_regs as the fourth parameter instead of a pt_regs.
For
[+ Maddy]
Leo Yan wrote:
If user specifies event type "ldst", PowerPC's perf_mem_events__name()
will wrongly return the store event name "cpu/mem-stores/".
This patch changes to return NULL for the event "ldst" on PowerPC.
Signed-off-by: Leo Yan
---
tools/perf/arch/powerpc/util/mem-events.c
-by: Naveen N. Rao
- Naveen
probes.rst
Adjust the entry to the new file location.
Signed-off-by: Lukas Bulwahn
---
Naveen, Masami-san, please ack.
Jonathan, please pick this minor non-urgent patch into docs-next.
applies cleanly on next-20200724
Ah, sorry. Hadn't noticed this change from Mauro.
Acked-by: Naveen N. Ra
Kprobes references are currently listed right after kretprobes example,
and appears to be part of the same section. Move this out to a separate
appendix for clarity.
Signed-off-by: Naveen N. Rao
---
Documentation/staging/kprobes.rst | 14 +-
1 file changed, 9 insertions(+), 5
Kprobes constitutes a dynamic tracing technology and as such can be
moved alongside documentation of other tracing technologies.
Signed-off-by: Naveen N. Rao
---
Documentation/staging/index.rst | 1 -
Documentation/trace/index.rst | 1 +
Documentation/{staging
This series updates some of the URLs in the kprobes document and moves
the same under trace/ directory.
- Naveen
Naveen N. Rao (3):
docs: staging/kprobes.rst: Update some of the references
docs: staging/kprobes.rst: Move references to a separate appendix
docs: Move kprobes.rst from
Some of the kprobes references are not valid anymore. Update the URLs to
point to their changed locations, where appropriate. Drop two URLs which
do not exist anymore.
Reported-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
Documentation/staging/kprobes.rst | 6 ++
1 file changed, 2
Masami Hiramatsu wrote:
On Tue, 14 Jul 2020 00:02:49 +0200
"Alexander A. Klimov" wrote:
Am 13.07.20 um 16:20 schrieb Masami Hiramatsu:
> Hi Naveen and Alexander,
>
> On Fri, 10 Jul 2020 19:14:47 +0530
> "Naveen N. Rao" wrote:
>
>> Masami Hirama
Masami Hiramatsu wrote:
On Tue, 7 Jul 2020 21:49:59 +0200
"Alexander A. Klimov" wrote:
Rationale:
Reduces attack surface on kernel devs opening the links for MITM
as HTTPS traffic is much harder to manipulate.
Deterministic algorithm:
For each file:
If not .svg:
For each line:
: Sandipan Das
Leo, Naveen, can you comment on this?
Shoot -- this is a bad miss, I should have caught it. FWIW:
Reviewed-by: Naveen N. Rao
Thanks,
Naveen
Masami Hiramatsu wrote:
On Fri, 1 May 2020 17:37:56 +0200
Mauro Carvalho Chehab wrote:
There are several files that I was unable to find a proper place
for them, and 3 ones that are still in plain old text format.
Let's place those stuff behind the carpet, as we'd like to keep the
root
Michael Ellerman wrote:
"Naveen N. Rao" writes:
Michael Ellerman wrote:
"Gautham R. Shenoy" writes:
From: "Gautham R. Shenoy"
Currently on Pseries Linux Guests, the offlined CPU can be put to one
of the following two states:
- Long term processor c
Michael Ellerman wrote:
"Gautham R. Shenoy" writes:
From: "Gautham R. Shenoy"
Currently on Pseries Linux Guests, the offlined CPU can be put to one
of the following two states:
- Long term processor cede (also called extended cede)
- Returned to the Hypervisor via RTAS "stop-self"
t on instruction that can't be emulated.
"
"Breakpoint at 0x%lx will be disabled.\n",
addr);
Otherwise:
Acked-by: Naveen N. Rao
- Naveen
+ goto disable;
+ }
/* Do not emulate user-space instructions, instead single-step them */
if (user_mode(regs)) {
@@
] return_to_handler+0x0/0x40
(vfs_read+0xb8/0x1b0)
[c000d1e33dd0] [c006ab58] return_to_handler+0x0/0x40
(ksys_read+0x7c/0x140)
[c000d1e33e20] [c006ab58] return_to_handler+0x0/0x40
(system_call+0x5c/0x68)
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/process.c
This associates entries in the ftrace_ret_stack with corresponding stack
frames, enabling more robust stack unwinding. Also update the only user
of ftrace_graph_ret_addr() to pass the stack pointer.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/include/asm/asm-prototypes.h | 3 ++-
arch
Enable HAVE_FUNCTION_GRAPH_RET_ADDR_PTR for more robust stack unwinding
when function graph tracer is in use. Convert powerpc show_stack() to
use ftrace_graph_ret_addr() for better stack unwinding.
- Naveen
Naveen N. Rao (3):
ftrace: Look up the address of return_to_handler() using helpers
This ensures that we use the right address on architectures that use
function descriptors.
Signed-off-by: Naveen N. Rao
---
kernel/trace/fgraph.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 8dfd5021b933
Ravi Bangoria wrote:
On Powerpc64, watchpoint match range is double-word granular. On
a watchpoint hit, DAR is set to the first byte of overlap between
actual access and watched range. And thus it's quite possible that
DAR does not point inside user specified range. Ex, say user creates
a
g.h | 5 +
kernel/kprobes.c | 3 ++-
2 files changed, 7 insertions(+), 1 deletion(-)
Acked-by: Naveen N. Rao
- Naveen
Steven Rostedt wrote:
On Thu, 4 Jul 2019 20:04:41 +0530
"Naveen N. Rao" wrote:
kernel/trace/ftrace.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 7b037295a1f1..0791eafb693d 100644
--- a/kernel/trace/ftrace.c
+++ b/ke
The following commit has been merged into the perf/core branch of tip:
Commit-ID: 0a56e0603fa13af08816d673f6f71b68cda2fb2e
Gitweb:
https://git.kernel.org/tip/0a56e0603fa13af08816d673f6f71b68cda2fb2e
Author:Naveen N. Rao
AuthorDate:Tue, 27 Aug 2019 12:44:58 +05:30
cccd0 ("y2038: rename old time and utime syscalls")
commit 00bf25d693e7 ("y2038: use time32 syscall names on 32-bit")
commit 8dabe7245bbc ("y2038: syscalls: rename y2038 compat syscalls")
commit 0d6040d46817 ("arch: add split IPC system calls where needed"
Jiong Wang wrote:
Naveen N. Rao writes:
Since BPF constant blinding is performed after the verifier pass, the
ALU32 instructions inserted for doubleword immediate loads don't have a
corresponding zext instruction. This is causing a kernel oops on powerpc
and can be reproduced by running
this by emitting BPF_ZEXT during constant blinding if
prog->aux->verifier_zext is set.
Fixes: a4b1d3c1ddf6cb ("bpf: verifier: insert zero extension according to
analysis result")
Reported-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
Changes since RFC:
- Removed
Jiong Wang wrote:
Michael Ellerman writes:
"Naveen N. Rao" writes:
Since BPF constant blinding is performed after the verifier pass, there
are certain ALU32 instructions inserted which don't have a corresponding
zext instruction inserted after. This is causing a kernel oops
Naveen N. Rao wrote:
Since BPF constant blinding is performed after the verifier pass, there
are certain ALU32 instructions inserted which don't have a corresponding
zext instruction inserted after. This is causing a kernel oops on
powerpc and can be reproduced by running 'test_cgroup_storage
Nick Desaulniers wrote:
Reported-by: Sedat Dilek
Suggested-by: Josh Poimboeuf
Signed-off-by: Nick Desaulniers
---
Acked-by: Naveen N. Rao
- Naveen
Jisheng Zhang wrote:
This patch implements KPROBES_ON_FTRACE for arm64.
~ # mount -t debugfs debugfs /sys/kernel/debug/
~ # cd /sys/kernel/debug/
/sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
before the patch:
/sys/kernel/debug # cat kprobes/list
ff801009ff7c k
Jisheng Zhang wrote:
For KPROBES_ON_FTRACE case, we need to adjust the kprobe's addr
correspondingly.
Signed-off-by: Jisheng Zhang
---
kernel/kprobes.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 9873fc627d61..f8400753a8a9 100644
---
.
Fix this by emitting BPF_ZEXT during constant blinding if
prog->aux->verifier_zext is set.
Fixes: a4b1d3c1ddf6cb ("bpf: verifier: insert zero extension according to
analysis result")
Reported-by: Michael Ellerman
Signed-off-by: Naveen N. Rao
---
This approach (the location whe
Naveen N. Rao wrote:
Two patches addressing bugs in ftrace function probe handling. The first
patch addresses a NULL pointer dereference reported by LTP tests, while
the second one is a trivial patch to address a missing check for return
value, found by code inspection.
Steven,
Can you
In register_ftrace_function_probe(), we are not checking the return
value of alloc_and_copy_ftrace_hash(). The subsequent call to
ftrace_match_records() may end up dereferencing the same. Add a check to
ensure this doesn't happen.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 5
a NULL
filter_hash.
Fix this by just checking for a NULL filter_hash in t_probe_next(). If
the filter_hash is NULL, then this probe is just being added and we can
simply return from here.
Signed-off-by: Naveen N. Rao
---
kernel/trace/ftrace.c | 4
1 file changed, 4 insertions(+)
diff --
Two patches addressing bugs in ftrace function probe handling. The first
patch addresses a NULL pointer dereference reported by LTP tests, while
the second one is a trivial patch to address a missing check for return
value, found by code inspection.
- Naveen
Naveen N. Rao (2):
ftrace: Fix
Steven Rostedt wrote:
On Thu, 27 Jun 2019 20:58:20 +0530
"Naveen N. Rao" wrote:
> But interesting, I don't see a synchronize_rcu_tasks() call
> there.
We felt we don't need it in this case. We patch the branch to ftrace
with a nop first. Other cpus should see that first.
Hi Steven,
Thanks for the review!
Steven Rostedt wrote:
On Thu, 27 Jun 2019 16:53:52 +0530
"Naveen N. Rao" wrote:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable function tracing and profiling. So far, with dynamic ftrace, we
used to only patch out
Naveen N. Rao wrote:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable function tracing and profiling. So far, with dynamic ftrace, we
used to only patch out the branch to _mcount(). However, mflr is
executed by the branch unit that can only execute one per cycle
Steven Rostedt wrote:
On Thu, 27 Jun 2019 16:53:50 +0530
"Naveen N. Rao" wrote:
In commit a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be
schedulable), the generic ftrace_replace_code() function was modified to
accept a flags argument in place of a single 'enable
Naveen N. Rao wrote:
In commit a0572f687fb3c ("ftrace: Allow ftrace_replace_code() to be
schedulable), the generic ftrace_replace_code() function was modified to
accept a flags argument in place of a single 'enable' flag. However, the
x86 version of this function was not updated. Fix the
up ftrace filter IP. This won't work if the address points to any
instruction apart from the one that has a branch to _mcount(). To
resolve this, have [dis]arm_kprobe_ftrace() use ftrace_function() to
identify the filter IP.
Acked-by: Masami Hiramatsu
Signed-off-by: Naveen N. Rao
---
kernel
the
'mflr r0'. Earlier -mprofile-kernel ABI included a 'std r0,stack'
instruction between the 'mflr r0' and the 'bl _mcount'. This is harmless
as the 'std r0,stack' instruction is inconsequential and is not relied
upon.
Suggested-by: Steven Rostedt (VMware)
Signed-off-by: Naveen N. Rao
---
arch
7fb3c ("ftrace: Allow ftrace_replace_code() to be schedulable")
Signed-off-by: Naveen N. Rao
---
arch/x86/kernel/ftrace.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 0927bb158ffc..f34005a17051 100644
--- a/
Naveen N. Rao (7):
ftrace: Expose flags used for ftrace_replace_code()
x86/ftrace: Fix use of flags in ftrace_replace_code()
ftrace: Expose __ftrace_replace_code()
powerpc/ftrace: Additionally nop out the preceding mflr with
-mprofile-kernel
ftrace: Update ftrace_location() for powerpc
Since ftrace_replace_code() is a __weak function and can be overridden,
we need to expose the flags that can be set. So, move the flags enum to
the header file.
Reviewed-by: Steven Rostedt (VMware)
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/trace/ftrace.c | 5
to the pre and post probe handlers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes-ftrace.c | 32 +++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/kprobes-ftrace.c
b/arch/powerpc/kernel/kprobes-ftrace.c
index 972cb28174b2
(). We override
ftrace_replace_code() with a powerpc64 variant for this purpose.
Suggested-by: Nicholas Piggin
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 258 ++---
1 file changed, 236 insertions(+), 22 deletions
While overriding ftrace_replace_code(), we still want to reuse the
existing __ftrace_replace_code() function. Rename the function and
make it available for other kernel code.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 8
2 files changed, 5
Fixes: c7d64b560ce80 ("powerpc/ftrace: Enable C Version of recordmcount")
Signed-off-by: Naveen N. Rao
---
scripts/recordmcount.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h
index 13c5e6c8829c..47fca2c69a73 100644
--- a/script
Masami Hiramatsu wrote:
On Tue, 18 Jun 2019 20:17:06 +0530
"Naveen N. Rao" wrote:
With KPROBES_ON_FTRACE, kprobe is allowed to be inserted on instructions
that branch to _mcount (referred to as ftrace location). With
-mprofile-kernel, we now include the preceding 'mflr r0' as
Nicholas Piggin wrote:
Naveen N. Rao's on June 19, 2019 7:53 pm:
Nicholas Piggin wrote:
Michael Ellerman's on June 19, 2019 3:14 pm:
I'm also not convinced the ordering between the two patches is
guaranteed by the ISA, given that there's possibly no isync on the other
CPU.
Will they go
Nicholas Piggin wrote:
Michael Ellerman's on June 19, 2019 3:14 pm:
Hi Naveen,
Sorry I meant to reply to this earlier .. :/
No problem. Thanks for the questions.
"Naveen N. Rao" writes:
With -mprofile-kernel, gcc emits 'mflr r0', followed by 'bl _mcount' to
enable functi
Steven Rostedt wrote:
On Tue, 18 Jun 2019 23:53:11 +0530
"Naveen N. Rao" wrote:
Naveen N. Rao wrote:
> Steven Rostedt wrote:
>> On Tue, 18 Jun 2019 20:17:04 +0530
>> "Naveen N. Rao" wrote:
>>
>>> @@ -1551,7 +1551,7 @@ unsigned long f
Naveen N. Rao wrote:
Steven Rostedt wrote:
On Tue, 18 Jun 2019 20:17:04 +0530
"Naveen N. Rao" wrote:
@@ -1551,7 +1551,7 @@ unsigned long ftrace_location_range(unsigned long start,
unsigned long end)
key.flags = end;/* overload flags, as it is unsigned long */
Steven Rostedt wrote:
On Tue, 18 Jun 2019 20:17:04 +0530
"Naveen N. Rao" wrote:
@@ -1551,7 +1551,7 @@ unsigned long ftrace_location_range(unsigned long start,
unsigned long end)
key.flags = end;/* overload flags, as it is unsigned long */
for (pg = ftrace_pages
a custom version of ftrace_cmp_recs() which
looks at the instruction preceding the branch to _mcount() and marks
that instruction as belonging to ftrace if it is a 'nop' or 'mflr r0'.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 31 ++
include
up ftrace filter IP. This won't work if the address points to any
instruction apart from the one that has a branch to _mcount(). To
resolve this, have [dis]arm_kprobe_ftrace() use ftrace_function() to
identify the filter IP.
Signed-off-by: Naveen N. Rao
---
kernel/kprobes.c | 10 +-
1 file
in two instructions being
emitted: 'mflr r0' and 'bl _mcount'. So far, we were only nop'ing out
the branch to _mcount(). This series implements an approach to also nop
out the preceding mflr.
- Naveen
Naveen N. Rao (7):
ftrace: Expose flags used for ftrace_replace_code()
x86/ftrace: Fix
ftrace_replace_code() with a powerpc64 variant for this
purpose.
Suggested-by: Nicholas Piggin
Reviewed-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 241 ++---
1 file changed, 219 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc
to the pre and post probe handlers.
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/kprobes-ftrace.c | 30
1 file changed, 30 insertions(+)
diff --git a/arch/powerpc/kernel/kprobes-ftrace.c
b/arch/powerpc/kernel/kprobes-ftrace.c
index 972cb28174b2..6a0bd3c16cb6
While overriding ftrace_replace_code(), we still want to reuse the
existing __ftrace_replace_code() function. Rename the function and
make it available for other kernel code.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 1 +
kernel/trace/ftrace.c | 8
2 files changed, 5
Since ftrace_replace_code() is a __weak function and can be overridden,
we need to expose the flags that can be set. So, move the flags enum to
the header file.
Reviewed-by: Steven Rostedt (VMware)
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/trace/ftrace.c | 5
7fb3c ("ftrace: Allow ftrace_replace_code() to be schedulable")
Signed-off-by: Naveen N. Rao
---
arch/x86/kernel/ftrace.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 0927bb158ffc..f34005a17051 100644
--- a/
"powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit
server processors")
Reviewed-by: Naveen N. Rao
- Naveen
Hi Steven,
Steven Rostedt wrote:
On Mon, 20 May 2019 09:13:20 -0400
Steven Rostedt wrote:
> I haven't yet tested this patch on x86, but this looked wrong so sending
> this as a RFC.
This code has been through a bit of updates, and I need to go through
and clean it up. I'll have to take
ction, use synchronize_rcu_tasks() to ensure all existing
threads make progress, and then patch in the branch to _mcount(). We
override ftrace_replace_code() with a powerpc64 variant for this
purpose.
Signed-off-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
Nice! Thanks for doing a real patch. You need
e progress, and then patch in the branch to _mcount(). We
override ftrace_replace_code() with a powerpc64 variant for this
purpose.
Signed-off-by: Nicholas Piggin
Signed-off-by: Naveen N. Rao
---
arch/powerpc/kernel/trace/ftrace.c | 188 +
1 file changed, 166 insertions(+), 22
Since ftrace_replace_code() is a __weak function and can be overridden,
we need to expose the flags that can be set. So, move the flags enum to
the header file.
Signed-off-by: Naveen N. Rao
---
include/linux/ftrace.h | 5 +
kernel/trace/ftrace.c | 5 -
2 files changed, 5 insertions