Excerpts from Michael Ellerman's message of April 20, 2017 12:03:
"Naveen N. Rao" <naveen.n....@linux.vnet.ibm.com> writes:
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 71286dfd76a0..59159337a097 100644
@@ -112,6 +113,14 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
+bool arch_within_kprobe_blacklist(unsigned long addr)
+{
+	return (addr >= (unsigned long)__kprobes_text_start &&
+		addr < (unsigned long)__kprobes_text_end) ||
+	       (addr >= (unsigned long)_stext &&
+		addr < (unsigned long)__head_end);
+}
This isn't quite right when the kernel is relocated.
_stext and __head_end will be updated to point to the relocated copy of
the kernel, eg:
# grep -e _stext /proc/kallsyms
c000000002000000 T _stext
So you probably also want something like:
	if (_stext != PAGE_OFFSET &&
	    addr >= PAGE_OFFSET &&
	    addr < (PAGE_OFFSET + (__head_end - _stext)))
Ah, so that's for ensuring we don't allow probing at the real exception
vectors, which get copied down from _stext. In that case, we are covered
by the test for kernel_text_address() in check_kprobe_address_safe(). We
only allow probing from _stext to _etext.
But that's entirely untested :)
You can test the relocatable case by enabling CONFIG_RELOCATABLE_TEST.
Done, thanks. This is working as expected (without the need for the
changes above). I am not allowed to probe at the real exception vectors
(at PAGE_OFFSET), nor anywhere between _stext and __head_end.