Re: [patch 16/16] Add hardware breakpoint support for i386
This patch provides a simple interface for kernel-space watchpoints using
the processor's debug registers. Using the kwatch interface, users can
monitor kernel global variables and dump debugging information such as the
kernel stack, global variables, and processor registers.

int register_kwatch(unsigned long addr, u8 length, u8 type,
		    kwatch_handler_t handler)

- length of the breakpoint can be 1, 2 or 4 bytes.
- type can be read, write, or execute:
	0	Break on instruction execution only.
	1	Break on data writes only.
	3	Break on data reads or writes but not instruction fetches.
- return value is the debug register number allocated/used for setting up
  this watchpoint.

Sample code:

This sample code sets a watchpoint on pid_max and registers a callback
function that is invoked if any writes happen to pid_max.

struct kwatch kp;

void kwatch_handler(struct kwatch *p, struct pt_regs *regs)
{
	...
}

Register the watchpoint probe from init_module:

static int debug_regs_num;

int init_module(void)
{
	..
	debug_regs_num = register_kwatch(kallsyms_lookup_name("pid_max"),
					 4, 1, kwatch_handler);
	..
}

Test this by changing the value of pid_max via /proc/sys/kernel/pid_max:

	echo 1000 > /proc/sys/kernel/pid_max

You will see the callback function being called.

Unregister the watchpoint from cleanup_module:

void cleanup_module(void)
{
	..
	unregister_kwatch(debug_regs_num);
	..
}

Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]>
---

 linux-2.6.13-prasanna/arch/i386/Kconfig.debug   |    8 +
 linux-2.6.13-prasanna/arch/i386/kernel/Makefile |    1 
 linux-2.6.13-prasanna/arch/i386/kernel/kwatch.c |  189
 linux-2.6.13-prasanna/include/asm-i386/kwatch.h |   60 +++
 4 files changed, 258 insertions(+)

diff -puN arch/i386/Kconfig.debug~kernel-watchpoint arch/i386/Kconfig.debug
--- linux-2.6.13/arch/i386/Kconfig.debug~kernel-watchpoint	2005-08-30 11:44:25.921069488 +0530
+++ linux-2.6.13-prasanna/arch/i386/Kconfig.debug	2005-08-30 11:44:25.932067816 +0530
@@ -32,6 +32,14 @@ config KPROBES
 	  for kernel debugging, non-intrusive instrumentation and testing.
 	  If in doubt, say "N".
 
+config KWATCH
+	bool "Kwatch points"
+	depends on DEBUG_KERNEL
+	select DEBUGREG
+	help
+	  This enables kernel-space watchpoints using processor's debug
+	  registers. If in doubt, say "N".
+
 config DEBUGREG
 	bool "Global Debug Registers"
 	depends on DEBUG_KERNEL

diff -puN /dev/null arch/i386/kernel/kwatch.c
--- /dev/null	2005-08-30 16:04:24.253093808 +0530
+++ linux-2.6.13-prasanna/arch/i386/kernel/kwatch.c	2005-08-30 11:44:25.933067664 +0530
@@ -0,0 +1,189 @@
+/*
+ * Kernel Watchpoint interface.
+ * arch/i386/kernel/kwatch.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ *
+ * 2002-Oct	Created by Vamsi Krishna S <[EMAIL PROTECTED]> for
+ *		Kernel Watchpoint implementation.
+ * 2004-Oct	Updated by Prasanna S Panchamukhi <[EMAIL PROTECTED]>
+ *		to make use of notifiers.
+ */
+#include <linux/config.h>
+#include <linux/kprobes.h>
+#include <linux/ptrace.h>
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <asm/kwatch.h>
+#include <asm/kdebug.h>
+#include <asm/debugreg.h>
+#include <asm/bitops.h>
+
+static struct kwatch kwatch_list[DR_MAX];
+static spinlock_t kwatch_lock = SPIN_LOCK_UNLOCKED;
+static unsigned long kwatch_in_progress; /* currently being handled */
+
+struct dr_info {
+	int debugreg;
+	unsigned long addr;
+	int type;
+};
+
+static inline void write_smp_dr(void *info)
+{
+	struct dr_info *dr = (struct dr_info *)info;
+
+	if (cpu_has_de && dr->type == DR_TYPE_IO)
+		set_in_cr4(X86_CR4_DE);
+	write_dr(dr->debugreg, dr->addr);
+}
+
Re: [patch 16/16] Add hardware breakpoint support for i386
Hi,

> > This adds hardware breakpoint support for i386. This is not as well
> > tested as software breakpoints, but in some minimal testing appears
> > to be functional.
>
> This really would need some coordination with user space using
> them. Otherwise it'll be quite unreliable because any user program
> can break it.
>
> Long ago (in 2.4 time frame) there used to be an IBM patch floating
> around to reserve them globally and user space to use specific ones. I
> guess something like that would be needed again.

Yes, to add hardware breakpoint support for i386, there are two patches:

1. Provides a hardware debug register allocation mechanism.
2. Provides a lightweight interface for kernel-space watchpoint probes.

These patches have been posted & reviewed on the lkml and systemtap
mailing lists. Your comments are welcome.

Thanks
Prasanna

This patch provides a debug register allocation mechanism, useful for
debuggers like IOW, kgdb, kdb, kernel watchpoint.
---

Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]>
---

 linux-2.6.13-prasanna/arch/i386/Kconfig.debug     |    8 
 linux-2.6.13-prasanna/arch/i386/kernel/Makefile   |    1 
 linux-2.6.13-prasanna/arch/i386/kernel/debugreg.c |  281 ++
 linux-2.6.13-prasanna/arch/i386/kernel/process.c  |   31 +-
 linux-2.6.13-prasanna/arch/i386/kernel/ptrace.c   |    5 
 linux-2.6.13-prasanna/arch/i386/kernel/signal.c   |    3 
 linux-2.6.13-prasanna/arch/i386/kernel/traps.c    |    2 
 linux-2.6.13-prasanna/include/asm-i386/debugreg.h |  189 ++
 8 files changed, 511 insertions(+), 9 deletions(-)

diff -puN arch/i386/Kconfig.debug~kprobes-debug-regs arch/i386/Kconfig.debug
--- linux-2.6.13/arch/i386/Kconfig.debug~kprobes-debug-regs	2005-08-30 11:43:49.369626152 +0530
+++ linux-2.6.13-prasanna/arch/i386/Kconfig.debug	2005-08-30 11:43:49.442615056 +0530
@@ -32,6 +32,14 @@ config KPROBES
 	  for kernel debugging, non-intrusive instrumentation and testing.
 	  If in doubt, say "N".
+config DEBUGREG
+	bool "Global Debug Registers"
+	depends on DEBUG_KERNEL
+	default off
+	help
+	  Global debug register allocation mechanism is useful for debuggers
+	  IOW, Kgdb, Kdb, Kernel Watchpoint probes. If in doubt say "N"
+
 config DEBUG_STACK_USAGE
 	bool "Stack utilization instrumentation"
 	depends on DEBUG_KERNEL

diff -puN /dev/null arch/i386/kernel/debugreg.c
--- /dev/null	2005-08-30 16:04:24.253093808 +0530
+++ linux-2.6.13-prasanna/arch/i386/kernel/debugreg.c	2005-08-30 11:43:49.444614752 +0530
@@ -0,0 +1,281 @@
+/*
+ * Debug register
+ * arch/i386/kernel/debugreg.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2002, 2004
+ *
+ * 2002-Oct	Created by Vamsi Krishna S <[EMAIL PROTECTED]> and
+ *		Bharata Rao <[EMAIL PROTECTED]> to provide debug register
+ *		allocation mechanism.
+ * 2004-Oct	Updated by Prasanna S Panchamukhi <[EMAIL PROTECTED]> with
+ *		idr_allocations mechanism as suggested by Andi Kleen.
+ */
+/*
+ * This provides a debug register allocation mechanism, to be
+ * used by all debuggers, which need debug registers.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/idr.h>
+#include <asm/system.h>
+#include <asm/debugreg.h>
+
+struct debugreg dr_list[DR_MAX];
+unsigned long dr7_global_mask = 0;
+static spinlock_t dr_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_IDR(debugreg_idr);
+static DECLARE_MUTEX(debugreg_idr_mutex);
+static spinlock_t debugreg_idr_lock = SPIN_LOCK_UNLOCKED;
+
+static unsigned long dr7_global_bits[] = {
+	DR7_DR0_BITS, DR7_DR1_BITS, DR7_DR2_BITS, DR7_DR3_BITS
+};
+
+static inline void set_dr7_global_mask(int regnum)
+{
+	if (DR_IS_ADDR(regnum))
+		dr7_global_mask |= dr7_global_bits[regnum];
+}
+
+static inline void clear_dr7_global_mask(int regnum)
+{
+	if (DR_IS_ADDR(regnum))
+		dr7_global_mask &= ~dr7_global_bits[regnum];
+}
+
+/*
+ * See if specific debug register is free.
+ */
+static int specific_debugreg(unsigned int regnum)
+{
+	int r, n;
+
+	if (regnum >= DR_MAX)
+		return -EINVAL;
+
+	down(&debugreg_idr_mutex);
Re: [5/6 PATCH] Kprobes : Prevent possible race conditions ia64 changes
Hi Anil,

I have updated the patch as per your comments to move routines from
jprobes.S to the .kprobes.text section. Please let me know if you have
any issues.

Thanks
Prasanna

This patch contains the ia64 architecture specific changes to prevent
the possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]>
---

 include/asm-ia64/asmmacro.h                                  |    0 
 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/jprobes.S     |    1 
 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c     |   57 ++-
 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/traps.c       |    5 
 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/vmlinux.lds.S |    1 
 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/lib/flush.S          |    1 
 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/mm/fault.c           |    3 
 7 files changed, 41 insertions(+), 27 deletions(-)

diff -puN arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64 arch/ia64/kernel/kprobes.c
--- linux-2.6.13-rc1-mm1/arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64	2005-07-08 15:22:52.0 +0530
+++ linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c	2005-07-08 15:22:52.0 +0530
@@ -87,8 +87,10 @@ static enum instruction_type bundle_enco
  * is IP relative instruction and update the kprobe
  * inst flag accordingly
  */
-static void update_kprobe_inst_flag(uint template, uint slot, uint major_opcode,
-	unsigned long kprobe_inst, struct kprobe *p)
+static void __kprobes update_kprobe_inst_flag(uint template, uint slot,
+					      uint major_opcode,
+					      unsigned long kprobe_inst,
+					      struct kprobe *p)
 {
 	p->ainsn.inst_flag = 0;
 	p->ainsn.target_br_reg = 0;
@@ -126,8 +128,10 @@ static void update_kprobe_inst_flag(uint
  * Returns 0 if supported
  * Returns -EINVAL if unsupported
  */
-static int unsupported_inst(uint template, uint slot, uint major_opcode,
-	unsigned long kprobe_inst, struct kprobe *p)
+static int __kprobes unsupported_inst(uint template, uint slot,
+				      uint major_opcode,
+				      unsigned long kprobe_inst,
+				      struct kprobe *p)
 {
 	unsigned long addr = (unsigned long)p->addr;
 
@@ -168,8 +172,9 @@ static int unsupported_inst(uint templat
 * on which we are inserting kprobe is cmp instruction
 * with ctype as unc.
 */
-static uint is_cmp_ctype_unc_inst(uint template, uint slot, uint major_opcode,
-unsigned long kprobe_inst)
+static uint __kprobes is_cmp_ctype_unc_inst(uint template, uint slot,
+					    uint major_opcode,
+					    unsigned long kprobe_inst)
 {
 	cmp_inst_t cmp_inst;
 	uint ctype_unc = 0;
@@ -201,8 +206,10 @@ out:
 * In this function we override the bundle with
 * the break instruction at the given slot.
 */
-static void prepare_break_inst(uint template, uint slot, uint major_opcode,
-	unsigned long kprobe_inst, struct kprobe *p)
+static void __kprobes prepare_break_inst(uint template, uint slot,
+					 uint major_opcode,
+					 unsigned long kprobe_inst,
+					 struct kprobe *p)
 {
 	unsigned long break_inst = BREAK_INST;
 	bundle_t *bundle = &p->ainsn.insn.bundle;
@@ -271,7 +278,8 @@ static inline int in_ivt_functions(unsig
 		&& addr < (unsigned long)__end_ivt_text);
 }
 
-static int valid_kprobe_addr(int template, int slot, unsigned long addr)
+static int __kprobes valid_kprobe_addr(int template, int slot,
+				       unsigned long addr)
 {
 	if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) {
 		printk(KERN_WARNING "Attempting to insert unaligned kprobe "
@@ -323,7 +331,7 @@ static void kretprobe_trampoline(void)
 *	- cleanup by marking the instance as unused
 *	- long jump back to the original return address
 */
-int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 {
 	struct kretprobe_instance *ri = NULL;
 	struct hlist_head *head;
@@ -381,7 +389,8 @@ int trampoline_probe_handler(struct kpro
 	return 1;
 }
 
-void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
+void __kprobes arch_prepare_kretprobe(struct kretprobe *rp,
+				      struct pt_regs *regs)
 {
 	struct kretprobe_instance *ri;
 
@@ -399,7 +408,7 @@ void arch_prepare_kretp
 	}
 }
 
-int arch_prepare_kprobe(struct kprobe *p)
+int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
 	unsigned long addr = (unsig
Re: [3/6 PATCH] Kprobes : Prevent possible race conditions x86_64 changes
Hi Andi,

I have updated the patch as per your comments to move the int3, debug,
page_fault and general_protection routines to the .kprobes.text section.
Please let me know if you have any issues.

Thanks
Prasanna

This patch contains the x86_64 architecture specific changes to prevent
the possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]>
---

 linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/entry.S       |   12 ++-
 linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/kprobes.c     |   35 +-
 linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/traps.c       |   14 ++--
 linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/vmlinux.lds.S |    1 
 linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/mm/fault.c           |    3 
 5 files changed, 38 insertions(+), 27 deletions(-)

diff -puN arch/x86_64/kernel/kprobes.c~kprobes-exclude-functions-x86_64 arch/x86_64/kernel/kprobes.c
--- linux-2.6.13-rc1-mm1/arch/x86_64/kernel/kprobes.c~kprobes-exclude-functions-x86_64	2005-07-08 11:14:01.0 +0530
+++ linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/kprobes.c	2005-07-08 11:14:01.0 +0530
@@ -74,7 +74,7 @@ static inline int is_IF_modifier(kprobe_
 	return 0;
 }
 
-int arch_prepare_kprobe(struct kprobe *p)
+int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
 	/* insn: must be on special executable page on x86_64.
 	 */
 	up(&kprobe_mutex);
@@ -189,7 +189,7 @@ static inline s32 *is_riprel(u8 *insn)
 	return NULL;
 }
 
-void arch_copy_kprobe(struct kprobe *p)
+void __kprobes arch_copy_kprobe(struct kprobe *p)
 {
 	s32 *ripdisp;
 	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE);
@@ -215,21 +215,21 @@ void arch_copy_kprobe(struct kprobe *p)
 	p->opcode = *p->addr;
 }
 
-void arch_arm_kprobe(struct kprobe *p)
+void __kprobes arch_arm_kprobe(struct kprobe *p)
 {
 	*p->addr = BREAKPOINT_INSTRUCTION;
 	flush_icache_range((unsigned long) p->addr,
 			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
 }
 
-void arch_disarm_kprobe(struct kprobe *p)
+void __kprobes arch_disarm_kprobe(struct kprobe *p)
 {
 	*p->addr = p->opcode;
 	flush_icache_range((unsigned long) p->addr,
 			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
 }
 
-void arch_remove_kprobe(struct kprobe *p)
+void __kprobes arch_remove_kprobe(struct kprobe *p)
 {
 	up(&kprobe_mutex);
 	free_insn_slot(p->ainsn.insn);
@@ -261,7 +261,7 @@ static inline void set_current_kprobe(st
 	kprobe_saved_rflags &= ~IF_MASK;
 }
 
-static void prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
+static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
 {
 	regs->eflags |= TF_MASK;
 	regs->eflags &= ~IF_MASK;
@@ -272,7 +272,8 @@ static void prepare_singlestep(struct kp
 	regs->rip = (unsigned long)p->ainsn.insn;
 }
 
-void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
+void __kprobes arch_prepare_kretprobe(struct kretprobe *rp,
+				      struct pt_regs *regs)
 {
 	unsigned long *sara = (unsigned long *)regs->rsp;
 	struct kretprobe_instance *ri;
@@ -295,7 +296,7 @@ void arch_prepare_kretp
 * Interrupts are disabled on entry as trap3 is an interrupt gate and they
 * remain disabled thorough out this function.
 */
-int kprobe_handler(struct pt_regs *regs)
+int __kprobes kprobe_handler(struct pt_regs *regs)
 {
 	struct kprobe *p;
 	int ret = 0;
@@ -399,7 +400,7 @@ no_kprobe:
 /*
 * Called when we hit the probe point at kretprobe_trampoline
 */
-int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 {
 	struct kretprobe_instance *ri = NULL;
 	struct hlist_head *head;
@@ -478,7 +479,7 @@ int trampoline_probe_handler(struct kpro
 * that is atop the stack is the address following the copied instruction.
 * We need to make it the address following the original instruction.
 */
-static void resume_execution(struct kprobe *p, struct pt_regs *regs)
+static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs)
 {
 	unsigned long *tos = (unsigned long *)regs->rsp;
 	unsigned long next_rip = 0;
@@ -536,7 +537,7 @@ static void resume_execution(struct kpro
 * Interrupts are disabled on entry as trap1 is an interrupt gate and they
 * remain disabled thoroughout this function. And we hold kprobe lock.
 */
-int post_kprobe_handler(struct pt_regs *regs)
+int __kprobes post_kprobe_handler(struct pt_regs *regs)
 {
 	if (!kprobe_running())
 		return 0;
@@ -571,7 +572,7 @@ out:
 }
 
 /* Interrupts disabled, kprobe_lock held. */
-int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr)
 {
Re: [2/6 PATCH] Kprobes : Prevent possible race conditions i386 changes
Hi Andi,

I have updated the patch as per your comments to move the int3, debug,
page_fault and general_protection routines to the .kprobes.text section.
Please let me know if you have any issues.

Thanks
Prasanna

This patch contains the i386 architecture specific changes to prevent
the possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]>
---

 linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/entry.S       |   13 +++-
 linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/kprobes.c     |   29 +--
 linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/traps.c       |   12 ++--
 linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/vmlinux.lds.S |    1 
 linux-2.6.13-rc1-mm1-prasanna/arch/i386/mm/fault.c           |    4 +
 5 files changed, 34 insertions(+), 25 deletions(-)

diff -puN arch/i386/kernel/kprobes.c~kprobes-exclude-functions-i386 arch/i386/kernel/kprobes.c
--- linux-2.6.13-rc1-mm1/arch/i386/kernel/kprobes.c~kprobes-exclude-functions-i386	2005-07-08 12:09:51.0 +0530
+++ linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/kprobes.c	2005-07-08 12:09:51.0 +0530
@@ -62,32 +62,32 @@ static inline int is_IF_modifier(kprobe_
 	return 0;
 }
 
-int arch_prepare_kprobe(struct kprobe *p)
+int __kprobes arch_prepare_kprobe(struct kprobe *p)
 {
 	return 0;
 }
 
-void arch_copy_kprobe(struct kprobe *p)
+void __kprobes arch_copy_kprobe(struct kprobe *p)
 {
 	memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
 	p->opcode = *p->addr;
 }
 
-void arch_arm_kprobe(struct kprobe *p)
+void __kprobes arch_arm_kprobe(struct kprobe *p)
 {
 	*p->addr = BREAKPOINT_INSTRUCTION;
 	flush_icache_range((unsigned long) p->addr,
 			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
 }
 
-void arch_disarm_kprobe(struct kprobe *p)
+void __kprobes arch_disarm_kprobe(struct kprobe *p)
 {
 	*p->addr = p->opcode;
 	flush_icache_range((unsigned long) p->addr,
 			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
 }
 
-void arch_remove_kprobe(struct kprobe *p)
+void __kprobes arch_remove_kprobe(struct kprobe *p)
 {
 }
 
@@ -127,7 +127,8 @@ static inline void prepare_singlestep(st
 	regs->eip = (unsigned long)&p->ainsn.insn;
 }
 
-void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
+void __kprobes arch_prepare_kretprobe(struct kretprobe *rp,
+				      struct pt_regs *regs)
 {
 	unsigned long *sara = (unsigned long *)&regs->esp;
 	struct kretprobe_instance *ri;
@@ -150,7 +151,7 @@ void arch_prepare_kretp
 * Interrupts are disabled on entry as trap3 is an interrupt gate and they
 * remain disabled thorough out this function.
 */
-static int kprobe_handler(struct pt_regs *regs)
+static int __kprobes kprobe_handler(struct pt_regs *regs)
 {
 	struct kprobe *p;
 	int ret = 0;
@@ -259,7 +260,7 @@ no_kprobe:
 /*
 * Called when we hit the probe point at kretprobe_trampoline
 */
-int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
+int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
 {
 	struct kretprobe_instance *ri = NULL;
 	struct hlist_head *head;
@@ -338,7 +339,7 @@ int trampoline_probe_handler(struct kpro
 * that is atop the stack is the address following the copied instruction.
 * We need to make it the address following the original instruction.
 */
-static void resume_execution(struct kprobe *p, struct pt_regs *regs)
+static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs)
 {
 	unsigned long *tos = (unsigned long *)&regs->esp;
 	unsigned long next_eip = 0;
@@ -444,8 +445,8 @@ static inline int kprobe_fault_handler(s
 /*
 * Wrapper routine to for handling exceptions.
 */
-int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val,
-	void *data)
+int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
+				       unsigned long val, void *data)
 {
 	struct die_args *args = (struct die_args *)data;
 	switch (val) {
@@ -473,7 +474,7 @@ int kprobe_exceptions_notify(struct noti
 	return NOTIFY_DONE;
 }
 
-int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
+int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
 {
 	struct jprobe *jp = container_of(p, struct jprobe, kp);
 	unsigned long addr;
@@ -495,7 +496,7 @@ int setjmp_pre_handler(struct kprobe *p,
 	return 1;
 }
 
-void jprobe_return(void)
+void __kprobes jprobe_return(void)
 {
 	preempt_enable_no_resched();
 	asm volatile ("       xchgl   %%ebx,%%esp     \n"::"b"
		      (jprobe_saved_esp):"memory");
 }
 
-int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
+int
Re: [1/6 PATCH] Kprobes : Prevent possible race conditions generic changes
Hi Andrew,

I have modified the patches as per your and Andi's comments. I have also
modified entry.S for the i386 and x86_64 architectures, to move a few
exception handlers (page fault, general protection, int3, debug) to the
.kprobes.text section. The ia64 specific patch also covers more routines
from the jprobes.S file, as per Anil's comment. Please let me know if you
have any issues.

Thanks
Prasanna

There are possible race conditions if probes are placed on routines
within the kprobes files and on routines used by kprobes. For example, if
you put a probe on get_kprobe(), the system can hang while inserting a
probe on any routine such as do_fork(): while inserting the probe on
do_fork(), register_kprobe() grabs the kprobes spinlock and executes
get_kprobe(); to handle the probe on get_kprobe(), kprobe_handler() gets
executed and tries to grab the kprobes spinlock, and spins forever.

This patch avoids such possible race conditions by preventing probes on
routines within the kprobes file and on routines used by kprobes. I have
modified the patches as per Andi Kleen's suggestion to move the kprobes
routines and other routines used by kprobes to a separate section,
.kprobes.text. Also moved the page fault, exception, and general
protection fault handlers to the .kprobes.text section.

These patches have been tested on the i386, x86_64 and ppc64
architectures, and compiled on the ia64 and sparc64 architectures.
Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/include/asm-generic/sections.h|1 linux-2.6.13-rc1-mm1-prasanna/include/asm-generic/vmlinux.lds.h |5 linux-2.6.13-rc1-mm1-prasanna/include/linux/kprobes.h |3 linux-2.6.13-rc1-mm1-prasanna/include/linux/linkage.h |7 linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c | 72 +- 5 files changed, 59 insertions(+), 29 deletions(-) diff -puN kernel/kprobes.c~kprobes-exclude-functions-generic kernel/kprobes.c --- linux-2.6.13-rc1-mm1/kernel/kprobes.c~kprobes-exclude-functions-generic 2005-07-08 14:05:14.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c 2005-07-08 14:05:14.0 +0530 @@ -37,6 +37,7 @@ #include <linux/init.h> #include <linux/module.h> #include <linux/moduleloader.h> +#include <asm-generic/sections.h> #include <asm/cacheflush.h> #include <asm/errno.h> #include <asm/kdebug.h> @@ -72,7 +73,7 @@ static struct hlist_head kprobe_insn_pag * get_insn_slot() - Find a slot on an executable page for an instruction. * We allocate an executable page if there's no room on existing ones. */ -kprobe_opcode_t *get_insn_slot(void) +kprobe_opcode_t __kprobes *get_insn_slot(void) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -117,7 +118,7 @@ kprobe_opcode_t *get_insn_slot(void) return kip->insns; } -void free_insn_slot(kprobe_opcode_t *slot) +void __kprobes free_insn_slot(kprobe_opcode_t *slot) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -152,20 +153,20 @@ void free_insn_slot(kprobe_opcode_t *slo } /* Locks kprobe: irqs must be disabled */ -void lock_kprobes(void) +void __kprobes lock_kprobes(void) { spin_lock(&kprobe_lock); kprobe_cpu = smp_processor_id(); } -void unlock_kprobes(void) +void __kprobes unlock_kprobes(void) { kprobe_cpu = NR_CPUS; spin_unlock(&kprobe_lock); } /* You have to be holding the kprobe_lock */ -struct kprobe *get_kprobe(void *addr) +struct kprobe __kprobes *get_kprobe(void *addr) { struct hlist_head *head; struct hlist_node *node; @@ -183,7 +184,7 @@ struct kprobe *get_kprobe(void *addr) * Aggregate handlers for multiple kprobes support - these handlers * take care of
invoking the individual kprobe handlers on p->list */ -static int aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) +static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp; @@ -198,8 +199,8 @@ static int aggr_pre_handler(struct kprob return 0; } -static void aggr_post_handler(struct kprobe *p, struct pt_regs *regs, - unsigned long flags) +static void __kprobes aggr_post_handler(struct kprobe *p, struct pt_regs *regs, + unsigned long flags) { struct kprobe *kp; @@ -213,8 +214,8 @@ static void aggr_post_handler(struct kpr return; } -static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, - int trapnr) +static int __kprobes aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, + int trapnr) { /* * if we faulted "during" the execution of a user specified @@ -227,7 +228,7 @@ static int aggr_fault_handler(struct kpr return 0; } -static int aggr_break_handler(struct kprobe *p, struct pt_regs *regs) +static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp = curr_kprobe; if (curr_kprob
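The generic patch above is only half of the mechanism; the other half lives in the linker script headers. A minimal sketch of the idea, with illustrative names (the diffstat's linkage.h change presumably defines __kprobes along the lines of `__attribute__((__section__(".kprobes.text")))`, and the exact macro and symbol names in vmlinux.lds.h may differ):

```c
/* Sketch: collect every function annotated __kprobes into one
 * contiguous region, bounded by symbols the C code can compare
 * addresses against when deciding whether a probe is allowed. */
#define KPROBES_TEXT                    \
        . = ALIGN(8);                   \
        __kprobes_text_start = .;       \
        *(.kprobes.text)                \
        __kprobes_text_end = .;
```

With the two boundary symbols exported via asm-generic/sections.h, register_kprobe() can refuse any address that falls inside the region.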
Re: [2/6 PATCH] Kprobes : Prevent possible race conditions i386 changes
Hi Andi, I have updated the patch as per your comments to move the int3, debug, page_fault and general_protection routines to the .kprobes.text section. Please let me know if you have any issues. Thanks Prasanna This patch contains the i386 architecture-specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED] --- linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/entry.S | 13 +++- linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/kprobes.c | 29 +-- linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/traps.c | 12 ++-- linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/i386/mm/fault.c |4 + 5 files changed, 34 insertions(+), 25 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-exclude-functions-i386 arch/i386/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/i386/kernel/kprobes.c~kprobes-exclude-functions-i386 2005-07-08 12:09:51.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/kprobes.c 2005-07-08 12:09:51.0 +0530 @@ -62,32 +62,32 @@ static inline int is_IF_modifier(kprobe_ return 0; } -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { return 0; } -void arch_copy_kprobe(struct kprobe *p) +void __kprobes arch_copy_kprobe(struct kprobe *p) { memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); p->opcode = *p->addr; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { *p->addr = BREAKPOINT_INSTRUCTION; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_disarm_kprobe(struct kprobe *p) +void __kprobes arch_disarm_kprobe(struct kprobe *p) { *p->addr = p->opcode; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_remove_kprobe(struct kprobe *p) +void __kprobes arch_remove_kprobe(struct kprobe *p) { } @@ -127,7 +127,8 @@ static inline void prepare_singlestep(st regs->eip =
(unsigned long)p->ainsn.insn; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { unsigned long *sara = (unsigned long *)regs->esp; struct kretprobe_instance *ri; @@ -150,7 +151,7 @@ void arch_prepare_kretprobe(struct kretp * Interrupts are disabled on entry as trap3 is an interrupt gate and they * remain disabled thorough out this function. */ -static int kprobe_handler(struct pt_regs *regs) +static int __kprobes kprobe_handler(struct pt_regs *regs) { struct kprobe *p; int ret = 0; @@ -259,7 +260,7 @@ no_kprobe: /* * Called when we hit the probe point at kretprobe_trampoline */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -338,7 +339,7 @@ int trampoline_probe_handler(struct kpro * that is atop the stack is the address following the copied instruction. * We need to make it the address following the original instruction. */ -static void resume_execution(struct kprobe *p, struct pt_regs *regs) +static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) { unsigned long *tos = (unsigned long *)regs->esp; unsigned long next_eip = 0; @@ -444,8 +445,8 @@ static inline int kprobe_fault_handler(s /* * Wrapper routine to for handling exceptions. 
*/ -int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, -void *data) +int __kprobes kprobe_exceptions_notify(struct notifier_block *self, + unsigned long val, void *data) { struct die_args *args = (struct die_args *)data; switch (val) { @@ -473,7 +474,7 @@ int kprobe_exceptions_notify(struct noti return NOTIFY_DONE; } -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct jprobe *jp = container_of(p, struct jprobe, kp); unsigned long addr; @@ -495,7 +496,7 @@ int setjmp_pre_handler(struct kprobe *p, return 1; } -void jprobe_return(void) +void __kprobes jprobe_return(void) { preempt_enable_no_resched(); asm volatile (" xchgl %%ebx,%%esp \n" @@ -506,7 +507,7 @@ void jprobe_return(void) (jprobe_saved_esp):"memory"); } -int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs
Re: [3/6 PATCH] Kprobes : Prevent possible race conditions x86_64 changes
Hi Andi, I have updated the patch as per your comments to move the int3, debug, page_fault and general_protection routines to the .kprobes.text section. Please let me know if you have any issues. Thanks Prasanna This patch contains the x86_64 architecture-specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED] --- linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/entry.S | 12 ++- linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/kprobes.c | 35 +- linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/traps.c | 14 ++-- linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/mm/fault.c |3 5 files changed, 38 insertions(+), 27 deletions(-) diff -puN arch/x86_64/kernel/kprobes.c~kprobes-exclude-functions-x86_64 arch/x86_64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/x86_64/kernel/kprobes.c~kprobes-exclude-functions-x86_64 2005-07-08 11:14:01.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/kprobes.c 2005-07-08 11:14:01.0 +0530 @@ -74,7 +74,7 @@ static inline int is_IF_modifier(kprobe_ return 0; } -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { /* insn: must be on special executable page on x86_64. 
*/ up(&kprobe_mutex); @@ -189,7 +189,7 @@ static inline s32 *is_riprel(u8 *insn) return NULL; } -void arch_copy_kprobe(struct kprobe *p) +void __kprobes arch_copy_kprobe(struct kprobe *p) { s32 *ripdisp; memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE); @@ -215,21 +215,21 @@ void arch_copy_kprobe(struct kprobe *p) p->opcode = *p->addr; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { *p->addr = BREAKPOINT_INSTRUCTION; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_disarm_kprobe(struct kprobe *p) +void __kprobes arch_disarm_kprobe(struct kprobe *p) { *p->addr = p->opcode; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_remove_kprobe(struct kprobe *p) +void __kprobes arch_remove_kprobe(struct kprobe *p) { up(&kprobe_mutex); free_insn_slot(p->ainsn.insn); @@ -261,7 +261,7 @@ static inline void set_current_kprobe(st kprobe_saved_rflags &= ~IF_MASK; } -static void prepare_singlestep(struct kprobe *p, struct pt_regs *regs) +static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs) { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; @@ -272,7 +272,8 @@ static void prepare_singlestep(struct kp regs->rip = (unsigned long)p->ainsn.insn; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { unsigned long *sara = (unsigned long *)regs->rsp; struct kretprobe_instance *ri; @@ -295,7 +296,7 @@ void arch_prepare_kretprobe(struct kretp * Interrupts are disabled on entry as trap3 is an interrupt gate and they * remain disabled thorough out this function. 
*/ -int kprobe_handler(struct pt_regs *regs) +int __kprobes kprobe_handler(struct pt_regs *regs) { struct kprobe *p; int ret = 0; @@ -399,7 +400,7 @@ no_kprobe: /* * Called when we hit the probe point at kretprobe_trampoline */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -478,7 +479,7 @@ int trampoline_probe_handler(struct kpro * that is atop the stack is the address following the copied instruction. * We need to make it the address following the original instruction. */ -static void resume_execution(struct kprobe *p, struct pt_regs *regs) +static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) { unsigned long *tos = (unsigned long *)regs->rsp; unsigned long next_rip = 0; @@ -536,7 +537,7 @@ static void resume_execution(struct kpro * Interrupts are disabled on entry as trap1 is an interrupt gate and they * remain disabled thoroughout this function. And we hold kprobe lock. */ -int post_kprobe_handler(struct pt_regs *regs) +int __kprobes post_kprobe_handler(struct pt_regs *regs) { if (!kprobe_running()) return 0; @@ -571,7 +572,7 @@ out: } /* Interrupts disabled, kprobe_lock held. */ -int kprobe_fault_handler(struct pt_regs *regs, int trapnr) +int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr) { if (current_kprobe->fault_handler && current_kprobe->fault_handler
Re: [5/6 PATCH] Kprobes : Prevent possible race conditions ia64 changes
Hi Anil, I have updated the patch as per your comments to move routines from jprobes.S to the .kprobes.text section. Please let me know if you have any issues. Thanks Prasanna This patch contains the ia64 architecture-specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED] --- include/asm-ia64/asmmacro.h |0 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/jprobes.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c | 57 ++- linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/traps.c |5 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/lib/flush.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/mm/fault.c |3 7 files changed, 41 insertions(+), 27 deletions(-) diff -puN arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64 arch/ia64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64 2005-07-08 15:22:52.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c 2005-07-08 15:22:52.0 +0530 @@ -87,8 +87,10 @@ static enum instruction_type bundle_enco * is IP relative instruction and update the kprobe * inst flag accordingly */ -static void update_kprobe_inst_flag(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static void __kprobes update_kprobe_inst_flag(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst, + struct kprobe *p) { p->ainsn.inst_flag = 0; p->ainsn.target_br_reg = 0; @@ -126,8 +128,10 @@ static void update_kprobe_inst_flag(uint * Returns 0 if supported * Returns -EINVAL if unsupported */ -static int unsupported_inst(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static int __kprobes unsupported_inst(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst, + struct kprobe *p) { unsigned long addr = (unsigned long)p->addr; @@ -168,8 +172,9 @@ static int unsupported_inst(uint 
templat * on which we are inserting kprobe is cmp instruction * with ctype as unc. */ -static uint is_cmp_ctype_unc_inst(uint template, uint slot, uint major_opcode, -unsigned long kprobe_inst) +static uint __kprobes is_cmp_ctype_unc_inst(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst) { cmp_inst_t cmp_inst; uint ctype_unc = 0; @@ -201,8 +206,10 @@ out: * In this function we override the bundle with * the break instruction at the given slot. */ -static void prepare_break_inst(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static void __kprobes prepare_break_inst(uint template, uint slot, +uint major_opcode, +unsigned long kprobe_inst, +struct kprobe *p) { unsigned long break_inst = BREAK_INST; bundle_t *bundle = &p->ainsn.insn.bundle; @@ -271,7 +278,8 @@ static inline int in_ivt_functions(unsig && addr < (unsigned long)__end_ivt_text); } -static int valid_kprobe_addr(int template, int slot, unsigned long addr) +static int __kprobes valid_kprobe_addr(int template, int slot, + unsigned long addr) { if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) { printk(KERN_WARNING "Attempting to insert unaligned kprobe " @@ -323,7 +331,7 @@ static void kretprobe_trampoline(void) *- cleanup by marking the instance as unused *- long jump back to the original return address */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -381,7 +389,8 @@ int trampoline_probe_handler(struct kpro return 1; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { struct kretprobe_instance *ri; @@ -399,7 +408,7 @@ void arch_prepare_kretp } } -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { 
unsigned long addr = (unsigned long) p->addr; unsigned long *kprobe_addr = (unsigned
Re: [1/6 PATCH] Kprobes : Prevent possible race conditions generic changes
Hi Andrew, I have modified the patch as per your comments. As Andi mentioned, this patch set provides safety for kprobes and avoids a possible kernel crash. I think this safety feature will help tools like SystemTap, which use the kprobes mechanism. A kprobes cleanup patch to fix the coding style is also on the way. Please let me know if you have any issues. Thanks Prasanna There are possible race conditions if probes are placed on routines within the kprobes files or on routines used by kprobes. For example, if you put a probe on the get_kprobe() routine, the system can hang while inserting a probe on any routine such as do_fork(): while inserting the probe on do_fork(), register_kprobe() grabs the kprobes spin lock and calls get_kprobe(); to handle the probe hit inside get_kprobe(), kprobe_handler() is invoked and tries to grab the kprobes spin lock again, and spins forever. This patch avoids such race conditions by preventing probes on routines within the kprobes file and on routines used by kprobes. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/include/asm-generic/sections.h|1 linux-2.6.13-rc1-mm1-prasanna/include/asm-generic/vmlinux.lds.h |5 linux-2.6.13-rc1-mm1-prasanna/include/linux/kprobes.h |3 linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c | 72 +- 4 files changed, 52 insertions(+), 29 deletions(-) diff -puN kernel/kprobes.c~kprobes-exclude-functions-generic kernel/kprobes.c --- linux-2.6.13-rc1-mm1/kernel/kprobes.c~kprobes-exclude-functions-generic 2005-07-07 17:13:26.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c 2005-07-07 17:18:44.0 +0530 @@ -37,6 +37,7 @@ #include <linux/init.h> #include <linux/module.h> #include <linux/moduleloader.h> +#include <asm-generic/sections.h> #include <asm/cacheflush.h> #include <asm/errno.h> #include <asm/kdebug.h> @@ -72,7 +73,7 @@ static struct hlist_head kprobe_insn_pag * get_insn_slot() - Find a slot on an executable page for an instruction. * We allocate an executable page if there's no room on existing ones. 
*/ -kprobe_opcode_t *get_insn_slot(void) +kprobe_opcode_t __kprobes *get_insn_slot(void) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -117,7 +118,7 @@ kprobe_opcode_t *get_insn_slot(void) return kip->insns; } -void free_insn_slot(kprobe_opcode_t *slot) +void __kprobes free_insn_slot(kprobe_opcode_t *slot) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -152,20 +153,20 @@ void free_insn_slot(kprobe_opcode_t *slo } /* Locks kprobe: irqs must be disabled */ -void lock_kprobes(void) +void __kprobes lock_kprobes(void) { spin_lock(&kprobe_lock); kprobe_cpu = smp_processor_id(); } -void unlock_kprobes(void) +void __kprobes unlock_kprobes(void) { kprobe_cpu = NR_CPUS; spin_unlock(&kprobe_lock); } /* You have to be holding the kprobe_lock */ -struct kprobe *get_kprobe(void *addr) +struct kprobe __kprobes *get_kprobe(void *addr) { struct hlist_head *head; struct hlist_node *node; @@ -183,7 +184,7 @@ struct kprobe *get_kprobe(void *addr) * Aggregate handlers for multiple kprobes support - these handlers * take care of invoking the individual kprobe handlers on p->list */ -static int aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) +static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp; @@ -198,8 +199,8 @@ static int aggr_pre_handler(struct kprob return 0; } -static void aggr_post_handler(struct kprobe *p, struct pt_regs *regs, - unsigned long flags) +static void __kprobes aggr_post_handler(struct kprobe *p, struct pt_regs *regs, + unsigned long flags) { struct kprobe *kp; @@ -213,8 +214,8 @@ static void aggr_post_handler(struct kpr return; } -static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, - int trapnr) +static int __kprobes aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, + int trapnr) { /* * if we faulted "during" the execution of a user specified * handler... @@ -227,7 +228,7 @@ static int aggr_fault_handler(struct kpr return 0; } -static int aggr_break_handler(struct kprobe *p, struct 
pt_regs *regs) +static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp = curr_kprobe; if (curr_kprobe && kp->break_handler) { @@ -240,7 +241,7 @@ static int aggr_break_handler(struct kpr return 0; } -struct kretprobe_instance *get_free_rp_inst(struct kretprobe *rp) +struct kretprobe_instance __kprobes *get_free_rp_inst(struct kretprobe *rp) { struct hlist_node *node; struct kretprobe_instance *ri; @@ -249,7 +250,8 @@ struct kretprobe_instance *get_free_rp_i return NULL; } -static struct kre
Re: [6/6 PATCH] Kprobes : Prevent possible race conditions sparc64 changes
This patch contains the sparc64 architecture specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/kernel/kprobes.c | 36 +- linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/mm/fault.c |8 +- linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/mm/init.c|3 linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/mm/ultra.S |2 5 files changed, 30 insertions(+), 20 deletions(-) diff -puN arch/sparc64/kernel/kprobes.c~kprobes-exclude-functions-sparc64 arch/sparc64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/sparc64/kernel/kprobes.c~kprobes-exclude-functions-sparc64 2005-07-06 20:08:40.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/kernel/kprobes.c 2005-07-06 20:08:40.0 +0530 @@ -8,6 +8,7 @@ #include #include #include +#include /* We do not have hardware single-stepping on sparc64. * So we implement software single-stepping with breakpoint @@ -37,31 +38,31 @@ * - Mark that we are no longer actively in a kprobe. 
*/ -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { return 0; } -void arch_copy_kprobe(struct kprobe *p) +void __kprobes arch_copy_kprobe(struct kprobe *p) { p->ainsn.insn[0] = *p->addr; p->ainsn.insn[1] = BREAKPOINT_INSTRUCTION_2; p->opcode = *p->addr; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { *p->addr = BREAKPOINT_INSTRUCTION; flushi(p->addr); } -void arch_disarm_kprobe(struct kprobe *p) +void __kprobes arch_disarm_kprobe(struct kprobe *p) { *p->addr = p->opcode; flushi(p->addr); } -void arch_remove_kprobe(struct kprobe *p) +void __kprobes arch_remove_kprobe(struct kprobe *p) { } @@ -111,7 +112,7 @@ static inline void prepare_singlestep(st } } -static int kprobe_handler(struct pt_regs *regs) +static int __kprobes kprobe_handler(struct pt_regs *regs) { struct kprobe *p; void *addr = (void *) regs->tpc; @@ -191,8 +192,9 @@ no_kprobe: * The original INSN location was REAL_PC, it actually * executed at PC and produced destination address NPC. */ -static unsigned long relbranch_fixup(u32 insn, unsigned long real_pc, -unsigned long pc, unsigned long npc) +static unsigned long __kprobes relbranch_fixup(u32 insn, unsigned long real_pc, + unsigned long pc, + unsigned long npc) { /* Branch not taken, no mods necessary. */ if (npc == pc + 0x4UL) @@ -217,7 +219,8 @@ static unsigned long relbranch_fixup(u32 /* If INSN is an instruction which writes it's PC location * into a destination register, fix that up. */ -static void retpc_fixup(struct pt_regs *regs, u32 insn, unsigned long real_pc) +static void __kprobes retpc_fixup(struct pt_regs *regs, u32 insn, + unsigned long real_pc) { unsigned long *slot = NULL; @@ -257,7 +260,7 @@ static void retpc_fixup(struct pt_regs * * This function prepares to return from the post-single-step * breakpoint trap. 
*/ -static void resume_execution(struct kprobe *p, struct pt_regs *regs) +static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) { u32 insn = p->ainsn.insn[0]; @@ -315,8 +318,8 @@ static inline int kprobe_fault_handler(s /* * Wrapper routine to for handling exceptions. */ -int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, -void *data) +int __kprobes kprobe_exceptions_notify(struct notifier_block *self, + unsigned long val, void *data) { struct die_args *args = (struct die_args *)data; switch (val) { @@ -344,7 +347,8 @@ int kprobe_exceptions_notify(struct noti return NOTIFY_DONE; } -asmlinkage void kprobe_trap(unsigned long trap_level, struct pt_regs *regs) +asmlinkage void __kprobes kprobe_trap(unsigned long trap_level, + struct pt_regs *regs) { BUG_ON(trap_level != 0x170 && trap_level != 0x171); @@ -368,7 +372,7 @@ static struct pt_regs jprobe_saved_regs; static struct pt_regs *jprobe_saved_regs_location; static struct sparc_stackf jprobe_saved_stack; -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct jprobe *jp = container_of(p, struct jprobe, kp); @@ -390,7 +394,7 @@ int setjmp_pre_handler(struct kprobe *p, return 1; } -void jprobe_return(void) +void __kprobes jprobe_return(void) { preempt_enable_no_
Re: [5/6 PATCH] Kprobes : Prevent possible race conditions ia64 changes
This patch contains the ia64 architecture specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c | 57 ++- linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/traps.c |5 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/lib/flush.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/mm/fault.c |3 5 files changed, 40 insertions(+), 27 deletions(-) diff -puN arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64 arch/ia64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64 2005-07-07 11:19:05.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c2005-07-07 11:19:05.0 +0530 @@ -87,8 +87,10 @@ static enum instruction_type bundle_enco * is IP relative instruction and update the kprobe * inst flag accordingly */ -static void update_kprobe_inst_flag(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static void __kprobes update_kprobe_inst_flag(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst, + struct kprobe *p) { p->ainsn.inst_flag = 0; p->ainsn.target_br_reg = 0; @@ -126,8 +128,10 @@ static void update_kprobe_inst_flag(uint * Returns 0 if supported * Returns -EINVAL if unsupported */ -static int unsupported_inst(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static int __kprobes unsupported_inst(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst, + struct kprobe *p) { unsigned long addr = (unsigned long)p->addr; @@ -168,8 +172,9 @@ static int unsupported_inst(uint templat * on which we are inserting kprobe is cmp instruction * with ctype as unc. 
*/ -static uint is_cmp_ctype_unc_inst(uint template, uint slot, uint major_opcode, -unsigned long kprobe_inst) +static uint __kprobes is_cmp_ctype_unc_inst(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst) { cmp_inst_t cmp_inst; uint ctype_unc = 0; @@ -201,8 +206,10 @@ out: * In this function we override the bundle with * the break instruction at the given slot. */ -static void prepare_break_inst(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static void __kprobes prepare_break_inst(uint template, uint slot, +uint major_opcode, +unsigned long kprobe_inst, +struct kprobe *p) { unsigned long break_inst = BREAK_INST; bundle_t *bundle = &p->ainsn.insn.bundle; @@ -271,7 +278,8 @@ static inline int in_ivt_functions(unsig && addr < (unsigned long)__end_ivt_text); } -static int valid_kprobe_addr(int template, int slot, unsigned long addr) +static int __kprobes valid_kprobe_addr(int template, int slot, + unsigned long addr) { if ((slot > 2) || ((bundle_encoding[template][1] == L) && slot > 1)) { printk(KERN_WARNING "Attempting to insert unaligned kprobe " @@ -323,7 +331,7 @@ static void kretprobe_trampoline(void) *- cleanup by marking the instance as unused *- long jump back to the original return address */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -381,7 +389,8 @@ int trampoline_probe_handler(struct kpro return 1; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { struct kretprobe_instance *ri; @@ -399,7 +408,7 @@ void arch_prepare_kretp } } -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { unsigned long addr = (unsigned long) p->addr; unsigned long 
*kprobe_addr = (unsigned long *)(addr & ~0xFULL); @@ -430,7 +439,7 @@ int arch_prepare_kprobe(struct kprobe *p return 0; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { unsigned long addr = (
Re: [4/6 PATCH] Kprobes : Prevent possible race conditions ppc64 changes
This patch contains the ppc64 architecture specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/arch/ppc64/kernel/kprobes.c | 29 +- linux-2.6.13-rc1-mm1-prasanna/arch/ppc64/kernel/misc.S|4 - linux-2.6.13-rc1-mm1-prasanna/arch/ppc64/kernel/traps.c |5 + linux-2.6.13-rc1-mm1-prasanna/arch/ppc64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ppc64/mm/fault.c |5 + linux-2.6.13-rc1-mm1-prasanna/include/asm-ppc64/processor.h | 14 6 files changed, 38 insertions(+), 20 deletions(-) diff -puN arch/ppc64/kernel/kprobes.c~kprobes-exclude-functions-ppc64 arch/ppc64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/ppc64/kernel/kprobes.c~kprobes-exclude-functions-ppc64 2005-07-06 20:07:22.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/ppc64/kernel/kprobes.c 2005-07-06 20:07:22.0 +0530 @@ -44,7 +44,7 @@ static struct kprobe *kprobe_prev; static unsigned long kprobe_status_prev, kprobe_saved_msr_prev; static struct pt_regs jprobe_saved_regs; -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { int ret = 0; kprobe_opcode_t insn = *p->addr; @@ -68,27 +68,27 @@ int arch_prepare_kprobe(struct kprobe *p return ret; } -void arch_copy_kprobe(struct kprobe *p) +void __kprobes arch_copy_kprobe(struct kprobe *p) { memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); p->opcode = *p->addr; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { *p->addr = BREAKPOINT_INSTRUCTION; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_disarm_kprobe(struct kprobe *p) +void __kprobes arch_disarm_kprobe(struct kprobe *p) { *p->addr = p->opcode; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_remove_kprobe(struct kprobe *p) +void __kprobes arch_remove_kprobe(struct kprobe *p) { 
up(&kprobe_mutex); free_insn_slot(p->ainsn.insn); @@ -122,7 +122,8 @@ static inline void restore_previous_kpro kprobe_saved_msr = kprobe_saved_msr_prev; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { struct kretprobe_instance *ri; @@ -244,7 +245,7 @@ void kretprobe_trampoline_holder(void) /* * Called when the probe at kretprobe trampoline is hit */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -308,7 +309,7 @@ int trampoline_probe_handler(struct kpro * single-stepped a copy of the instruction. The address of this * copy is p->ainsn.insn. */ -static void resume_execution(struct kprobe *p, struct pt_regs *regs) +static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) { int ret; unsigned int insn = *p->ainsn.insn; @@ -373,8 +374,8 @@ static inline int kprobe_fault_handler(s /* * Wrapper routine to for handling exceptions.
*/ -int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, -void *data) +int __kprobes kprobe_exceptions_notify(struct notifier_block *self, + unsigned long val, void *data) { struct die_args *args = (struct die_args *)data; int ret = NOTIFY_DONE; @@ -406,7 +407,7 @@ int kprobe_exceptions_notify(struct noti return ret; } -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct jprobe *jp = container_of(p, struct jprobe, kp); @@ -419,16 +420,16 @@ int setjmp_pre_handler(struct kprobe *p, return 1; } -void jprobe_return(void) +void __kprobes jprobe_return(void) { asm volatile("trap" ::: "memory"); } -void jprobe_return_end(void) +void __kprobes jprobe_return_end(void) { }; -int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) { /* * FIXME - we should ideally be validating that we got here 'cos diff -puN arch/ppc64/kernel/traps.c~kprobes-exclude-functions-ppc64 arch/ppc64/kernel/traps.c --- linux-2.6.13-rc1-mm1/arch/ppc64/kernel/traps.c~kprobes-exclude-functions-ppc64 2005-07-06 20:07:22.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/ppc64
Re: [3/6 PATCH] Kprobes : Prevent possible race conditions x86_64 changes
This patch contains the x86_64 architecture specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/kprobes.c | 35 +- linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/traps.c | 14 ++-- linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/mm/fault.c |3 4 files changed, 30 insertions(+), 23 deletions(-) diff -puN arch/x86_64/kernel/kprobes.c~kprobes-exclude-functions-x86_64 arch/x86_64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/x86_64/kernel/kprobes.c~kprobes-exclude-functions-x86_64 2005-07-06 17:45:18.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/x86_64/kernel/kprobes.c 2005-07-06 17:45:43.0 +0530 @@ -74,7 +74,7 @@ static inline int is_IF_modifier(kprobe_ return 0; } -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { /* insn: must be on special executable page on x86_64. 
*/ up(&kprobe_mutex); @@ -189,7 +189,7 @@ static inline s32 *is_riprel(u8 *insn) return NULL; } -void arch_copy_kprobe(struct kprobe *p) +void __kprobes arch_copy_kprobe(struct kprobe *p) { s32 *ripdisp; memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE); @@ -215,21 +215,21 @@ void arch_copy_kprobe(struct kprobe *p) p->opcode = *p->addr; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { *p->addr = BREAKPOINT_INSTRUCTION; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_disarm_kprobe(struct kprobe *p) +void __kprobes arch_disarm_kprobe(struct kprobe *p) { *p->addr = p->opcode; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_remove_kprobe(struct kprobe *p) +void __kprobes arch_remove_kprobe(struct kprobe *p) { up(&kprobe_mutex); free_insn_slot(p->ainsn.insn); @@ -261,7 +261,7 @@ static inline void set_current_kprobe(st kprobe_saved_rflags &= ~IF_MASK; } -static void prepare_singlestep(struct kprobe *p, struct pt_regs *regs) +static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs) { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; @@ -272,7 +272,8 @@ static void prepare_singlestep(struct kp regs->rip = (unsigned long)p->ainsn.insn; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { unsigned long *sara = (unsigned long *)regs->rsp; struct kretprobe_instance *ri; @@ -295,7 +296,7 @@ void arch_prepare_kretp * Interrupts are disabled on entry as trap3 is an interrupt gate and they * remain disabled thorough out this function.
*/ -int kprobe_handler(struct pt_regs *regs) +int __kprobes kprobe_handler(struct pt_regs *regs) { struct kprobe *p; int ret = 0; @@ -399,7 +400,7 @@ no_kprobe: /* * Called when we hit the probe point at kretprobe_trampoline */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -478,7 +479,7 @@ int trampoline_probe_handler(struct kpro * that is atop the stack is the address following the copied instruction. * We need to make it the address following the original instruction. */ -static void resume_execution(struct kprobe *p, struct pt_regs *regs) +static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) { unsigned long *tos = (unsigned long *)regs->rsp; unsigned long next_rip = 0; @@ -536,7 +537,7 @@ static void resume_execution(struct kpro * Interrupts are disabled on entry as trap1 is an interrupt gate and they * remain disabled thoroughout this function. And we hold kprobe lock. */ -int post_kprobe_handler(struct pt_regs *regs) +int __kprobes post_kprobe_handler(struct pt_regs *regs) { if (!kprobe_running()) return 0; @@ -571,7 +572,7 @@ out: } /* Interrupts disabled, kprobe_lock held. */ -int kprobe_fault_handler(struct pt_regs *regs, int trapnr) +int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr) { if (current_kprobe->fault_handler && current_kprobe->fault_handler(current_kprobe, regs, trapnr)) @@ -590,8 +591,8 @@ int kprobe_fault_handler(struct pt_regs /* * Wrapper routine for handling exceptions. */ -int kprobe_exceptions_notify(struc
Re: [2/6 PATCH] Kprobes : Prevent possible race conditions i386 changes
This patch contains the i386 architecture specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/kprobes.c | 29 +-- linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/traps.c | 12 ++-- linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/i386/mm/fault.c |4 + 4 files changed, 26 insertions(+), 20 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-exclude-functions-i386 arch/i386/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/i386/kernel/kprobes.c~kprobes-exclude-functions-i386 2005-07-06 17:31:04.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/i386/kernel/kprobes.c 2005-07-06 17:43:59.0 +0530 @@ -62,32 +62,32 @@ static inline int is_IF_modifier(kprobe_ return 0; } -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { return 0; } -void arch_copy_kprobe(struct kprobe *p) +void __kprobes arch_copy_kprobe(struct kprobe *p) { memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); p->opcode = *p->addr; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { *p->addr = BREAKPOINT_INSTRUCTION; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_disarm_kprobe(struct kprobe *p) +void __kprobes arch_disarm_kprobe(struct kprobe *p) { *p->addr = p->opcode; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); } -void arch_remove_kprobe(struct kprobe *p) +void __kprobes arch_remove_kprobe(struct kprobe *p) { } @@ -127,7 +127,8 @@ static inline void prepare_singlestep(st regs->eip = (unsigned long)&p->ainsn.insn; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { unsigned long *sara = (unsigned long *)&regs->esp; struct
kretprobe_instance *ri; @@ -150,7 +151,7 @@ void arch_prepare_kretp * Interrupts are disabled on entry as trap3 is an interrupt gate and they * remain disabled thorough out this function. */ -static int kprobe_handler(struct pt_regs *regs) +static int __kprobes kprobe_handler(struct pt_regs *regs) { struct kprobe *p; int ret = 0; @@ -259,7 +260,7 @@ no_kprobe: /* * Called when we hit the probe point at kretprobe_trampoline */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -338,7 +339,7 @@ int trampoline_probe_handler(struct kpro * that is atop the stack is the address following the copied instruction. * We need to make it the address following the original instruction. */ -static void resume_execution(struct kprobe *p, struct pt_regs *regs) +static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) { unsigned long *tos = (unsigned long *)&regs->esp; unsigned long next_eip = 0; @@ -444,8 +445,8 @@ static inline int kprobe_fault_handler(s /* * Wrapper routine to for handling exceptions.
*/ -int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, -void *data) +int __kprobes kprobe_exceptions_notify(struct notifier_block *self, + unsigned long val, void *data) { struct die_args *args = (struct die_args *)data; switch (val) { @@ -473,7 +474,7 @@ int kprobe_exceptions_notify(struct noti return NOTIFY_DONE; } -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct jprobe *jp = container_of(p, struct jprobe, kp); unsigned long addr; @@ -495,7 +496,7 @@ int setjmp_pre_handler(struct kprobe *p, return 1; } -void jprobe_return(void) +void __kprobes jprobe_return(void) { preempt_enable_no_resched(); asm volatile (" xchgl %%ebx,%%esp \n" @@ -506,7 +507,7 @@ void jprobe_return(void) (jprobe_saved_esp):"memory"); } -int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) { u8 *addr = (u8 *) (regs->eip - 1); unsigned long stack_addr = (unsigned long)jprobe_saved_esp; diff -puN arch/i386/kernel/traps.c~kprobes-exclude-functions-i386 arch/i386/kernel/
[1/6 PATCH] Kprobes : Prevent possible race conditions generic changes
Hi, please provide your feedback on this kprobes patch set. Thanks, Prasanna. There are possible race conditions if probes are placed on routines within the kprobes files or on routines used by kprobes. For example, if you put a probe on the get_kprobe() routine, the system can hang while inserting a probe on any routine such as do_fork(): while inserting the probe on do_fork(), register_kprobe() grabs the kprobes spin lock and calls get_kprobe(); to handle the probe on get_kprobe(), kprobe_handler() is invoked and tries to grab the kprobes spin lock again, and spins forever. This patch avoids such race conditions by preventing probes on routines within the kprobes file and on routines used by kprobes. I have modified the patches as per Andi Kleen's suggestion, moving the kprobes routines and other routines used by kprobes to a separate section, .kprobes.text, and also moving the page fault and exception handlers into that section. These patches have been tested on the i386, x86_64 and ppc64 architectures, and compiled on ia64 and sparc64. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.13-rc1-mm1-prasanna/include/asm-generic/vmlinux.lds.h |5 linux-2.6.13-rc1-mm1-prasanna/include/linux/kprobes.h |4 linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c | 69 ++ 3 files changed, 51 insertions(+), 27 deletions(-) diff -puN kernel/kprobes.c~kprobes-exclude-functions-generic kernel/kprobes.c --- linux-2.6.13-rc1-mm1/kernel/kprobes.c~kprobes-exclude-functions-generic 2005-07-06 18:51:16.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c 2005-07-06 18:51:45.0 +0530 @@ -72,7 +72,7 @@ static struct hlist_head kprobe_insn_pag * get_insn_slot() - Find a slot on an executable page for an instruction. * We allocate an executable page if there's no room on existing ones.
*/ -kprobe_opcode_t *get_insn_slot(void) +kprobe_opcode_t * __kprobes get_insn_slot(void) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -117,7 +117,7 @@ kprobe_opcode_t *get_insn_slot(void) return kip->insns; } -void free_insn_slot(kprobe_opcode_t *slot) +void __kprobes free_insn_slot(kprobe_opcode_t *slot) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -152,20 +152,20 @@ void free_insn_slot(kprobe_opcode_t *slo } /* Locks kprobe: irqs must be disabled */ -void lock_kprobes(void) +void __kprobes lock_kprobes(void) { spin_lock(&kprobe_lock); kprobe_cpu = smp_processor_id(); } -void unlock_kprobes(void) +void __kprobes unlock_kprobes(void) { kprobe_cpu = NR_CPUS; spin_unlock(&kprobe_lock); } /* You have to be holding the kprobe_lock */ -struct kprobe *get_kprobe(void *addr) +struct kprobe * __kprobes get_kprobe(void *addr) { struct hlist_head *head; struct hlist_node *node; @@ -183,7 +183,7 @@ struct kprobe *get_kprobe(void *addr) * Aggregate handlers for multiple kprobes support - these handlers * take care of invoking the individual kprobe handlers on p->list */ -static int aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) +static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp; @@ -198,8 +198,8 @@ static int aggr_pre_handler(struct kprob return 0; } -static void aggr_post_handler(struct kprobe *p, struct pt_regs *regs, - unsigned long flags) +static void __kprobes aggr_post_handler(struct kprobe *p, struct pt_regs *regs, + unsigned long flags) { struct kprobe *kp; @@ -213,8 +213,8 @@ static void aggr_post_handler(struct kpr return; } -static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, - int trapnr) +static int __kprobes aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, + int trapnr) { /* * if we faulted "during" the execution of a user specified @@ -227,7 +227,7 @@ static int aggr_fault_handler(struct kpr return 0; } -static int aggr_break_handler(struct kprobe *p, struct
pt_regs *regs) +static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp = curr_kprobe; if (curr_kprobe && kp->break_handler) { @@ -240,7 +240,7 @@ static int aggr_break_handler(struct kpr return 0; } -struct kretprobe_instance *get_free_rp_inst(struct kretprobe *rp) +struct kretprobe_instance * __kprobes get_free_rp_inst(struct kretprobe *rp) { struct hlist_node *node; struct kretprobe_instance *ri; @@ -249,7 +249,8 @@ struct kretprobe_instance *get_free_rp_i return NULL; } -static struct kretprobe_instance *get_used_rp_inst(struct kretprobe *rp) +static struct kretprobe_instance * __kprobes
Re: [5/6 PATCH] Kprobes : Prevent possible race conditions ia64 changes
This patch contains the ia64 architecture specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED] --- linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c | 57 ++- linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/traps.c |5 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/lib/flush.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/ia64/mm/fault.c |3 5 files changed, 40 insertions(+), 27 deletions(-) diff -puN arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64 arch/ia64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/ia64/kernel/kprobes.c~kprobes-exclude-functions-ia64 2005-07-07 11:19:05.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/ia64/kernel/kprobes.c2005-07-07 11:19:05.0 +0530 @@ -87,8 +87,10 @@ static enum instruction_type bundle_enco * is IP relative instruction and update the kprobe * inst flag accordingly */ -static void update_kprobe_inst_flag(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static void __kprobes update_kprobe_inst_flag(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst, + struct kprobe *p) { p-ainsn.inst_flag = 0; p-ainsn.target_br_reg = 0; @@ -126,8 +128,10 @@ static void update_kprobe_inst_flag(uint * Returns 0 if supported * Returns -EINVAL if unsupported */ -static int unsupported_inst(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static int __kprobes unsupported_inst(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst, + struct kprobe *p) { unsigned long addr = (unsigned long)p-addr; @@ -168,8 +172,9 @@ static int unsupported_inst(uint templat * on which we are inserting kprobe is cmp instruction * with ctype as unc. 
*/ -static uint is_cmp_ctype_unc_inst(uint template, uint slot, uint major_opcode, -unsigned long kprobe_inst) +static uint __kprobes is_cmp_ctype_unc_inst(uint template, uint slot, + uint major_opcode, + unsigned long kprobe_inst) { cmp_inst_t cmp_inst; uint ctype_unc = 0; @@ -201,8 +206,10 @@ out: * In this function we override the bundle with * the break instruction at the given slot. */ -static void prepare_break_inst(uint template, uint slot, uint major_opcode, - unsigned long kprobe_inst, struct kprobe *p) +static void __kprobes prepare_break_inst(uint template, uint slot, +uint major_opcode, +unsigned long kprobe_inst, +struct kprobe *p) { unsigned long break_inst = BREAK_INST; bundle_t *bundle = p-ainsn.insn.bundle; @@ -271,7 +278,8 @@ static inline int in_ivt_functions(unsig addr (unsigned long)__end_ivt_text); } -static int valid_kprobe_addr(int template, int slot, unsigned long addr) +static int __kprobes valid_kprobe_addr(int template, int slot, + unsigned long addr) { if ((slot 2) || ((bundle_encoding[template][1] == L) slot 1)) { printk(KERN_WARNING Attempting to insert unaligned kprobe @@ -323,7 +331,7 @@ static void kretprobe_trampoline(void) *- cleanup by marking the instance as unused *- long jump back to the original return address */ -int trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head; @@ -381,7 +389,8 @@ int trampoline_probe_handler(struct kpro return 1; } -void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs) +void __kprobes arch_prepare_kretprobe(struct kretprobe *rp, + struct pt_regs *regs) { struct kretprobe_instance *ri; @@ -399,7 +408,7 @@ void arch_prepare_kretprobe(struct kretp } } -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { unsigned long addr = (unsigned long) p-addr; unsigned long *kprobe_addr = (unsigned 
long *)(addr ~0xFULL); @@ -430,7 +439,7 @@ int arch_prepare_kprobe(struct kprobe *p return 0; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { unsigned long addr = (unsigned long)p-addr; unsigned long arm_addr = addr ~0xFULL; @@ -439,7
Re: [6/6 PATCH] Kprobes : Prevent possible race conditions sparc64 changes
This patch contains the sparc64 architecture specific changes to prevent the possible race conditions. Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED] --- linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/kernel/kprobes.c | 36 +- linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/kernel/vmlinux.lds.S |1 linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/mm/fault.c |8 +- linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/mm/init.c|3 linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/mm/ultra.S |2 5 files changed, 30 insertions(+), 20 deletions(-) diff -puN arch/sparc64/kernel/kprobes.c~kprobes-exclude-functions-sparc64 arch/sparc64/kernel/kprobes.c --- linux-2.6.13-rc1-mm1/arch/sparc64/kernel/kprobes.c~kprobes-exclude-functions-sparc64 2005-07-06 20:08:40.0 +0530 +++ linux-2.6.13-rc1-mm1-prasanna/arch/sparc64/kernel/kprobes.c 2005-07-06 20:08:40.0 +0530 @@ -8,6 +8,7 @@ #include linux/kprobes.h #include asm/kdebug.h #include asm/signal.h +#include asm/cacheflush.h /* We do not have hardware single-stepping on sparc64. * So we implement software single-stepping with breakpoint @@ -37,31 +38,31 @@ * - Mark that we are no longer actively in a kprobe. 
*/ -int arch_prepare_kprobe(struct kprobe *p) +int __kprobes arch_prepare_kprobe(struct kprobe *p) { return 0; } -void arch_copy_kprobe(struct kprobe *p) +void __kprobes arch_copy_kprobe(struct kprobe *p) { p-ainsn.insn[0] = *p-addr; p-ainsn.insn[1] = BREAKPOINT_INSTRUCTION_2; p-opcode = *p-addr; } -void arch_arm_kprobe(struct kprobe *p) +void __kprobes arch_arm_kprobe(struct kprobe *p) { *p-addr = BREAKPOINT_INSTRUCTION; flushi(p-addr); } -void arch_disarm_kprobe(struct kprobe *p) +void __kprobes arch_disarm_kprobe(struct kprobe *p) { *p-addr = p-opcode; flushi(p-addr); } -void arch_remove_kprobe(struct kprobe *p) +void __kprobes arch_remove_kprobe(struct kprobe *p) { } @@ -111,7 +112,7 @@ static inline void prepare_singlestep(st } } -static int kprobe_handler(struct pt_regs *regs) +static int __kprobes kprobe_handler(struct pt_regs *regs) { struct kprobe *p; void *addr = (void *) regs-tpc; @@ -191,8 +192,9 @@ no_kprobe: * The original INSN location was REAL_PC, it actually * executed at PC and produced destination address NPC. */ -static unsigned long relbranch_fixup(u32 insn, unsigned long real_pc, -unsigned long pc, unsigned long npc) +static unsigned long __kprobes relbranch_fixup(u32 insn, unsigned long real_pc, + unsigned long pc, + unsigned long npc) { /* Branch not taken, no mods necessary. */ if (npc == pc + 0x4UL) @@ -217,7 +219,8 @@ static unsigned long relbranch_fixup(u32 /* If INSN is an instruction which writes it's PC location * into a destination register, fix that up. */ -static void retpc_fixup(struct pt_regs *regs, u32 insn, unsigned long real_pc) +static void __kprobes retpc_fixup(struct pt_regs *regs, u32 insn, + unsigned long real_pc) { unsigned long *slot = NULL; @@ -257,7 +260,7 @@ static void retpc_fixup(struct pt_regs * * This function prepares to return from the post-single-step * breakpoint trap. 
*/ -static void resume_execution(struct kprobe *p, struct pt_regs *regs) +static void __kprobes resume_execution(struct kprobe *p, struct pt_regs *regs) { u32 insn = p-ainsn.insn[0]; @@ -315,8 +318,8 @@ static inline int kprobe_fault_handler(s /* * Wrapper routine to for handling exceptions. */ -int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, -void *data) +int __kprobes kprobe_exceptions_notify(struct notifier_block *self, + unsigned long val, void *data) { struct die_args *args = (struct die_args *)data; switch (val) { @@ -344,7 +347,8 @@ int kprobe_exceptions_notify(struct noti return NOTIFY_DONE; } -asmlinkage void kprobe_trap(unsigned long trap_level, struct pt_regs *regs) +asmlinkage void __kprobes kprobe_trap(unsigned long trap_level, + struct pt_regs *regs) { BUG_ON(trap_level != 0x170 trap_level != 0x171); @@ -368,7 +372,7 @@ static struct pt_regs jprobe_saved_regs; static struct pt_regs *jprobe_saved_regs_location; static struct sparc_stackf jprobe_saved_stack; -int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) +int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct jprobe *jp = container_of(p, struct jprobe, kp); @@ -390,7 +394,7 @@ int setjmp_pre_handler(struct kprobe *p, return 1; } -void jprobe_return(void) +void __kprobes jprobe_return(void) { preempt_enable_no_resched
Re: [1/6 PATCH] Kprobes : Prevent possible race conditions generic changes
Hi Andrew,

I have modified the patch as per your comments. As Andi mentioned, this patch set provides safety for kprobes and avoids a possible kernel crash; I think this safety feature will help tools such as SystemTap, which use the kprobes mechanism. A kprobes cleanup patch to fix the coding style is also on the way. Please let me know if you have any issues.

Thanks, Prasanna

There are possible race conditions if probes are placed on routines within the kprobes files or on routines used by kprobes. For example, if you put a probe on the get_kprobe() routine, the system can hang while inserting a probe on any other routine such as do_fork(): while inserting a probe on do_fork(), register_kprobe() grabs the kprobes spinlock and calls get_kprobe(); to handle the probe on get_kprobe(), kprobe_handler() is invoked and tries to grab the same kprobes spinlock, and spins forever. This patch avoids such race conditions by preventing probes on routines within the kprobes file and on routines used by kprobes.

Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED]
---

 linux-2.6.13-rc1-mm1-prasanna/include/asm-generic/sections.h    |    1 
 linux-2.6.13-rc1-mm1-prasanna/include/asm-generic/vmlinux.lds.h |    5 
 linux-2.6.13-rc1-mm1-prasanna/include/linux/kprobes.h           |    3 
 linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c                  |   72 +-
 4 files changed, 52 insertions(+), 29 deletions(-)

diff -puN kernel/kprobes.c~kprobes-exclude-functions-generic kernel/kprobes.c
--- linux-2.6.13-rc1-mm1/kernel/kprobes.c~kprobes-exclude-functions-generic	2005-07-07 17:13:26.0 +0530
+++ linux-2.6.13-rc1-mm1-prasanna/kernel/kprobes.c	2005-07-07 17:18:44.0 +0530
@@ -37,6 +37,7 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/moduleloader.h>
+#include <asm-generic/sections.h>
 #include <asm/cacheflush.h>
 #include <asm/errno.h>
 #include <asm/kdebug.h>
@@ -72,7 +73,7 @@ static struct hlist_head kprobe_insn_pag
 * get_insn_slot() - Find a slot on an executable page for an instruction.
* We allocate an executable page if there's no room on existing ones. */ -kprobe_opcode_t *get_insn_slot(void) +kprobe_opcode_t __kprobes *get_insn_slot(void) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -117,7 +118,7 @@ kprobe_opcode_t *get_insn_slot(void) return kip-insns; } -void free_insn_slot(kprobe_opcode_t *slot) +void __kprobes free_insn_slot(kprobe_opcode_t *slot) { struct kprobe_insn_page *kip; struct hlist_node *pos; @@ -152,20 +153,20 @@ void free_insn_slot(kprobe_opcode_t *slo } /* Locks kprobe: irqs must be disabled */ -void lock_kprobes(void) +void __kprobes lock_kprobes(void) { spin_lock(kprobe_lock); kprobe_cpu = smp_processor_id(); } -void unlock_kprobes(void) +void __kprobes unlock_kprobes(void) { kprobe_cpu = NR_CPUS; spin_unlock(kprobe_lock); } /* You have to be holding the kprobe_lock */ -struct kprobe *get_kprobe(void *addr) +struct kprobe __kprobes *get_kprobe(void *addr) { struct hlist_head *head; struct hlist_node *node; @@ -183,7 +184,7 @@ struct kprobe *get_kprobe(void *addr) * Aggregate handlers for multiple kprobes support - these handlers * take care of invoking the individual kprobe handlers on p-list */ -static int aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) +static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp; @@ -198,8 +199,8 @@ static int aggr_pre_handler(struct kprob return 0; } -static void aggr_post_handler(struct kprobe *p, struct pt_regs *regs, - unsigned long flags) +static void __kprobes aggr_post_handler(struct kprobe *p, struct pt_regs *regs, + unsigned long flags) { struct kprobe *kp; @@ -213,8 +214,8 @@ static void aggr_post_handler(struct kpr return; } -static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, - int trapnr) +static int __kprobes aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, + int trapnr) { /* * if we faulted during the execution of a user specified @@ -227,7 +228,7 @@ static int aggr_fault_handler(struct 
kpr return 0; } -static int aggr_break_handler(struct kprobe *p, struct pt_regs *regs) +static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp = curr_kprobe; if (curr_kprobe kp-break_handler) { @@ -240,7 +241,7 @@ static int aggr_break_handler(struct kpr return 0; } -struct kretprobe_instance *get_free_rp_inst(struct kretprobe *rp) +struct kretprobe_instance __kprobes *get_free_rp_inst(struct kretprobe *rp) { struct hlist_node *node; struct kretprobe_instance *ri; @@ -249,7 +250,8 @@ struct
Re: kprobe support for memory access watchpoints
Jeff,

> I was wondering if there are plans to support a method to register
> watchpoints for memory data access with kprobe. On x86, it's possible to
> watch for read/write access to arbitrary memory locations via DR memory
> registers.

Here are a couple of patches providing a debug register allocation mechanism and a kernel API to register watchpoints. These patches were posted and reviewed on lkml some time back; please see the URLs below for details.

http://seclists.org/lists/linux-kernel/2004/Oct/4730.html
http://seclists.org/lists/linux-kernel/2004/Oct/4729.html

Thanks, Prasanna
--
Prasanna S Panchamukhi
Linux Technology Center
India Software Labs, IBM Bangalore
Ph: 91-80-25044636 <[EMAIL PROTECTED]>
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH 2.6.12-rc1-mm3] [1/2] kprobes += function-return
> > > int register_returnprobe(struct rprobe *rp) { > > ... > > > > > independent of kprobe and jprobe. > > ... > > > > > > make unregister exitprobes independent of kprobe/jprobe. > > > > > ... > > > > 1. When you call register_j/kprobe(), if kprobe->rp is non-null, it is > > assumed to point to a retprobe that will be registered and unregistered > > along with the kprobe. (But this may make trouble for existing kprobes > > applications that didn't need to initialize the (nonexistent) rp > > pointer. Probably not a huge deal.) > > I suppose if pairing of entry and return probes is important for a user, > he/she can always do the following: > > static int ready; // 1 = everybody registered > // 2 = everybody knows we're registered > ... > ready = 0; > ... register_kprobe()... > ... register_retprobe() ... > /* instant XXX -- see below*/ > ready = 1; > > and in kp.pre_handler do > if (!ready) { > // return probe not registered yet > return 0; > } > ready = 2; > > > and in rp.handler do > if (ready != 2) { > // Probed function entered during instant XXX, > // so kp.pre_handler didn't act on it. > return 0; > } > > > Keeping a whole group of kprobes, jprobes, and retprobes in the starting > gate pending a "ready" signal (e.g., for SystemTap) could probably be > handled similarly. > > Unregistration shouldn't be an issue. At any time you can have N active > instances of the probed function, and have therefore recorded E entries > and E-N returns. Hien's code handles all that on retprobe > deregistration, but the user's instrumentation should never count on # > probed entries == # probed returns. > Jim, You can do something like you explained above to handle the pairing issues. You need to provide simple and compact interfaces for return probe feature. 
Thanks, Prasanna
--
Prasanna S Panchamukhi
Linux Technology Center
India Software Labs, IBM Bangalore
Ph: 91-80-25044636 <[EMAIL PROTECTED]>
[PATCH] Kprobes: Oops! in unregister_kprobe()
Hi,

Please find the patch below to fix an Oops! in unregister_kprobe(). Please let me know if you have any issues.

Thanks, Prasanna

The kernel oopses when unregister_kprobe() is called on a non-registered kprobe. This patch fixes the problem by checking that the probe exists before unregistering it.

Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]>
---

 linux-2.6.12-rc2-prasanna/kernel/kprobes.c |    6 +-
 1 files changed, 5 insertions(+), 1 deletion(-)

diff -puN kernel/kprobes.c~kprobes-unregister-oops-fix kernel/kprobes.c
--- linux-2.6.12-rc2/kernel/kprobes.c~kprobes-unregister-oops-fix	2005-04-11 17:23:34.0 +0530
+++ linux-2.6.12-rc2-prasanna/kernel/kprobes.c	2005-04-11 17:32:50.0 +0530
@@ -110,13 +110,17 @@ rm_kprobe:
 void unregister_kprobe(struct kprobe *p)
 {
 	unsigned long flags;
-	arch_remove_kprobe(p);
 	spin_lock_irqsave(&kprobe_lock, flags);
+	if (!get_kprobe(p->addr)) {
+		spin_unlock_irqrestore(&kprobe_lock, flags);
+		return;
+	}
 	*p->addr = p->opcode;
 	hlist_del(&p->hlist);
 	flush_icache_range((unsigned long) p->addr,
 			(unsigned long) p->addr + sizeof(kprobe_opcode_t));
 	spin_unlock_irqrestore(&kprobe_lock, flags);
+	arch_remove_kprobe(p);
 }
_

--
Prasanna S Panchamukhi
Linux Technology Center
India Software Labs, IBM Bangalore
Ph: 91-80-25044636 <[EMAIL PROTECTED]>
Re: [RFC] Kprobes: Multiple probes feature at given address
Thanks Maneesh for your comments. Please find the patch below.

> [..]
> > Assumption : If a user has already inserted a probe using the old
> > register_kprobe() routine, and later wants to insert another probe at
> > the same address using the register_multiprobe() routine, then
> > register_multiprobe() will return EEXIST. This can be avoided by
> > renaming the interface routines.
>
> I am not sure if SystemTap can tolerate this restriction.
>

Basically, let's understand that there are two sets of users: 1. One set wants to use the older register_kprobe() interface and doesn't want multiprobe complexities. 2. The second set wants multiprobes (such as SystemTap). Adding two new interfaces to insert multiprobes should help both types of users. And now the new interface in this patch also accepts the same data type, i.e. struct kprobe *. Just writing wrappers around these interfaces will help SystemTap. I have modified this patch as per your comments.

> I think it should not exit here without un-registering anything if temp
> is an active_probe. Instead, it should parse the ap->head to look
> for the desired multiprobe to unregister.
>

Let the user take care of this.

> -EEXIST does not seem to be a proper error code here. When temp is NULL
> that means there is no such kprobe.

Modified in the new patch. Please let me know if you have any issues.

Thanks, Prasanna

Here is an attempt to provide a multiple-handlers feature as an add-on patch over the existing kprobes infrastructure, without changing that infrastructure. The design goal is to provide a simple, compact multiprobe feature without changing a single line of existing kprobes code.
This patch introduces two new interfaces:

register_multiprobe(struct kprobe *p); and unregister_multiprobe(struct kprobe *p);

register_multiprobe(struct kprobe *p): The user allocates a kprobe (defined in kprobes.h) and passes the pointer to register_multiprobe(). This routine does some housekeeping by storing references to the individual handlers, and registers a kprobe with a common handler the first time a probe is requested at a given address. On subsequent calls to insert probes at the same address, this routine just adds the individual handlers to the hhlist (struct kprobe) without registering a new kprobe.

unregister_multiprobe(struct kprobe *p): The user passes the kprobe pointer to unregister. This routine checks whether the caller is the only active user and, if so, unregisters the kprobe. If there are more active users, it just removes the individual handlers inserted by this user from the hhlist.

Advantages:
1. Layered architecture; no need to worry about the underlying stuff.
2. It's simple and compact.
3. Wrapper routines can be written over the new and existing interfaces to handle the interface naming issue.
4. It works without changing a single line of existing kprobes code.

Assumption: If a user has already inserted a probe using the old register_kprobe() routine, and later wants to insert another probe at the same address using the register_multiprobe() routine, then register_multiprobe() will return EEXIST. This can be avoided by renaming the interface routines.

Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]>
---

 linux-2.6.12-rc2-prasanna/include/linux/kprobes.h |   26 +++
 linux-2.6.12-rc2-prasanna/kernel/kprobes.c        |  152 ++
 2 files changed, 178 insertions(+)

diff -puN kernel/kprobes.c~kprobes-layered-multiple-handlers kernel/kprobes.c
--- linux-2.6.12-rc2/kernel/kprobes.c~kprobes-layered-multiple-handlers	2005-04-11 13:52:52.0 +0530
+++ linux-2.6.12-rc2-prasanna/kernel/kprobes.c	2005-04-11 13:57:55.0 +0530
@@ -27,6 +27,9 @@
  * interface to access function arguments.
 * 2004-Sep	Prasanna S Panchamukhi <[EMAIL PROTECTED]> Changed Kprobes
 *		exceptions notifier to be first on the priority list.
+ * 2005-April	Prasanna S Panchamukhi <[EMAIL PROTECTED]> Added multiple
+ *		handlers feature as an addon interface over existing kprobes
+ *		interface.
  */
 #include #include
@@ -116,6 +119,153 @@ void unregister_kprobe(struct kprobe *p)
 	spin_unlock_irqrestore(&kprobe_lock, flags);
 }
+
+/* common kprobes pre handler that gets control when the registered probe
+ * gets fired. This routine is a wrapper over the inserted multiple handlers
+ * at a given address and calls the individual handlers.
+ */
+int comm_pre_handler(struct kprobe *p, struct pt_regs *regs)
+{
+	struct active_probe *ap;
+	struct hlist_node *node;
+	struct hlist_head *head;
+
+	ap = container_of(p, struct active_probe, comm_probe);
+	head = &ap->head;
+	hlist_for_each(node, head) {
+		struct kprobe *kp = hlist_entry(node, struct kprobe, hhlist);
+		if
[PATCH] Kprobes: Oops! in unregister_kprobe()
Hi, Please find the patch below to fix an Oops! in unregister_kprobe(). Please let me know if you have any issues. Thanks Prasanna

The kernel oopses when unregister_kprobe() is called on a non-registered kprobe. This patch fixes the problem by checking that the probe exists before unregistering.

Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED] --- linux-2.6.12-rc2-prasanna/kernel/kprobes.c | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) diff -puN kernel/kprobes.c~kprobes-unregister-oops-fix kernel/kprobes.c --- linux-2.6.12-rc2/kernel/kprobes.c~kprobes-unregister-oops-fix 2005-04-11 17:23:34.0 +0530 +++ linux-2.6.12-rc2-prasanna/kernel/kprobes.c 2005-04-11 17:32:50.0 +0530 @@ -110,13 +110,17 @@ rm_kprobe: void unregister_kprobe(struct kprobe *p) { unsigned long flags; - arch_remove_kprobe(p); spin_lock_irqsave(&kprobe_lock, flags); + if (!get_kprobe(p->addr)) { + spin_unlock_irqrestore(&kprobe_lock, flags); + return; + } *p->addr = p->opcode; hlist_del(&p->hlist); flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); spin_unlock_irqrestore(&kprobe_lock, flags); + arch_remove_kprobe(p); } _ -- Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 [EMAIL PROTECTED] - To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: module for controlling kprobes with /proc
Hi Piotr, Good way to make kprobes useful, but I have some comments. >I have programmed a universal module to register/remove kprobes handlers >by interacting with /proc with simple commands. > Why /proc? You could use a SysRq key combination to enter a kprobe command-line prompt. Initially you can define a dummy breakpoint for the command-line prompt and accept commands from there on. Later, display the list of features: add/remove/display breakpoint, backtrace, etc. Also, once you hit a breakpoint you can present a command-line prompt so the user can backtrace, dump some global memory, dump registers, etc. Let me know if you need more information. Thanks Prasanna -- Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]>
Re: [PATCH 2.6.12-rc1-mm3] [1/2] kprobes += function-return
Hi Hien, This patch looks good to me, but I have some comments on this patch.

>This patch adds function-return probes (AKA exit probes) to kprobes.
>When establishing a probepoint at the entry to a function, you can also
>establish a handler to be run when the function returns.
>The subsequent post gives examples of how function-return probes can be used.
>Two new registration interfaces are added to kprobes:
>int register_kretprobe(struct kprobe *kp, struct rprobe *rp);
>Registers a probepoint at the entry to the function whose address is
>kp->addr. Each time that function returns, rp->handler will be run.
>int register_jretprobe(struct jprobe *jp, struct rprobe *rp);
>Like register_kretprobe, except a jprobe is established for the probed
>function.

Why two interfaces for the same feature? You could provide a single interface, such as int register_exitprobe(struct rprobe *rp) or int register_returnprobe(struct rprobe *rp), whichever you feel is a good name, independent of kprobe and jprobe. This routine should take care of registering the entry handler internally if one is not present. It can check whether an entry-point kprobe/jprobe already exists and use internal flags to record that.

>To unregister, you still use unregister_kprobe or unregister_jprobe. To
>probe only a function's returns, call register_kretprobe() and specify
>NULL handlers for the kprobe.

Make unregistering exit probes independent of kprobe/jprobe as well. To unregister, provide this interface: unregister_exitprobe(struct rprobe *rp). This routine should check whether the entry-point kprobe/jprobe belongs to the user or was registered internally by the exit probe. Remove the entry probe if no user has registered an entry-point kprobe/jprobe; if a user has already registered entry-point probes, leave them in place and remove only the exit-point probes. Please let me know if you need more information.
Thanks Prasanna - Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]>
[PATCH] Kprobes: Incorrect handling of probes on ret/lret instruction
Hi, Kprobes could not handle the insertion of a probe on the ret/lret instruction and used to oops after single stepping since kprobes was modifying eip/rip incorrectly. Adjustment of eip/rip is not required after single stepping in case of ret/lret instruction, because eip/rip points to the correct location after execution of the ret/lret instruction. This patch fixes the above problem. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- --- linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c |7 +++ linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c |7 +++ 2 files changed, 14 insertions(+) diff -puN arch/i386/kernel/kprobes.c~kprobes-ret-address-fix arch/i386/kernel/kprobes.c --- linux-2.6.12-rc1/arch/i386/kernel/kprobes.c~kprobes-ret-address-fix 2005-03-31 14:32:56.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c2005-03-31 14:37:24.0 +0530 @@ -218,6 +218,13 @@ static void resume_execution(struct kpro *tos &= ~(TF_MASK | IF_MASK); *tos |= kprobe_old_eflags; break; + case 0xc3: /* ret/lret */ + case 0xcb: + case 0xc2: + case 0xca: + regs->eflags &= ~TF_MASK; + /* eip is already adjusted, no more changes required*/ + return; case 0xe8: /* call relative - Fix return addr */ *tos = orig_eip + (*tos - copy_eip); break; diff -puN arch/x86_64/kernel/kprobes.c~kprobes-ret-address-fix arch/x86_64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c~kprobes-ret-address-fix 2005-03-31 14:33:31.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c 2005-03-31 14:37:08.0 +0530 @@ -231,6 +231,13 @@ static void resume_execution(struct kpro *tos &= ~(TF_MASK | IF_MASK); *tos |= kprobe_old_rflags; break; + case 0xc3: /* ret/lret */ + case 0xcb: + case 0xc2: + case 0xca: + regs->eflags &= ~TF_MASK; + /* rip is already adjusted, no more changes required*/ + return; case 0xe8: /* call relative - Fix return addr */ *tos = orig_rip + (*tos - copy_rip); break; _ Thanks Prasanna -- Prasanna S Panchamukhi Linux Technology Center 
India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]>
Re: [PATCH] Kprobes: Allow/deny probes on int3/breakpoint instruction?
Sorry, typo error. Please use this patch. Thanks Prasanna

Kprobes did an improper exit when a probe was inserted on an int3 instruction; on normal execution of the int3/breakpoint instruction, it oopsed. A probe on an int3 instruction was not handled properly by kprobes: it generated faults after the oops, doing an improper exit while still holding the lock. This fix employs a slightly different method to handle a probe on an int3/breakpoint instruction. On execution of an int3/breakpoint instruction (placed by kprobe), kprobe_handler() is called, which sets it up for single stepping in-line (it does not matter whether we single step out-of-line or in-line, since the single-stepped instruction is the same). It then single steps on the int3/breakpoint instruction, entering kprobe_handler() once again. Kprobes now checks the status, sees that it is already single stepping, and avoids the recursion. It runs down through the trap handler, and an oops message is seen on the console since it executed an int3/breakpoint instruction; the kprobes single-stepping handler never gets called. Is this behaviour acceptable? Or should we avoid putting probes on an int3/breakpoint instruction? How should it handle such situations? Below is the patch to allow probes on an int3/breakpoint instruction. This patch fixes the above problem by doing a proper exit while avoiding recursion. Any pointers/suggestions on the above issues will be helpful.
Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c| 12 +++- linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c | 12 +++- linux-2.6.12-rc1-prasanna/arch/sparc64/kernel/kprobes.c | 16 ++-- linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c | 13 +++-- 4 files changed, 47 insertions(+), 6 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/i386/kernel/kprobes.c --- linux-2.6.12-rc1/arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 16:47:42.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c 2005-03-30 16:51:43.0 +0530 @@ -84,7 +84,11 @@ static inline void prepare_singlestep(st { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - regs->eip = (unsigned long)&p->ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->eip = (unsigned long)p->addr; + else + regs->eip = (unsigned long)&p->ainsn.insn; } /* @@ -117,6 +121,12 @@ static int kprobe_handler(struct pt_regs Disarm the probe we just hit, and ignore it.
*/ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_eflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/x86_64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 20:55:23.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c 2005-03-31 12:19:53.0 +0530 @@ -108,8 +108,11 @@ static void prepare_singlestep(struct kp { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - - regs->rip = (unsigned long)p->ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->rip = (unsigned long)p->addr; + else + regs->rip = (unsigned long)p->ainsn.insn; } /* @@ -131,6 +134,12 @@ int kprobe_handler(struct pt_regs *regs) Disarm the probe we just hit, and ignore it. */ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_rflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/ppc64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 21:03:14.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c 2005-03-31 10:46:16.0 +0530 @@ -71,7 +71,11 @@ static inline void disarm_kpro
[PATCH] Kprobes: Allow/deny probes on int3/breakpoint instruction?
Hi, Kprobes did an improper exit when a probe was inserted on an int3 instruction; on normal execution of the int3/breakpoint instruction, it oopsed. A probe on an int3 instruction was not handled properly by kprobes: it generated faults after the oops, doing an improper exit while still holding the lock. This fix employs a slightly different method to handle a probe on an int3/breakpoint instruction. On execution of an int3/breakpoint instruction (placed by kprobe), kprobe_handler() is called, which sets it up for single stepping in-line (it does not matter whether we single step out-of-line or in-line, since the single-stepped instruction is the same). It then single steps on the int3/breakpoint instruction, entering kprobe_handler() once again. Kprobes now checks the status, sees that it is already single stepping, and avoids the recursion. It runs down through the trap handler, and an oops message is seen on the console since it executed an int3/breakpoint instruction; the kprobes single-stepping handler never gets called. Is this behaviour acceptable? Or should we avoid putting probes on an int3/breakpoint instruction? How should it handle such situations? Below is the patch to allow probes on an int3/breakpoint instruction. This patch fixes the above problem by doing a proper exit while avoiding recursion. Any pointers/suggestions on the above issues will be helpful.
Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c| 12 +++- linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c | 12 +++- linux-2.6.12-rc1-prasanna/arch/sparc64/kernel/kprobes.c | 16 ++-- linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c | 13 +++-- 4 files changed, 47 insertions(+), 6 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/i386/kernel/kprobes.c --- linux-2.6.12-rc1/arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 16:47:42.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c 2005-03-30 16:51:43.0 +0530 @@ -84,7 +84,11 @@ static inline void prepare_singlestep(st { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - regs->eip = (unsigned long)&p->ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->eip = (unsigned long)p->addr; + else + regs->eip = (unsigned long)&p->ainsn.insn; } /* @@ -117,6 +121,12 @@ static int kprobe_handler(struct pt_regs Disarm the probe we just hit, and ignore it.
*/ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_eflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/x86_64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 20:55:23.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c 2005-03-31 12:19:53.0 +0530 @@ -108,8 +108,11 @@ static void prepare_singlestep(struct kp { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - - regs->rip = (unsigned long)p->ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->rip = (unsigned long)p->addr; + else + regs->rip = (unsigned long)p->ainsn.insn; } /* @@ -131,6 +134,12 @@ int kprobe_handler(struct pt_regs *regs) Disarm the probe we just hit, and ignore it. */ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_rflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/ppc64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 21:03:14.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c 2005-03-31 10:46:16.0 +0530 @@ -71,7 +71,11 @@ static inline void disarm_kprobe(struct static inline void prepare_singlestep(s
[PATCH] Kprobes: Allow/deny probes on int3/breakpoint instruction?
Hi, Kprobes did an improper exit when a probe is inserted on an int3 instruction. In case of normal execution of int3/breakpoint instruction, it oops!. Probe on an int3 instruction was not handled properly by the kprobes, it generated faults after oops! doing an improper exit with holding the lock. This fix employes a bit different method to handle probe on an int3/breakpoint instruction. On execution of an int3/breakpoint instruction (placed by kprobe), kprobes_handler() is called which sets it for single stepping in-line(it does not matter whether we single step out-of-line/inline since the single stepping instruction is same). Now it single steps on int3/breakpoint instruction here, entering kprobes_handler() once again. Kprobes now check's the status that it is single stepping and avoids the recursion. It runs down through the trap handler and oops messages is seen on the console since it executed int3/breakpoint instruction. Here the kprobes single stepping handler never gets called. Is this behaviour acceptable ? Or should we avoid putting probes on an int3 /breakpoint instruction ? How should it handle such situations? Below is the patch to allow probes on an int3/breakpoint instruction. This patch fixes the above problem by doing a proper exit while avoiding recursion. Any pointers/suggestions on the above issues will be helpful. 
Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c| 12 +++- linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c | 12 +++- linux-2.6.12-rc1-prasanna/arch/sparc64/kernel/kprobes.c | 16 ++-- linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c | 13 +++-- 4 files changed, 47 insertions(+), 6 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/i386/kernel/kprobes.c --- linux-2.6.12-rc1/arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 16:47:42.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c2005-03-30 16:51:43.0 +0530 @@ -84,7 +84,11 @@ static inline void prepare_singlestep(st { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - regs->eip = (unsigned long)>ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->eip = (unsigned long)p->addr; + else + regs->eip = (unsigned long)>ainsn.insn; } /* @@ -117,6 +121,12 @@ static int kprobe_handler(struct pt_regs Disarm the probe we just hit, and ignore it. 
*/ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_eflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/x86_64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 20:55:23.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c 2005-03-30 20:58:23.0 +0530 @@ -108,8 +108,11 @@ static void prepare_singlestep(struct kp { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - - regs->rip = (unsigned long)p->ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->rip = (unsigned long)p->addr; + else + regs->rip = (unsigned long)p->ainsn.insn; } /* @@ -131,6 +134,12 @@ int kprobe_handler(struct pt_regs *regs) Disarm the probe we just hit, and ignore it. */ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_eflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/ppc64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 21:03:14.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c 2005-03-31 10:46:16.0 +0530 @@ -71,7 +71,11 @@ static inline void disarm_kprobe(struct static inline void prepare_singlestep(s
[PATCH] Kprobes: Allow/deny probes on int3/breakpoint instruction?
Hi, Kprobes did an improper exit when a probe is inserted on an int3 instruction. In case of normal execution of int3/breakpoint instruction, it oops!. Probe on an int3 instruction was not handled properly by the kprobes, it generated faults after oops! doing an improper exit with holding the lock. This fix employes a bit different method to handle probe on an int3/breakpoint instruction. On execution of an int3/breakpoint instruction (placed by kprobe), kprobes_handler() is called which sets it for single stepping in-line(it does not matter whether we single step out-of-line/inline since the single stepping instruction is same). Now it single steps on int3/breakpoint instruction here, entering kprobes_handler() once again. Kprobes now check's the status that it is single stepping and avoids the recursion. It runs down through the trap handler and oops messages is seen on the console since it executed int3/breakpoint instruction. Here the kprobes single stepping handler never gets called. Is this behaviour acceptable ? Or should we avoid putting probes on an int3 /breakpoint instruction ? How should it handle such situations? Below is the patch to allow probes on an int3/breakpoint instruction. This patch fixes the above problem by doing a proper exit while avoiding recursion. Any pointers/suggestions on the above issues will be helpful. 
Signed-off-by: Prasanna S Panchamukhi [EMAIL PROTECTED] --- linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c| 12 +++- linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c | 12 +++- linux-2.6.12-rc1-prasanna/arch/sparc64/kernel/kprobes.c | 16 ++-- linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c | 13 +++-- 4 files changed, 47 insertions(+), 6 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/i386/kernel/kprobes.c --- linux-2.6.12-rc1/arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 16:47:42.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c2005-03-30 16:51:43.0 +0530 @@ -84,7 +84,11 @@ static inline void prepare_singlestep(st { regs-eflags |= TF_MASK; regs-eflags = ~IF_MASK; - regs-eip = (unsigned long)p-ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p-opcode == BREAKPOINT_INSTRUCTION) + regs-eip = (unsigned long)p-addr; + else + regs-eip = (unsigned long)p-ainsn.insn; } /* @@ -117,6 +121,12 @@ static int kprobe_handler(struct pt_regs Disarm the probe we just hit, and ignore it. 
*/ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs-eflags = ~TF_MASK; + regs-eflags |= kprobe_saved_eflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/x86_64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 20:55:23.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c 2005-03-30 20:58:23.0 +0530 @@ -108,8 +108,11 @@ static void prepare_singlestep(struct kp { regs-eflags |= TF_MASK; regs-eflags = ~IF_MASK; - - regs-rip = (unsigned long)p-ainsn.insn; + /*single step inline if the instruction is an int3*/ + if (p-opcode == BREAKPOINT_INSTRUCTION) + regs-rip = (unsigned long)p-addr; + else + regs-rip = (unsigned long)p-ainsn.insn; } /* @@ -131,6 +134,12 @@ int kprobe_handler(struct pt_regs *regs) Disarm the probe we just hit, and ignore it. */ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs-eflags = ~TF_MASK; + regs-eflags |= kprobe_saved_eflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/ppc64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 21:03:14.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c 2005-03-31 10:46:16.0 +0530 @@ -71,7 +71,11 @@ static inline void disarm_kprobe(struct static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs) { regs-msr |= MSR_SE; - regs-nip = (unsigned long)p-ainsn.insn
[PATCH] Kprobes: Allow/deny probes on int3/breakpoint instruction?
Re: [PATCH] Kprobes: Allow/deny probes on int3/breakpoint instruction?
Sorry, typo error — please use this patch. Thanks Prasanna. Kprobes did an improper exit when a probe was inserted on an int3 instruction: the probe generated further faults after the oops, exiting improperly while still holding the kprobe lock. This fix uses a slightly different method to handle a probe on an int3/breakpoint instruction. On execution of the int3/breakpoint instruction placed by kprobes, kprobe_handler() is called and sets up single-stepping in-line (it does not matter whether we single-step out-of-line or in-line, since the single-stepped instruction is the same). The single step then executes the original int3/breakpoint instruction itself, entering kprobe_handler() once again. Kprobes now checks its status, sees that it is already single-stepping, and avoids the recursion; execution falls through to the trap handler and oops messages appear on the console, since an int3/breakpoint instruction was executed. In this path the kprobes single-step handler never gets called. Is this behaviour acceptable? Or should we deny probes on an int3/breakpoint instruction? How should such situations be handled? Below is the patch to allow probes on an int3/breakpoint instruction; it fixes the above problem by doing a proper exit while avoiding the recursion. Any pointers/suggestions on the above issues would be helpful. 
Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c| 12 +++- linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c | 12 +++- linux-2.6.12-rc1-prasanna/arch/sparc64/kernel/kprobes.c | 16 ++-- linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c | 13 +++-- 4 files changed, 47 insertions(+), 6 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/i386/kernel/kprobes.c --- linux-2.6.12-rc1/arch/i386/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 16:47:42.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/i386/kernel/kprobes.c 2005-03-30 16:51:43.0 +0530 @@ -84,7 +84,11 @@ static inline void prepare_singlestep(st { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - regs->eip = (unsigned long)p->ainsn.insn; + /* single-step inline if the instruction is an int3 */ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->eip = (unsigned long)p->addr; + else + regs->eip = (unsigned long)p->ainsn.insn; } /* @@ -117,6 +121,12 @@ static int kprobe_handler(struct pt_regs Disarm the probe we just hit, and ignore it. 
*/ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_eflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/x86_64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/x86_64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 20:55:23.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/x86_64/kernel/kprobes.c 2005-03-31 12:19:53.0 +0530 @@ -108,8 +108,11 @@ static void prepare_singlestep(struct kp { regs->eflags |= TF_MASK; regs->eflags &= ~IF_MASK; - - regs->rip = (unsigned long)p->ainsn.insn; + /* single-step inline if the instruction is an int3 */ + if (p->opcode == BREAKPOINT_INSTRUCTION) + regs->rip = (unsigned long)p->addr; + else + regs->rip = (unsigned long)p->ainsn.insn; } /* @@ -131,6 +134,12 @@ int kprobe_handler(struct pt_regs *regs) Disarm the probe we just hit, and ignore it. */ p = get_kprobe(addr); if (p) { + if (kprobe_status == KPROBE_HIT_SS) { + regs->eflags &= ~TF_MASK; + regs->eflags |= kprobe_saved_rflags; + unlock_kprobes(); + goto no_kprobe; + } disarm_kprobe(p, regs); ret = 1; } else { diff -puN arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 arch/ppc64/kernel/kprobes.c --- linux-2.6.12-rc1/arch/ppc64/kernel/kprobes.c~kprobes-allow-probes-on-int3 2005-03-30 21:03:14.0 +0530 +++ linux-2.6.12-rc1-prasanna/arch/ppc64/kernel/kprobes.c 2005-03-31 10:46:16.0 +0530 @@ -71,7 +71,11 @@ static inline void disarm_kprobe(struct static inline void prepare_singlestep(struct kprobe *p, struct pt_regs *regs) { regs->msr |= MSR_SE; - regs->nip = (unsigned long)p->ainsn.insn
[PATCH] kprobes: incorrect spin_unlock_irqrestore() call in register_kprobe()
Hi, The register_kprobe() routine was calling spin_unlock_irqrestore() incorrectly: when arch_prepare_kprobe() failed, it jumped to the unlock path without ever having taken the lock. This patch removes the unwanted spin_unlock_irqrestore() call from the register_kprobe() routine. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- --- linux-2.6.11-prasanna/kernel/kprobes.c |5 +++-- 1 files changed, 3 insertions(+), 2 deletions(-) diff -puN kernel/kprobes.c~kprobes-incorrect-returnval kernel/kprobes.c --- linux-2.6.11/kernel/kprobes.c~kprobes-incorrect-returnval 2005-03-16 11:03:42.0 +0530 +++ linux-2.6.11-prasanna/kernel/kprobes.c 2005-03-16 11:03:42.0 +0530 @@ -79,7 +79,7 @@ int register_kprobe(struct kprobe *p) unsigned long flags = 0; if ((ret = arch_prepare_kprobe(p)) != 0) { - goto out; + goto rm_kprobe; } spin_lock_irqsave(&kprobe_lock, flags); INIT_HLIST_NODE(&p->hlist); @@ -96,8 +96,9 @@ int register_kprobe(struct kprobe *p) *p->addr = BREAKPOINT_INSTRUCTION; flush_icache_range((unsigned long) p->addr, (unsigned long) p->addr + sizeof(kprobe_opcode_t)); - out: +out: spin_unlock_irqrestore(&kprobe_lock, flags); +rm_kprobe: if (ret == -EEXIST) arch_remove_kprobe(p); return ret; _ Thanks Prasanna -- Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]> - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: 2.6.10 kprobes/jprobes panic
Hi Badri, > Hi, > > I ran into this while playing with jprobes in 2.6.10. > > I tried to install a jprobe handler on an invalid address, The user should prevent inserting jprobes on an invalid address. > I get OOPS. I was hoping for a error check and a graceful > exit rather than kernel Oops. > Error checking and a graceful exit can be done in the jprobe handler module. In the jprobe network packet logging patch, error checking was handled by using kallsyms_lookup_name() as shown below. nt->jp.kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name(nt->funcname); if (nt->jp.kp.addr) { printk("plant jprobe at %s (%p), handler addr %p\n", nt->funcname, nt->jp.kp.addr, nt->jp.entry); register_jprobe(&nt->jp); } else { printk("couldn't find %s to plant jprobe\n", nt->funcname); } Please see the patch at the URL for more details. http://lkml.org/lkml/2004/8/16/179 Thanks Prasanna > Unable to handle kernel paging request at c01836b0 RIP: > {__memcpy+114} > PML4 17d6cf067 PGD 0 > Oops: [1] SMP > CPU 1 > Modules linked in: diotest > Pid: 14225, comm: insmod Not tainted 2.6.10n > RIP: 0010:[] {__memcpy+114} > RSP: 0018:01019b841d58 EFLAGS: 00010047 > RAX: ffa7 RBX: 0101bfa44200 RCX: > RDX: 000f RSI: c01836b0 RDI: ffa7 > RBP: a8e0 R08: 01018000 R09: > R10: 0101bfa44218 R11: 0111 R12: 0216 > R13: 804f1440 R14: 0020 R15: 0002 > FS: 002a9588e6e0() GS:80628800() > knlGS:55970080 > CS: 0010 DS: ES: CR0: 8005003b > CR2: c01836b0 CR3: 0001a072c000 CR4: 06e0 > Process insmod (pid: 14225, threadinfo 01019b84, task > 0101bf9394e0) > Stack: 0101bfa44200 8011edcc 0212 > a8e0 >ffef 80158542 804f1480 > a940 >804f1440 a05c > Call Trace:{arch_prepare_kprobe+300} > {register_kprobe+82} >{:diotest:init_dmods+44} > {sys_init_module+6387} >{__pagevec_free+32} > {release_pages+382} >{do_munmap+918} > {__down_read+49} >{__up_write+48} > {system_call+126} > > > > > Code: 4c 8b 06 4c 89 07 48 8d 7f 08 48 8d 76 08 75 ee 89 d1 83 e1 > RIP {__memcpy+114} RSP <01019b841d58> > CR2: c01836b0 > > > -- Prasanna S Panchamukhi 
Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]>
[PATCH] kprobes x86_64 memory allocation changes
Hi, This patch moves the memory allocation required by kprobes outside the spinlock, as suggested by Andi Kleen. Please let me know your comments. Thanks Prasanna Minor changes to the kprobes code to provide memory allocation for the x86_64 architecture outside the kprobes spinlock. Signed-off-by: Prasanna S Panchamukhi <[EMAIL PROTECTED]> --- --- linux-2.6.11-rc1-prasanna/arch/i386/kernel/kprobes.c|6 +- linux-2.6.11-rc1-prasanna/arch/ppc64/kernel/kprobes.c | 10 -- linux-2.6.11-rc1-prasanna/arch/sparc64/kernel/kprobes.c |6 +- linux-2.6.11-rc1-prasanna/arch/x86_64/kernel/kprobes.c | 16 +--- linux-2.6.11-rc1-prasanna/include/linux/kprobes.h |1 + linux-2.6.11-rc1-prasanna/kernel/kprobes.c | 13 - 6 files changed, 40 insertions(+), 12 deletions(-) diff -puN arch/i386/kernel/kprobes.c~kprobes-x86_64-changes arch/i386/kernel/kprobes.c --- linux-2.6.11-rc1/arch/i386/kernel/kprobes.c~kprobes-x86_64-changes 2005-01-19 19:46:23.0 +0530 +++ linux-2.6.11-rc1-prasanna/arch/i386/kernel/kprobes.c2005-01-19 19:46:23.0 +0530 @@ -61,10 +61,14 @@ static inline int is_IF_modifier(kprobe_ int arch_prepare_kprobe(struct kprobe *p) { - memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); return 0; } +void arch_copy_kprobe(struct kprobe *p) +{ + memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE * sizeof(kprobe_opcode_t)); +} + void arch_remove_kprobe(struct kprobe *p) { } diff -puN arch/sparc64/kernel/kprobes.c~kprobes-x86_64-changes arch/sparc64/kernel/kprobes.c --- linux-2.6.11-rc1/arch/sparc64/kernel/kprobes.c~kprobes-x86_64-changes 2005-01-19 19:46:23.0 +0530 +++ linux-2.6.11-rc1-prasanna/arch/sparc64/kernel/kprobes.c 2005-01-19 19:46:23.0 +0530 @@ -40,9 +40,13 @@ int arch_prepare_kprobe(struct kprobe *p) { + return 0; +} + +void arch_copy_kprobe(struct kprobe *p) +{ p->ainsn.insn[0] = *p->addr; p->ainsn.insn[1] = BREAKPOINT_INSTRUCTION_2; - return 0; } void arch_remove_kprobe(struct kprobe *p) diff -puN arch/x86_64/kernel/kprobes.c~kprobes-x86_64-changes 
arch/x86_64/kernel/kprobes.c --- linux-2.6.11-rc1/arch/x86_64/kernel/kprobes.c~kprobes-x86_64-changes 2005-01-19 19:46:23.0 +0530 +++ linux-2.6.11-rc1-prasanna/arch/x86_64/kernel/kprobes.c 2005-01-19 19:46:23.0 +0530 @@ -39,6 +39,8 @@ #include <asm/pgtable.h> #include <asm/kdebug.h> +static DECLARE_MUTEX(kprobe_mutex); + /* kprobe_status settings */ #define KPROBE_HIT_ACTIVE 0x0001 #define KPROBE_HIT_SS 0x0002 @@ -75,17 +77,25 @@ static inline int is_IF_modifier(kprobe_ int arch_prepare_kprobe(struct kprobe *p) { /* insn: must be on special executable page on x86_64. */ + up(&kprobe_mutex); p->ainsn.insn = get_insn_slot(); + down(&kprobe_mutex); if (!p->ainsn.insn) { return -ENOMEM; } - memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE); return 0; } +void arch_copy_kprobe(struct kprobe *p) +{ + memcpy(p->ainsn.insn, p->addr, MAX_INSN_SIZE); +} + void arch_remove_kprobe(struct kprobe *p) { + up(&kprobe_mutex); free_insn_slot(p->ainsn.insn); + down(&kprobe_mutex); } static inline void disarm_kprobe(struct kprobe *p, struct pt_regs *regs) @@ -425,12 +435,12 @@ static kprobe_opcode_t *get_insn_slot(vo } /* All out of space. Need to allocate a new page. 
Use slot 0.*/ - kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_ATOMIC); + kip = kmalloc(sizeof(struct kprobe_insn_page), GFP_KERNEL); if (!kip) { return NULL; } kip->insns = (kprobe_opcode_t*) __vmalloc(PAGE_SIZE, - GFP_ATOMIC|__GFP_HIGHMEM, __pgprot(__PAGE_KERNEL_EXEC)); + GFP_KERNEL|__GFP_HIGHMEM, __pgprot(__PAGE_KERNEL_EXEC)); if (!kip->insns) { kfree(kip); return NULL; diff -puN include/linux/kprobes.h~kprobes-x86_64-changes include/linux/kprobes.h --- linux-2.6.11-rc1/include/linux/kprobes.h~kprobes-x86_64-changes 2005-01-19 19:46:23.0 +0530 +++ linux-2.6.11-rc1-prasanna/include/linux/kprobes.h 2005-01-19 19:46:23.0 +0530 @@ -95,6 +95,7 @@ static inline int kprobe_running(void) } extern int arch_prepare_kprobe(struct kprobe *p); +extern void arch_copy_kprobe(struct kprobe *p); extern void arch_remove_kprobe(struct kprobe *p); extern void show_registers(struct pt_regs *regs); diff -puN kernel/kprobes.c~kprobes-x86_64-changes kernel/kprobes.c --- linux-2.6.11-rc1/kernel/kprobes.c~kprobes-x86_64-changes2005-01-19 19:46:23.0 +0530 +++ linux-2.6.11-rc1-prasanna/kernel/kprobes.c 2005-01-19 19:46:23.0 +0530 @@ -76,18 +76,19 @@ struct kprobe *get_kprobe(void *addr) int register_kprobe(struct kprobe *p) { int ret = 0; -
Re: x86-64: int3 no longer causes SIGTRAP in 2.6.10
Hi Andi, > > > - set_intr_gate(3,&int3); > > > + set_system_gate(3,&int3); > > > set_system_gate(4,&overflow); /* int4-5 can be called from all */ > > > set_system_gate(5,&bounds); > > > set_intr_gate(6,&invalid_op); > > > Index: linux/arch/x86_64/kernel/kprobes.c This looks good to me. Andi, do you see anything that would cause preemption by moving int3 to a system gate? > > > === > > > --- linux.orig/arch/x86_64/kernel/kprobes.c 2005-01-04 12:12:39.%N > > > +0100 > > > +++ linux/arch/x86_64/kernel/kprobes.c2005-01-18 02:46:05.%N +0100 > > > @@ -297,6 +297,8 @@ > > > struct die_args *args = (struct die_args *)data; > > > switch (val) { > > > case DIE_INT3: > > > + if (args->regs->cs & 3) > > > + return NOTIFY_DONE; This will prevent handling of userspace probes (privilege level 3). The kprobe exception handler will return from here and the registered user-space probe handler won't be called. Thanks Prasanna -- Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]>
Re: x86-64: int3 no longer causes SIGTRAP in 2.6.10
On Tue, Jan 18, 2005 at 02:47:08AM +0100, Andi Kleen wrote: > Juho Snellman <[EMAIL PROTECTED]> writes: > > > 2.6.10 changed the behaviour of the int3 instruction on x86-64. It > > used to result in a SIGTRAP, now it's a SIGSEGV in both native and > > 32-bit legacy modes. This was apparently caused by the kprobe port, > > specifically this part: > > > > --- a/arch/x86_64/kernel/traps.c2004-12-24 13:36:17 -08:00 > > +++ b/arch/x86_64/kernel/traps.c2004-12-24 13:36:17 -08:00 > > @@ -862,8 +910,8 @@ > > set_intr_gate(0,&divide_error); > > set_intr_gate_ist(1,&debug,DEBUG_STACK); > > set_intr_gate_ist(2,&nmi,NMI_STACK); > > - set_system_gate(3,&int3); /* int3-5 can be called from all */ > > - set_system_gate(4,&overflow); > > + set_intr_gate(3,&int3); > > + set_system_gate(4,&overflow); /* int4-5 can be called from all */ > > > > Was effectively disabling int3 a conscious decision, or just an > > unintended side-effect? This breaks at least Steel Bank Common Lisp > > It's a bug. Thanks for the report. > > I'm not sure why it was even changed. Prasanna? > > I think it should be just changed back. If kprobes cannot > deal with traps for user space it needs to be fixed. e.g. > by adding a user space check in kprobe_handler(). > Yes, it's a bug; we turned trap 3 into an interrupt gate to ensure that it is not preemptible. Thanks Prasanna > -Andi > > Like this patch. 
> > Index: linux/arch/x86_64/kernel/traps.c > === > --- linux.orig/arch/x86_64/kernel/traps.c 2005-01-17 10:34:24.%N +0100 > +++ linux/arch/x86_64/kernel/traps.c 2005-01-18 02:42:02.%N +0100 > @@ -908,7 +908,7 @@ > set_intr_gate(0,&divide_error); > set_intr_gate_ist(1,&debug,DEBUG_STACK); > set_intr_gate_ist(2,&nmi,NMI_STACK); > - set_intr_gate(3,&int3); > + set_system_gate(3,&int3); > set_system_gate(4,&overflow); /* int4-5 can be called from all */ > set_system_gate(5,&bounds); > set_intr_gate(6,&invalid_op); > Index: linux/arch/x86_64/kernel/kprobes.c > === > --- linux.orig/arch/x86_64/kernel/kprobes.c 2005-01-04 12:12:39.%N +0100 > +++ linux/arch/x86_64/kernel/kprobes.c2005-01-18 02:46:05.%N +0100 > @@ -297,6 +297,8 @@ > struct die_args *args = (struct die_args *)data; > switch (val) { > case DIE_INT3: > + if (args->regs->cs & 3) > + return NOTIFY_DONE; > if (kprobe_handler(args->regs)) > return NOTIFY_STOP; > break; > > -- Have a Nice Day! Thanks & Regards Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]>
Re: 2.6.11-rc1-mm1
Hi Karim, > Thomas Gleixner wrote: >> It's not only me, who needs constant time. Everybody interested in >> tracing will need that. In my opinion its a principle of tracing. > > relayfs is a generalized buffering mechanism. Tracing is one application > it serves. Check out the web site: "high-speed data-relay filesystem." > Fancy name huh ... > >> The "lockless" mechanism is _FAKE_ as I already pointed out. It replaces >> locks by do { } while loops. So what ? > How about combining the "buffering mechanism of relayfs" with "kernel->user-space transport via debugfs"? This would also remove lots of complicated code from relayfs. Thanks Prasanna -- Prasanna S Panchamukhi Linux Technology Center India Software Labs, IBM Bangalore Ph: 91-80-25044636 <[EMAIL PROTECTED]>