Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-03 Thread Abhishek Sagar
On 1/4/08, Masami Hiramatsu <[EMAIL PROTECTED]> wrote:
> > + case KPROBE_HIT_SS:
> > + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> > + regs->flags &= ~TF_MASK;
> > + regs->flags |= kcb->kprobe_saved_flags;
> > + return 0;
> > + } else {
> > + recursive_singlestep(p, regs, kcb);
> > + }
> > + break;
> > + default:
> > + /* impossible cases */
> > + WARN_ON(1);
>
> WARN_ON() does not call panic(). Since the kernel doesn't stop,
> we need to prepare singlestep after that.

We shouldn't panic in 'default'. First, we'll never hit this case, and
if we do, then we can be sure that the fault is not due to a probe. So
instead of single-stepping we'll let the kernel handle it. If it can't,
it'll panic/die() for us. I'll try to clean up this case and
incorporate these suggestions in a separate patch, as you suggested.
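
For illustration, the fall-through I have in mind would look roughly
like this (only a sketch -- it assumes the 0 return is passed straight
back by kprobe_handler(), and it ignores the preempt bookkeeping on
that path):
---
	default:
		/*
		 * Not a probe-related trap. Complain, but return 0 so
		 * the int3 is handed back to the kernel's normal
		 * breakpoint handling, which will die()/panic for us
		 * if nothing else claims it.
		 */
		WARN_ON(1);
		return 0;
---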

> Masami Hiramatsu
>
> Software Engineer
> Hitachi Computer Products (America) Inc.
> Software Solutions Division
>
> e-mail: [EMAIL PROTECTED], [EMAIL PROTECTED]

--
Thanks,
Abhishek Sagar


Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-03 Thread Abhishek Sagar
On 1/4/08, Masami Hiramatsu <[EMAIL PROTECTED]> wrote:
> I finally understood what the original code does.
> If a kprobe is inserted on a breakpoint which another debugger has inserted,
> it single-steps inline instead of out-of-line (this is done in
> prepare_singlestep).
> In this case, (p && kprobe_running() && kcb->kprobe_status==KPROBE_HIT_SS)
> is true and we need to pass control to the debugger.
> And if (*p->ainsn.insn != BREAKPOINT_INSTRUCTION) (or (p != kprobe_running()))
> in that case, there may be a bug somewhere.

Yes, we can only fault while single-stepping in one unique case: when
we're single-stepping (in-line) a breakpoint because a probe was
installed on top of it. All other scenarios are a BUG. That's also
assuming that no exception whose codepath has a probe on it will
preempt the single-step.

> Now I think your original suggestion is correct.
> Please fix it in another patch.

Ok.

> --
> Masami Hiramatsu
>
> Software Engineer
> Hitachi Computer Products (America) Inc.
> Software Solutions Division
>
> e-mail: [EMAIL PROTECTED], [EMAIL PROTECTED]

Thanks,
Abhishek Sagar


Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-03 Thread Masami Hiramatsu
Hi Abhishek,

Masami Hiramatsu wrote:
>> Hmm, I can not agree, because it is possible to insert a kprobe
>> into kprobe's instruction buffer. If it should be a bug, we must
>> check it when registering the kprobe.
> 
> I discussed it with the other maintainers and learned that current kprobes
> does not allow a user to insert a kprobe into another kprobe's instruction
> buffer, because register_kprobe ensures the insertion address is in kernel text.
> Now I have changed my mind. I think the case (p && kprobe_running() &&
> kcb->kprobe_status==KPROBE_HIT_SS) is a BUG(), even if (*p->ainsn.insn ==
> BREAKPOINT_INSTRUCTION).

I finally understood what the original code does.
If a kprobe is inserted on a breakpoint which another debugger has inserted,
it single-steps inline instead of out-of-line (this is done in
prepare_singlestep).
In this case, (p && kprobe_running() && kcb->kprobe_status==KPROBE_HIT_SS)
is true and we need to pass control to the debugger.
And if (*p->ainsn.insn != BREAKPOINT_INSTRUCTION) (or (p != kprobe_running()))
in that case, there may be a bug somewhere.
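
(For reference, prepare_singlestep does roughly the following --
paraphrased, so the exact flag macros may differ:)
---
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{
	regs->flags |= TF_MASK;
	regs->flags &= ~IF_MASK;
	/* single-step inline if the probed instruction is itself an int3 */
	if (p->opcode == BREAKPOINT_INSTRUCTION)
		regs->ip = (unsigned long)p->addr;
	else
		regs->ip = (unsigned long)p->ainsn.insn;
}
---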

Now I think your original suggestion is correct.
Please fix it in another patch.

Thank you very much,

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: [EMAIL PROTECTED], [EMAIL PROTECTED]



Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-03 Thread Masami Hiramatsu
Abhishek Sagar wrote:
> Masami Hiramatsu wrote:
...
> Done. You should find the desired changes in this patch.

Well done!
This cleans it up very well.
I have just one more comment.

> @@ -463,14 +487,26 @@ static int __kprobes reenter_kprobe(struct kprobe *p, 
> struct pt_regs *regs,
>   arch_disarm_kprobe(p);
>   regs->ip = (unsigned long)p->addr;
>   reset_current_kprobe();
> - return 1;
> + preempt_enable_no_resched();
> + break;
>  #endif
> + case KPROBE_HIT_ACTIVE:
> + recursive_singlestep(p, regs, kcb);
> + break;
> + case KPROBE_HIT_SS:
> + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> + regs->flags &= ~TF_MASK;
> + regs->flags |= kcb->kprobe_saved_flags;
> + return 0;
> + } else {
> + recursive_singlestep(p, regs, kcb);
> + }
> + break;
> + default:
> + /* impossible cases */
> + WARN_ON(1);

WARN_ON() does not call panic(). Since the kernel doesn't stop,
we need to prepare singlestep after that.

How about this?
---
+   case KPROBE_HIT_ACTIVE:
+   break;
+   case KPROBE_HIT_SS:
+   if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
+   regs->flags &= ~TF_MASK;
+   regs->flags |= kcb->kprobe_saved_flags;
+   return 0;
+   }
+   break;
+   default:
+   /* impossible cases */
+   WARN_ON(1);
}
save_previous_kprobe(kcb);
set_current_kprobe(p, regs, kcb);
kprobes_inc_nmissed_count(p);
prepare_singlestep(p, regs);
kcb->kprobe_status = KPROBE_REENTER;
return 1;
 }
---


Thank you,

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: [EMAIL PROTECTED], [EMAIL PROTECTED]



Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-03 Thread Abhishek Sagar
Masami Hiramatsu wrote:
>>> As a result, this function just setups re-entrance.
>> As you've also pointed out in your previous reply, this case is
>> peculiar and therefore I believe it should be marked as a BUG(). I've
>> left the original case, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
>> (*p->ainsn.insn == BREAKPOINT_INSTRUCTION), untouched and is handled
>> as it was before. However, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
>> !(*p->ainsn.insn == BREAKPOINT_INSTRUCTION), then instead of
>> incrementing nmissed count like before, it should cry out a BUG. This
>> is not an ordinary recursive probe handling case which should update
>> the nmissed count.
> 
> Hmm, I can not agree, because it is possible to insert a kprobe
> into kprobe's instruction buffer. If it should be a bug, we must
> check it when registering the kprobe.
> (And also, in *p->ainsn.insn == BREAKPOINT_INSTRUCTION case, I doubt
> that the kernel can handle this "orphaned" breakpoint, because the
> breakpoint address has been changed.)
> 
> Anyway, if you would like to change the logic, please separate it from
> cleanup patch.

Okay. For now, I've restored the original behavior.
 
>>>> -out:
>>>> + if (!ret)
>>>> + preempt_enable_no_resched();
>>> I think this is a bit ugly...
>>> Why would you avoid using multiple "return"s in this function?
>>> I think you can remove "ret" from this function.
>> Hmm...there are no deeply nested if-elses nor overlapping paths
>> which need to be avoided. All the nested checks unroll cleanly too.
> 
> Oh, I was just referring to this "if (!ret)" condition.
> Could you try to remove this "ret" variable?
> I think some "ret=1" could be replaced by "return 1" easily.

Done. You should find the desired changes in this patch.

Signed-off-by: Abhishek Sagar <[EMAIL PROTECTED]>
Signed-off-by: Quentin Barnes <[EMAIL PROTECTED]>
---

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index a72e02b..06f8d08 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -441,6 +441,34 @@ void __kprobes arch_prepare_kretprobe(struct 
kretprobe_instance *ri,
/* Replace the return addr with trampoline addr */
*sara = (unsigned long) _trampoline;
 }
+
+static void __kprobes recursive_singlestep(struct kprobe *p,
+  struct pt_regs *regs,
+  struct kprobe_ctlblk *kcb)
+{
+   save_previous_kprobe(kcb);
+   set_current_kprobe(p, regs, kcb);
+   kprobes_inc_nmissed_count(p);
+   prepare_singlestep(p, regs);
+   kcb->kprobe_status = KPROBE_REENTER;
+}
+
+static void __kprobes setup_singlestep(struct kprobe *p, struct pt_regs *regs,
+  struct kprobe_ctlblk *kcb)
+{
+#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
+   if (p->ainsn.boostable == 1 && !p->post_handler) {
+   /* Boost up -- we can execute copied instructions directly */
+   reset_current_kprobe();
+   regs->ip = (unsigned long)p->ainsn.insn;
+   preempt_enable_no_resched();
+   return;
+   }
+#endif
+   prepare_singlestep(p, regs);
+   kcb->kprobe_status = KPROBE_HIT_SS;
+}
+
 /*
  * We have reentered the kprobe_handler(), since another probe was hit while
  * within the handler. We save the original kprobes variables and just single
@@ -449,13 +477,9 @@ void __kprobes arch_prepare_kretprobe(struct 
kretprobe_instance *ri,
 static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
 {
-   if (kcb->kprobe_status == KPROBE_HIT_SS &&
-   *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
-   regs->flags &= ~X86_EFLAGS_TF;
-   regs->flags |= kcb->kprobe_saved_flags;
-   return 0;
+   switch (kcb->kprobe_status) {
+   case KPROBE_HIT_SSDONE:
 #ifdef CONFIG_X86_64
-   } else if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
/* TODO: Provide re-entrancy from post_kprobes_handler() and
 * avoid exception stack corruption while single-stepping on
 * the instruction of the new probe.
@@ -463,14 +487,26 @@ static int __kprobes reenter_kprobe(struct kprobe *p, 
struct pt_regs *regs,
arch_disarm_kprobe(p);
regs->ip = (unsigned long)p->addr;
reset_current_kprobe();
-   return 1;
+   preempt_enable_no_resched();
+   break;
 #endif
+   case KPROBE_HIT_ACTIVE:
+   recursive_singlestep(p, regs, kcb);
+   break;
+   case KPROBE_HIT_SS:
+   if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
+   regs->flags &= ~TF_MASK;
+   regs->flags |= kcb->kprobe_saved_flags;
+   return 0;
+   } else {
+   recursive_singlestep(p, regs, kcb);
+   }
+   break;
+   default:
+   /* impossible cases */
+   WARN_ON(1);

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-03 Thread Masami Hiramatsu
Hi Abhishek,

Masami Hiramatsu wrote:
...
>>>> + case KPROBE_HIT_SS:
>>>> + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
>>>> + regs->flags &= ~TF_MASK;
>>>> + regs->flags |= kcb->kprobe_saved_flags;
>>>> + } else {
>>>> + /* BUG? */
>>>> + }
>>>> + break;
>>> If my thought is correct, we don't need to use a switch-case here,
>>> because there are only 2 cases: KPROBE_HIT_SSDONE (x86-64 only)
>>> or everything else.
>>> As a result, this function just sets up re-entrance.
>> As you've also pointed out in your previous reply, this case is
>> peculiar and therefore I believe it should be marked as a BUG(). I've
>> left the original case, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
>> (*p->ainsn.insn == BREAKPOINT_INSTRUCTION), untouched and is handled
>> as it was before. However, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
>> !(*p->ainsn.insn == BREAKPOINT_INSTRUCTION), then instead of
>> incrementing nmissed count like before, it should cry out a BUG. This
>> is not an ordinary recursive probe handling case which should update
>> the nmissed count.
> 
> Hmm, I can not agree, because it is possible to insert a kprobe
> into kprobe's instruction buffer. If it should be a bug, we must
> check it when registering the kprobe.

I discussed it with the other maintainers and learned that current kprobes
does not allow a user to insert a kprobe into another kprobe's instruction
buffer, because register_kprobe ensures the insertion address is in kernel text.
Now I have changed my mind. I think the case (p && kprobe_running() &&
kcb->kprobe_status==KPROBE_HIT_SS) is a BUG(), even if (*p->ainsn.insn ==
BREAKPOINT_INSTRUCTION).

> (And also, in *p->ainsn.insn == BREAKPOINT_INSTRUCTION case, I doubt
> that the kernel can handle this "orphaned" breakpoint, because the
> breakpoint address has been changed.)

I have also changed my mind. In this case, the kernel debugger can retrieve
the correct breakpoint address by using kprobe_running(), as below.
---
kp = kprobe_running();
if (kp)
addr = kp->addr;
else
addr = regs->ip;
---

The last discussion point is whether we should restore the flags if
(!p && kprobe_running() && kcb->kprobe_status==KPROBE_HIT_SS).
I think we do not need to do that if the debugger assumes that
kprobes exist.
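
(By "restore flags" I mean doing what the KPROBE_HIT_SS path in the
patch does before returning 0:)
---
	regs->flags &= ~TF_MASK;
	regs->flags |= kcb->kprobe_saved_flags;
---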

Thank you,

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America) Inc.
Software Solutions Division

e-mail: [EMAIL PROTECTED], [EMAIL PROTECTED]



Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Masami Hiramatsu
Hi Abhishek,

Abhishek Sagar wrote:
> On 1/2/08, Masami Hiramatsu <[EMAIL PROTECTED]> wrote:
>> I think setup_singlestep() in your first patch is better, because it avoided
>> code duplication(*).
> 
> Will retain it then.

Thank you very much.

>>>  static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
>>>   struct kprobe_ctlblk *kcb)
>>>  {
>>> - if (kcb->kprobe_status == KPROBE_HIT_SS &&
>>> - *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
>>> - regs->flags &= ~X86_EFLAGS_TF;
>>> - regs->flags |= kcb->kprobe_saved_flags;
>>> - return 0;
>>> + int ret = 0;
>>> +
>>> + switch (kcb->kprobe_status) {
>>> + case KPROBE_HIT_SSDONE:
>>>  #ifdef CONFIG_X86_64
>>> - } else if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
>>> - /* TODO: Provide re-entrancy from post_kprobes_handler() and
>>> -  * avoid exception stack corruption while single-stepping on
>>> + /* TODO: Provide re-entrancy from
>>> +  * post_kprobes_handler() and avoid exception
>>> +  * stack corruption while single-stepping on
>> Why would you modify it?
> 
> Do you mean the comment? I had this code in kprobe_handler earlier and
> it consistently exceeded 80 columns in my case. Will fix it anyways.

Yes, my previous patch already took care of it. Thanks.

>>> + ret = 1;
>>> + break;
>>>  #endif
>>> + case KPROBE_HIT_ACTIVE:
>>> + /* a probe has been hit inside a
>>> +  * user handler */
>>> + save_previous_kprobe(kcb);
>>> + set_current_kprobe(p, regs, kcb);
>>> + kprobes_inc_nmissed_count(p);
>>> + prepare_singlestep(p, regs);
>>> + kcb->kprobe_status = KPROBE_REENTER;
>>> + ret = 1;
>>> + break;
>>> + case KPROBE_HIT_SS:
>>> + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
>>> + regs->flags &= ~TF_MASK;
>>> + regs->flags |= kcb->kprobe_saved_flags;
>>> + } else {
>>> + /* BUG? */
>>> + }
>>> + break;
>> If my thought is correct, we don't need to use a switch-case here,
>> because there are only 2 cases: KPROBE_HIT_SSDONE (x86-64 only)
>> or everything else.
>> As a result, this function just sets up re-entrance.
> 
> As you've also pointed out in your previous reply, this case is
> peculiar and therefore I believe it should be marked as a BUG(). I've
> left the original case, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
> (*p->ainsn.insn == BREAKPOINT_INSTRUCTION), untouched and is handled
> as it was before. However, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
> !(*p->ainsn.insn == BREAKPOINT_INSTRUCTION), then instead of
> incrementing nmissed count like before, it should cry out a BUG. This
> is not an ordinary recursive probe handling case which should update
> the nmissed count.

Hmm, I can not agree, because it is possible to insert a kprobe
into kprobe's instruction buffer. If it should be a bug, we must
check it when registering the kprobe.
(And also, in *p->ainsn.insn == BREAKPOINT_INSTRUCTION case, I doubt
that the kernel can handle this "orphaned" breakpoint, because the
breakpoint address has been changed.)

Anyway, if you would like to change the logic, please separate it from
cleanup patch.

>>> + default:
>>> + /* impossible cases */
>>> + BUG();
>> I think no need to check that here.
> 
> It may not get hit at runtime but is quite informative.

OK, I see.

> 
>>>  static int __kprobes kprobe_handler(struct pt_regs *regs)
>>>  {
>>> - struct kprobe *p;
>>>   int ret = 0;
>>>   kprobe_opcode_t *addr;
>>> + struct kprobe *p, *cur;
>>>   struct kprobe_ctlblk *kcb;
>>>
>>>   addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
>>> + if (*addr != BREAKPOINT_INSTRUCTION) {
>>> + /*
>>> +  * The breakpoint instruction was removed right
>>> +  * after we hit it.  Another cpu has removed
>>> +  * either a probepoint or a debugger breakpoint
>>> +  * at this address.  In either case, no further
>>> +  * handling of this interrupt is appropriate.
>>> +  * Back up over the (now missing) int3 and run
>>> +  * the original instruction.
>>> +  */
>>> + regs->ip = (unsigned long)addr;
>>> + return 1;
>>> + }
>> I think this block changing would better be separated from this patch,
>> because it changes code flow logically.
> 
> Agreed. I'll address this in another thread, but for now it is safest
> to check it for all cases right at the entry of kprobe_handler().

Thanks,

>>> -ss_probe:
>>> - ret = 1;
>>> -#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
>>> - if (p->ainsn.boostable == 1 && !p->post_handler) {
>>> - /* Boost up -- we can execute copied instructions directly */
>>> - reset_current_kprobe();
>>> - regs->ip = (unsigned long)p->ainsn.insn;
>>> - goto preempt_out;
>>> - }
>>> -#endif
>>> - prepare_singlestep(p, regs);
>>> - kcb->kprobe_status = KPROBE_HIT_SS;

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Ingo Molnar

* Abhishek Sagar <[EMAIL PROTECTED]> wrote:

> > > + default:
> > > + /* impossible cases */
> > > + BUG();
> >
> > I think no need to check that here.
> 
> It may not get hit at runtime but is quite informative.

sidenote: please use WARN_ON(1) instead. In the case of the impossible 
triggering somehow (the Linux kernel is notorious for that ;-), we'd 
have a nice WARN_ON(1) in the dmesg, instead of a crashed box.

Ingo


Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Abhishek Sagar
On 1/2/08, Masami Hiramatsu <[EMAIL PROTECTED]> wrote:
> I think setup_singlestep() in your first patch is better, because it avoided
> code duplication(*).

Will retain it then.

> >  static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
> >   struct kprobe_ctlblk *kcb)
> >  {
> > - if (kcb->kprobe_status == KPROBE_HIT_SS &&
> > - *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> > - regs->flags &= ~X86_EFLAGS_TF;
> > - regs->flags |= kcb->kprobe_saved_flags;
> > - return 0;
> > + int ret = 0;
> > +
> > + switch (kcb->kprobe_status) {
> > + case KPROBE_HIT_SSDONE:
> >  #ifdef CONFIG_X86_64
> > - } else if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
> > - /* TODO: Provide re-entrancy from post_kprobes_handler() and
> > -  * avoid exception stack corruption while single-stepping on
> > + /* TODO: Provide re-entrancy from
> > +  * post_kprobes_handler() and avoid exception
> > +  * stack corruption while single-stepping on
>
> Why would you modify it?

Do you mean the comment? I had this code in kprobe_handler earlier and
it consistently exceeded 80 columns in my case. Will fix it anyways.

> > + ret = 1;
> > + break;
> >  #endif
> > + case KPROBE_HIT_ACTIVE:
> > + /* a probe has been hit inside a
> > +  * user handler */
> > + save_previous_kprobe(kcb);
> > + set_current_kprobe(p, regs, kcb);
> > + kprobes_inc_nmissed_count(p);
> > + prepare_singlestep(p, regs);
> > + kcb->kprobe_status = KPROBE_REENTER;
> > + ret = 1;
> > + break;
> > + case KPROBE_HIT_SS:
> > + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> > + regs->flags &= ~TF_MASK;
> > + regs->flags |= kcb->kprobe_saved_flags;
> > + } else {
> > + /* BUG? */
> > + }
> > + break;
>
> If my thought is correct, we don't need to use a switch-case here,
> because there are only 2 cases: KPROBE_HIT_SSDONE (x86-64 only)
> or everything else.
> As a result, this function just sets up re-entrance.

As you've also pointed out in your previous reply, this case is
peculiar and therefore I believe it should be marked as a BUG(). I've
left the original case, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
(*p->ainsn.insn == BREAKPOINT_INSTRUCTION), untouched; it is handled
as it was before. However, if (kcb->kprobe_status==KPROBE_HIT_SS) &&
!(*p->ainsn.insn == BREAKPOINT_INSTRUCTION), then instead of
incrementing the nmissed count like before, it should cry out a BUG. This
is not an ordinary recursive-probe case and should not update
the nmissed count.

> > + default:
> > + /* impossible cases */
> > + BUG();
>
> I think no need to check that here.

It may not get hit at runtime but is quite informative.

> >  static int __kprobes kprobe_handler(struct pt_regs *regs)
> >  {
> > - struct kprobe *p;
> >   int ret = 0;
> >   kprobe_opcode_t *addr;
> > + struct kprobe *p, *cur;
> >   struct kprobe_ctlblk *kcb;
> >
> >   addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
> > + if (*addr != BREAKPOINT_INSTRUCTION) {
> > + /*
> > +  * The breakpoint instruction was removed right
> > +  * after we hit it.  Another cpu has removed
> > +  * either a probepoint or a debugger breakpoint
> > +  * at this address.  In either case, no further
> > +  * handling of this interrupt is appropriate.
> > +  * Back up over the (now missing) int3 and run
> > +  * the original instruction.
> > +  */
> > + regs->ip = (unsigned long)addr;
> > + return 1;
> > + }
>
> I think this block changing would better be separated from this patch,
> because it changes code flow logically.

Agreed. I'll address this in another thread, but for now it is safest
to check it for all cases right at the entry of kprobe_handler().

> >
> > - /*
> > -  * We don't want to be preempted for the entire
> > -  * duration of kprobe processing
> > -  */
> >   preempt_disable();
> >   kcb = get_kprobe_ctlblk();
> > -
> > + cur = kprobe_running();
>
> I think you don't need "cur", because kprobe_running() is called
> just once on each path.

Will get rid of that.

> > - regs->ip = (unsigned long)addr;
> > + if (!p->pre_handler || !p->pre_handler(p, regs)) {
> > + if (setup_boost(p, regs)) {
> > + prepare_singlestep(p, regs);
> > + kcb->kprobe_status = KPROBE_HIT_SS;
>
> (*) duplication 1
>
> > + }
> > + }

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Masami Hiramatsu
Hi Abhishek,

Thank you for the good work.

Abhishek Sagar wrote:
> @@ -441,6 +441,26 @@ void __kprobes arch_prepare_kretprobe(struct 
> kretprobe_instance *ri,
>   /* Replace the return addr with trampoline addr */
>   *sara = (unsigned long) _trampoline;
>  }
> +
> +#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
> +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
> *regs)
> +{
> + if (p->ainsn.boostable == 1 && !p->post_handler) {
> + /* Boost up -- we can execute copied instructions directly */
> + reset_current_kprobe();
> + regs->ip = (unsigned long)p->ainsn.insn;
> + preempt_enable_no_resched();
> + return 0;
> + }
> + return 1;
> +}
> +#else
> +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
> *regs)
> +{
> + return 1;
> +}
> +#endif

I think setup_singlestep() in your first patch is better, because it avoided
code duplication(*).

> +
>  /*
>   * We have reentered the kprobe_handler(), since another probe was hit while
>   * within the handler. We save the original kprobes variables and just single
> @@ -449,29 +469,47 @@ void __kprobes arch_prepare_kretprobe(struct 
> kretprobe_instance *ri,
>  static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
>   struct kprobe_ctlblk *kcb)
>  {
> - if (kcb->kprobe_status == KPROBE_HIT_SS &&
> - *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> - regs->flags &= ~X86_EFLAGS_TF;
> - regs->flags |= kcb->kprobe_saved_flags;
> - return 0;
> + int ret = 0;
> +
> + switch (kcb->kprobe_status) {
> + case KPROBE_HIT_SSDONE:
>  #ifdef CONFIG_X86_64
> - } else if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
> - /* TODO: Provide re-entrancy from post_kprobes_handler() and
> -  * avoid exception stack corruption while single-stepping on
> + /* TODO: Provide re-entrancy from
> +  * post_kprobes_handler() and avoid exception
> +  * stack corruption while single-stepping on

Why would you modify it?

>* the instruction of the new probe.
>*/
>   arch_disarm_kprobe(p);
>   regs->ip = (unsigned long)p->addr;
>   reset_current_kprobe();
> - return 1;
> + preempt_enable_no_resched();

I think preempt_enable here is a good idea.

> + ret = 1;
> + break;
>  #endif
> + case KPROBE_HIT_ACTIVE:
> + /* a probe has been hit inside a
> +  * user handler */
> + save_previous_kprobe(kcb);
> + set_current_kprobe(p, regs, kcb);
> + kprobes_inc_nmissed_count(p);
> + prepare_singlestep(p, regs);
> + kcb->kprobe_status = KPROBE_REENTER;
> + ret = 1;
> + break;
> + case KPROBE_HIT_SS:
> + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> + regs->flags &= ~TF_MASK;
> + regs->flags |= kcb->kprobe_saved_flags;
> + } else {
> + /* BUG? */
> + }
> + break;

If my thought is correct, we don't need to use a switch-case here,
because there are only 2 cases: KPROBE_HIT_SSDONE (x86-64 only)
or everything else.
As a result, this function just sets up re-entrance.
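
In other words, something of this shape (only a rough sketch, not a
tested patch):
---
static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
				    struct kprobe_ctlblk *kcb)
{
#ifdef CONFIG_X86_64
	if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
		/* re-entrancy from post_kprobes_handler() */
		arch_disarm_kprobe(p);
		regs->ip = (unsigned long)p->addr;
		reset_current_kprobe();
		return 1;
	}
#endif
	/* every other status: save state and single-step the new probe */
	save_previous_kprobe(kcb);
	set_current_kprobe(p, regs, kcb);
	kprobes_inc_nmissed_count(p);
	prepare_singlestep(p, regs);
	kcb->kprobe_status = KPROBE_REENTER;
	return 1;
}
---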

> + default:
> + /* impossible cases */
> + BUG();

I think no need to check that here.

>   }
> - save_previous_kprobe(kcb);
> - set_current_kprobe(p, regs, kcb);
> - kprobes_inc_nmissed_count(p);
> - prepare_singlestep(p, regs);
> - kcb->kprobe_status = KPROBE_REENTER;
> - return 1;
> +
> + return ret;
>  }
>  
>  /*
> @@ -480,82 +518,67 @@ static int __kprobes reenter_kprobe(struct kprobe *p, 
> struct pt_regs *regs,
>   */
>  static int __kprobes kprobe_handler(struct pt_regs *regs)
>  {
> - struct kprobe *p;
>   int ret = 0;
>   kprobe_opcode_t *addr;
> + struct kprobe *p, *cur;
>   struct kprobe_ctlblk *kcb;
>  
>   addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
> + if (*addr != BREAKPOINT_INSTRUCTION) {
> + /*
> +  * The breakpoint instruction was removed right
> +  * after we hit it.  Another cpu has removed
> +  * either a probepoint or a debugger breakpoint
> +  * at this address.  In either case, no further
> +  * handling of this interrupt is appropriate.
> +  * Back up over the (now missing) int3 and run
> +  * the original instruction.
> +  */
> + regs->ip = (unsigned long)addr;
> + return 1;
> + }

I think this block changing would better be separated from this patch,
because it changes code flow logically.

>  
> - /*
> -  * We don't want to be preempted for the 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Masami Hiramatsu
Hi,

Abhishek Sagar wrote:
>>>  static int __kprobes kprobe_handler(struct pt_regs *regs)
>>>  {
>>> - struct kprobe *p;
>>>   int ret = 0;
>>>   kprobe_opcode_t *addr;
>>> + struct kprobe *p, *cur;
>>>   struct kprobe_ctlblk *kcb;
>>>
>>>   addr = (kprobe_opcode_t *)(regs->eip - sizeof(kprobe_opcode_t));
>>> + if (*addr != BREAKPOINT_INSTRUCTION) {
>>> + /*
>>> +  * The breakpoint instruction was removed right
>>> +  * after we hit it.  Another cpu has removed
>>> +  * either a probepoint or a debugger breakpoint
>>> +  * at this address.  In either case, no further
>>> +  * handling of this interrupt is appropriate.
>>> +  * Back up over the (now missing) int3 and run
>>> +  * the original instruction.
>>> +  */
>>> + regs->eip -= sizeof(kprobe_opcode_t);
>>> + return 1;
>>> + }
> 
> I have moved the above breakpoint removal check to the beginning of
> the function. The current check is for the '!p' case only, whereas IMO
> this check should be performed for all cases. An external debugger may
> very well plant and remove a breakpoint on an address which has a probe
> on it. This check is racy in any case, so it is best placed
> within kprobe_handler() wherever its duplication can be
> avoided.

I think there is another reason; we now have disable_all_kprobes(), which
disarms (removes the BREAKPOINT instruction of) all kprobes. So this should be
commented.

By the way, IMHO, if a debugger causes it, that might be a bug,
since when the debugger removes a breakpoint, it should prevent the exception
caused by that breakpoint on other processors. In other words, the debugger
should work as transparently as possible.

>>> - /* Check we're not actually recursing */
>>> - if (kprobe_running()) {
>>> - p = get_kprobe(addr);
>>> - if (p) {
>>> - if (kcb->kprobe_status == KPROBE_HIT_SS &&
>>> - *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
>>> - regs->eflags &= ~TF_MASK;
>>> - regs->eflags |= kcb->kprobe_saved_eflags;
>>> - goto no_kprobe;
>>> - }
>>> - /* We have reentered the kprobe_handler(), since
>>> -  * another probe was hit while within the handler.
>>> -  * We here save the original kprobes variables and
>>> -  * just single step on the instruction of the new 
>>> probe
>>> -  * without calling any user handlers.
>>> -  */
>>> - save_previous_kprobe(kcb);
>>> - set_current_kprobe(p, regs, kcb);
>>> - kprobes_inc_nmissed_count(p);
>>> - prepare_singlestep(p, regs);
>>> - kcb->kprobe_status = KPROBE_REENTER;
>>> - return 1;
>>> - } else {
>>> - if (*addr != BREAKPOINT_INSTRUCTION) {
>>> - /* The breakpoint instruction was removed by
>>> -  * another cpu right after we hit, no further
>>> -  * handling of this interrupt is appropriate
>>> -  */
>>> - regs->eip -= sizeof(kprobe_opcode_t);
>>> + if (p) {
>>> + if (cur) {
>>> + switch (kcb->kprobe_status) {
>>> + case KPROBE_HIT_ACTIVE:
>>> + case KPROBE_HIT_SSDONE:
>>> + /* a probe has been hit inside a
>>> +  * user handler */
>>> + save_previous_kprobe(kcb);
>>> + set_current_kprobe(p, regs, kcb);
>>> + kprobes_inc_nmissed_count(p);
>>> + prepare_singlestep(p, regs);
>>> + kcb->kprobe_status = KPROBE_REENTER;
>>>   ret = 1;
>>> - goto no_kprobe;
>>> - }
>>> - p = __get_cpu_var(current_kprobe);
>>> - if (p->break_handler && p->break_handler(p, regs)) {
>>> - goto ss_probe;
>>> + break;
>>> + case KPROBE_HIT_SS:
>>> + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) 
>>> {
>>> + regs->eflags &= ~TF_MASK;
>>> + regs->eflags |=
>>> + kcb->kprobe_saved_eflags;
>>> + } else {
>>> + /* BUG? */
>>> + }

I think the whole of this case might have a bug. Since "p" is a
recursing kprobe on the instruction buffer of the running kprobe,
*p->ainsn.insn is the next singlestep target. I think KPROBE_HIT_SS
should be treated the same as KPROBE_HIT_ACTIVE.

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Masami Hiramatsu
Hi,

Abhishek Sagar wrote:
  static int __kprobes kprobe_handler(struct pt_regs *regs)
  {
 - struct kprobe *p;
   int ret = 0;
   kprobe_opcode_t *addr;
 + struct kprobe *p, *cur;
   struct kprobe_ctlblk *kcb;

   addr = (kprobe_opcode_t *)(regs-eip - sizeof(kprobe_opcode_t));
 + if (*addr != BREAKPOINT_INSTRUCTION) {
 + /*
 +  * The breakpoint instruction was removed right
 +  * after we hit it.  Another cpu has removed
 +  * either a probepoint or a debugger breakpoint
 +  * at this address.  In either case, no further
 +  * handling of this interrupt is appropriate.
 +  * Back up over the (now missing) int3 and run
 +  * the original instruction.
 +  */
 + regs-eip -= sizeof(kprobe_opcode_t);
 + return 1;
 + }
 
 I have moved the above breakpoint removal check at the beginning of
 the function. The current check is for '!p' case only, whereas IMO
 this check should be performed for all cases. An external debugger may
 very well plant and remove a breakpoint on an address which has probe
 on it. This check is a race nevertheless, so its relative placing
 within kprobe_handler() would be best where its duplication can be
 avoided.

I think there is another reason; now we have disable_all_kprobes() which
disarms(removes BREAKPOINT instruction) all kprobes. So, this should be 
commented.

By the way, IMHO, if a debugger causes it, that might be a bug.
Since when the debugger removes a breakpoint, it should prevent the exception
caused by the breakpoint on other processors. In other words, the debugger
should work as transparently as possible.

 - /* Check we're not actually recursing */
 - if (kprobe_running()) {
 - p = get_kprobe(addr);
 - if (p) {
 - if (kcb-kprobe_status == KPROBE_HIT_SS 
 - *p-ainsn.insn == BREAKPOINT_INSTRUCTION) {
 - regs-eflags = ~TF_MASK;
 - regs-eflags |= kcb-kprobe_saved_eflags;
 - goto no_kprobe;
 - }
 - /* We have reentered the kprobe_handler(), since
 -  * another probe was hit while within the handler.
 -  * We here save the original kprobes variables and
 -  * just single step on the instruction of the new 
 probe
 -  * without calling any user handlers.
 -  */
 - save_previous_kprobe(kcb);
 - set_current_kprobe(p, regs, kcb);
 - kprobes_inc_nmissed_count(p);
 - prepare_singlestep(p, regs);
 - kcb-kprobe_status = KPROBE_REENTER;
 - return 1;
 - } else {
 - if (*addr != BREAKPOINT_INSTRUCTION) {
 - /* The breakpoint instruction was removed by
 -  * another cpu right after we hit, no further
 -  * handling of this interrupt is appropriate
 -  */
 - regs-eip -= sizeof(kprobe_opcode_t);
 + if (p) {
 + if (cur) {
 + switch (kcb-kprobe_status) {
 + case KPROBE_HIT_ACTIVE:
 + case KPROBE_HIT_SSDONE:
 + /* a probe has been hit inside a
 +  * user handler */
 + save_previous_kprobe(kcb);
 + set_current_kprobe(p, regs, kcb);
 + kprobes_inc_nmissed_count(p);
 + prepare_singlestep(p, regs);
 + kcb-kprobe_status = KPROBE_REENTER;
   ret = 1;
 - goto no_kprobe;
 - }
 - p = __get_cpu_var(current_kprobe);
 - if (p-break_handler  p-break_handler(p, regs)) {
 - goto ss_probe;
 + break;
 + case KPROBE_HIT_SS:
 + if (*p-ainsn.insn == BREAKPOINT_INSTRUCTION) 
 {
 + regs-eflags = ~TF_MASK;
 + regs-eflags |=
 + kcb-kprobe_saved_eflags;
 + } else {
 + /* BUG? */
 + }

I think whole of this case might have a bug. Since p is a recursing kprobe on
the instruction buffer of the running kprobe, *p-ainsn.insn is the next 
singlestep
target. I think KPROBE_HIT_SS should be treated as same as KPROBE_HIT_ACTIVE.

If the author thought a situation that a user inserts a kprobe on a breakpoint 
which
has been 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Masami Hiramatsu
Hi Abhishek,

Thank you for good work.

Abhishek Sagar wrote:
 @@ -441,6 +441,26 @@ void __kprobes arch_prepare_kretprobe(struct 
 kretprobe_instance *ri,
   /* Replace the return addr with trampoline addr */
   *sara = (unsigned long) kretprobe_trampoline;
  }
 +
 +#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
 +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
 *regs)
 +{
 + if (p-ainsn.boostable == 1  !p-post_handler) {
 + /* Boost up -- we can execute copied instructions directly */
 + reset_current_kprobe();
 + regs-ip = (unsigned long)p-ainsn.insn;
 + preempt_enable_no_resched();
 + return 0;
 + }
 + return 1;
 +}
 +#else
 +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
 *regs)
 +{
 + return 1;
 +}
 +#endif

I think setup_singlestep() in your first patch is better, because it avoided
code duplication(*).

 +
  /*
   * We have reentered the kprobe_handler(), since another probe was hit while
   * within the handler. We save the original kprobes variables and just single
 @@ -449,29 +469,47 @@ void __kprobes arch_prepare_kretprobe(struct 
 kretprobe_instance *ri,
  static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
   struct kprobe_ctlblk *kcb)
  {
 - if (kcb-kprobe_status == KPROBE_HIT_SS 
 - *p-ainsn.insn == BREAKPOINT_INSTRUCTION) {
 - regs-flags = ~X86_EFLAGS_TF;
 - regs-flags |= kcb-kprobe_saved_flags;
 - return 0;
 + int ret = 0;
 +
 + switch (kcb-kprobe_status) {
 + case KPROBE_HIT_SSDONE:
  #ifdef CONFIG_X86_64
 - } else if (kcb-kprobe_status == KPROBE_HIT_SSDONE) {
 - /* TODO: Provide re-entrancy from post_kprobes_handler() and
 -  * avoid exception stack corruption while single-stepping on
 + /* TODO: Provide re-entrancy from
 +  * post_kprobes_handler() and avoid exception
 +  * stack corruption while single-stepping on

Why would you modify it?

* the instruction of the new probe.
*/
   arch_disarm_kprobe(p);
   regs-ip = (unsigned long)p-addr;
   reset_current_kprobe();
 - return 1;
 + preempt_enable_no_resched();

I think preepmt_enable here is good idea.

 + ret = 1;
 + break;
  #endif
 + case KPROBE_HIT_ACTIVE:
 + /* a probe has been hit inside a
 +  * user handler */
 + save_previous_kprobe(kcb);
 + set_current_kprobe(p, regs, kcb);
 + kprobes_inc_nmissed_count(p);
 + prepare_singlestep(p, regs);
 + kcb-kprobe_status = KPROBE_REENTER;
 + ret = 1;
 + break;
 + case KPROBE_HIT_SS:
 + if (*p-ainsn.insn == BREAKPOINT_INSTRUCTION) {
 + regs-flags = ~TF_MASK;
 + regs-flags |= kcb-kprobe_saved_flags;
 + } else {
 + /* BUG? */
 + }
 + break;

If my thought is correct, we don't need to use swich-case here,
Because there are only 2 cases, KPROBE_HIT_SSDONE (x86-64 only)
or others.
As a result, this function just setups re-entrance.

 + default:
 + /* impossible cases */
 + BUG();

I think no need to check that here.

   }
 - save_previous_kprobe(kcb);
 - set_current_kprobe(p, regs, kcb);
 - kprobes_inc_nmissed_count(p);
 - prepare_singlestep(p, regs);
 - kcb-kprobe_status = KPROBE_REENTER;
 - return 1;
 +
 + return ret;
  }
  
  /*
 @@ -480,82 +518,67 @@ static int __kprobes reenter_kprobe(struct kprobe *p, 
 struct pt_regs *regs,
   */
  static int __kprobes kprobe_handler(struct pt_regs *regs)
  {
 - struct kprobe *p;
   int ret = 0;
   kprobe_opcode_t *addr;
 + struct kprobe *p, *cur;
   struct kprobe_ctlblk *kcb;
  
   addr = (kprobe_opcode_t *)(regs-ip - sizeof(kprobe_opcode_t));
 + if (*addr != BREAKPOINT_INSTRUCTION) {
 + /*
 +  * The breakpoint instruction was removed right
 +  * after we hit it.  Another cpu has removed
 +  * either a probepoint or a debugger breakpoint
 +  * at this address.  In either case, no further
 +  * handling of this interrupt is appropriate.
 +  * Back up over the (now missing) int3 and run
 +  * the original instruction.
 +  */
 + regs-ip = (unsigned long)addr;
 + return 1;
 + }

I think this block changing would better be separated from this patch,
because it changes code flow logically.

  
 - /*
 -  * We don't want to be preempted for the entire
 -  * duration of kprobe processing
 -  */
   preempt_disable();
   kcb = get_kprobe_ctlblk();
 -
 + cur = 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Abhishek Sagar
On 1/2/08, Masami Hiramatsu [EMAIL PROTECTED] wrote:
 I think setup_singlestep() in your first patch is better, because it avoided
 code duplication(*).

Will retain it then.

   static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
   {
  - if (kcb-kprobe_status == KPROBE_HIT_SS 
  - *p-ainsn.insn == BREAKPOINT_INSTRUCTION) {
  - regs-flags = ~X86_EFLAGS_TF;
  - regs-flags |= kcb-kprobe_saved_flags;
  - return 0;
  + int ret = 0;
  +
  + switch (kcb-kprobe_status) {
  + case KPROBE_HIT_SSDONE:
   #ifdef CONFIG_X86_64
  - } else if (kcb-kprobe_status == KPROBE_HIT_SSDONE) {
  - /* TODO: Provide re-entrancy from post_kprobes_handler() and
  -  * avoid exception stack corruption while single-stepping on
  + /* TODO: Provide re-entrancy from
  +  * post_kprobes_handler() and avoid exception
  +  * stack corruption while single-stepping on

 Why would you modify it?

Do you mean the comment? I had this code in kprobe_handler earlier and
it consistently exceeded 80 columns in my case. Will fix it anyways.

  + ret = 1;
  + break;
   #endif
  + case KPROBE_HIT_ACTIVE:
  + /* a probe has been hit inside a
  +  * user handler */
  + save_previous_kprobe(kcb);
  + set_current_kprobe(p, regs, kcb);
  + kprobes_inc_nmissed_count(p);
  + prepare_singlestep(p, regs);
  + kcb-kprobe_status = KPROBE_REENTER;
  + ret = 1;
  + break;
  + case KPROBE_HIT_SS:
  + if (*p-ainsn.insn == BREAKPOINT_INSTRUCTION) {
  + regs-flags = ~TF_MASK;
  + regs-flags |= kcb-kprobe_saved_flags;
  + } else {
  + /* BUG? */
  + }
  + break;

 If my thought is correct, we don't need to use swich-case here,
 Because there are only 2 cases, KPROBE_HIT_SSDONE (x86-64 only)
 or others.
 As a result, this function just setups re-entrance.

As you've also pointed out in your previous reply, this case is
peculiar and therefore I believe it should be marked as a BUG(). I've
left the original case, if (kcb-kprobe_status==KPROBE_HIT_SS) 
(*p-ainsn.insn == BREAKPOINT_INSTRUCTION), untouched and is handled
as it was before. However, if (kcb-kprobe_status==KPROBE_HIT_SS) 
!(*p-ainsn.insn == BREAKPOINT_INSTRUCTION), then instead of
incrementing nmissed count like before, it should cry out a BUG. This
is not an ordinary recursive probe handling case which should update
the nmissed count.

  + default:
  + /* impossible cases */
  + BUG();

 I think no need to check that here.

It may not get hit at runtime but is quite informative.

   static int __kprobes kprobe_handler(struct pt_regs *regs)
   {
  - struct kprobe *p;
int ret = 0;
kprobe_opcode_t *addr;
  + struct kprobe *p, *cur;
struct kprobe_ctlblk *kcb;
 
addr = (kprobe_opcode_t *)(regs-ip - sizeof(kprobe_opcode_t));
  + if (*addr != BREAKPOINT_INSTRUCTION) {
  + /*
  +  * The breakpoint instruction was removed right
  +  * after we hit it.  Another cpu has removed
  +  * either a probepoint or a debugger breakpoint
  +  * at this address.  In either case, no further
  +  * handling of this interrupt is appropriate.
  +  * Back up over the (now missing) int3 and run
  +  * the original instruction.
  +  */
  + regs-ip = (unsigned long)addr;
  + return 1;
  + }

 I think this block changing would better be separated from this patch,
 because it changes code flow logically.

Agreed. I'll address this in another thread, but for now it is safest
to check it for all cases right at the entry of kprobe_handler().

 
  - /*
  -  * We don't want to be preempted for the entire
  -  * duration of kprobe processing
  -  */
preempt_disable();
kcb = get_kprobe_ctlblk();
  -
  + cur = kprobe_running();

 I think you don't need cur, because kprobe_running() is called
 just once on each path.

Will get rid of that.

  - regs-ip = (unsigned long)addr;
  + if (!p-pre_handler || !p-pre_handler(p, regs)) {
  + if (setup_boost(p, regs)) {
  + prepare_singlestep(p, regs);
  + kcb-kprobe_status = KPROBE_HIT_SS;

 (*) duplication 1

  + }
  + }
ret = 1;
  - goto preempt_out;
}
  - if (kprobe_running()) {
  - p = __get_cpu_var(current_kprobe);
  - 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Ingo Molnar

* Abhishek Sagar [EMAIL PROTECTED] wrote:

   + default:
   + /* impossible cases */
   + BUG();
 
  I think no need to check that here.
 
 It may not get hit at runtime but is quite informative.

sidenote: please use WARN_ON(1) instead. In the case of the impossible 
triggering somehow (the Linux kernel is notorious for that ;-), we'd 
have a nice WARN_ON(1) in the dmesg, instead of a crashed box.

Ingo
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-02 Thread Masami Hiramatsu
Hi Abhishek,

Abhishek Sagar wrote:
 On 1/2/08, Masami Hiramatsu [EMAIL PROTECTED] wrote:
 I think setup_singlestep() in your first patch is better, because it avoided
 code duplication(*).
 
 Will retain it then.

Thank you very much.

  static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
   struct kprobe_ctlblk *kcb)
  {
 - if (kcb-kprobe_status == KPROBE_HIT_SS 
 - *p-ainsn.insn == BREAKPOINT_INSTRUCTION) {
 - regs-flags = ~X86_EFLAGS_TF;
 - regs-flags |= kcb-kprobe_saved_flags;
 - return 0;
 + int ret = 0;
 +
 + switch (kcb-kprobe_status) {
 + case KPROBE_HIT_SSDONE:
  #ifdef CONFIG_X86_64
 - } else if (kcb-kprobe_status == KPROBE_HIT_SSDONE) {
 - /* TODO: Provide re-entrancy from post_kprobes_handler() and
 -  * avoid exception stack corruption while single-stepping on
 + /* TODO: Provide re-entrancy from
 +  * post_kprobes_handler() and avoid exception
 +  * stack corruption while single-stepping on
 Why would you modify it?
 
 Do you mean the comment? I had this code in kprobe_handler earlier and
 it consistently exceeded 80 columns in my case. Will fix it anyways.

Yes, my previous patch already took care of it. Thanks.

 + ret = 1;
 + break;
  #endif
 + case KPROBE_HIT_ACTIVE:
 + /* a probe has been hit inside a
 +  * user handler */
 + save_previous_kprobe(kcb);
 + set_current_kprobe(p, regs, kcb);
 + kprobes_inc_nmissed_count(p);
 + prepare_singlestep(p, regs);
 + kcb-kprobe_status = KPROBE_REENTER;
 + ret = 1;
 + break;
 + case KPROBE_HIT_SS:
 + if (*p-ainsn.insn == BREAKPOINT_INSTRUCTION) {
 + regs-flags = ~TF_MASK;
 + regs-flags |= kcb-kprobe_saved_flags;
 + } else {
 + /* BUG? */
 + }
 + break;
 If my thought is correct, we don't need to use swich-case here,
 Because there are only 2 cases, KPROBE_HIT_SSDONE (x86-64 only)
 or others.
 As a result, this function just setups re-entrance.
 
 As you've also pointed out in your previous reply, this case is
 peculiar and therefore I believe it should be marked as a BUG(). I've
 left the original case, if (kcb-kprobe_status==KPROBE_HIT_SS) 
 (*p-ainsn.insn == BREAKPOINT_INSTRUCTION), untouched and is handled
 as it was before. However, if (kcb-kprobe_status==KPROBE_HIT_SS) 
 !(*p-ainsn.insn == BREAKPOINT_INSTRUCTION), then instead of
 incrementing nmissed count like before, it should cry out a BUG. This
 is not an ordinary recursive probe handling case which should update
 the nmissed count.

Hmm, I can not agree, because it is possible to insert a kprobe
into kprobe's instruction buffer. If it should be a bug, we must
check it when registering the kprobe.
(And also, in *p->ainsn.insn == BREAKPOINT_INSTRUCTION case, I doubt
that the kernel can handle this orphaned breakpoint, because the
breakpoint address has been changed.)
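
(For illustration, a hedged sketch of the registration-time check being suggested: refuse
an address that lands inside another kprobe's out-of-line instruction slot. The helper
below and the way it would be wired into register_kprobe() are hypothetical, not code
from this thread.)

#include <linux/kprobes.h>

/* Sketch: does addr fall inside old's single-step (out-of-line) slot? */
static int addr_in_insn_slot(struct kprobe *old, unsigned long addr)
{
	unsigned long slot = (unsigned long)old->ainsn.insn;
	unsigned long size = MAX_INSN_SIZE * sizeof(kprobe_opcode_t);

	return addr >= slot && addr < slot + size;
}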

Anyway, if you would like to change the logic, please separate it from
cleanup patch.

 + default:
 + /* impossible cases */
 + BUG();
 I think no need to check that here.
 
 It may not get hit at runtime but is quite informative.

OK, I see.

 
  static int __kprobes kprobe_handler(struct pt_regs *regs)
  {
 - struct kprobe *p;
   int ret = 0;
   kprobe_opcode_t *addr;
 + struct kprobe *p, *cur;
   struct kprobe_ctlblk *kcb;

   addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
 + if (*addr != BREAKPOINT_INSTRUCTION) {
 + /*
 +  * The breakpoint instruction was removed right
 +  * after we hit it.  Another cpu has removed
 +  * either a probepoint or a debugger breakpoint
 +  * at this address.  In either case, no further
 +  * handling of this interrupt is appropriate.
 +  * Back up over the (now missing) int3 and run
 +  * the original instruction.
 +  */
 + regs->ip = (unsigned long)addr;
 + return 1;
 + }
 I think this block changing would better be separated from this patch,
 because it changes code flow logically.
 
 Agreed. I'll address this in another thread, but for now it is safest
 to check it for all cases right at the entry of kprobe_handler().

Thanks,

 -ss_probe:
 - ret = 1;
 -#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
 - if (p->ainsn.boostable == 1 && !p->post_handler) {
 - /* Boost up -- we can execute copied instructions directly */
 - reset_current_kprobe();
 - regs->ip = (unsigned long)p->ainsn.insn;
 - goto preempt_out;
 - }
 -#endif
 - prepare_singlestep(p, regs);
 - kcb->kprobe_status = KPROBE_HIT_SS;
 - goto 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-01 Thread Abhishek Sagar
On 1/2/08, Harvey Harrison <[EMAIL PROTECTED]> wrote:
> > +#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
> > +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
> > *regs)
> > +{
> > + if (p->ainsn.boostable == 1 && !p->post_handler) {
> > + /* Boost up -- we can execute copied instructions directly */
> > + reset_current_kprobe();
> > + regs->ip = (unsigned long)p->ainsn.insn;
> > + preempt_enable_no_resched();
> > + return 0;
> > + }
> > + return 1;
> > +}
> > +#else
> > +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
> > *regs)
> > +{
> > + return 1;
> > +}
> > +#endif
> > +
> In the kernel __always_inline == inline, also I think it's nicer to only
> have one function declaration, and then ifdef the body of the function.
>
> Something like:
>
> static inline int setup_boost(struct kprobe *p, struct pt_regs *regs)
> {
> #if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
> if (p->ainsn.boostable == 1 && !p->post_handler) {
> /* Boost up -- we can execute copied instructions directly */
> reset_current_kprobe();
> regs->ip = (unsigned long)p->ainsn.insn;
> preempt_enable_no_resched();
> return 0;
> }
> #endif
> return 1;
> }

Ok...will include this after I pick up some more comments.

> >  static int __kprobes kprobe_handler(struct pt_regs *regs)
> >  {
> > - struct kprobe *p;
> >   int ret = 0;
> >   kprobe_opcode_t *addr;
> > + struct kprobe *p, *cur;
> >   struct kprobe_ctlblk *kcb;
> >
> >   addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
> > + if (*addr != BREAKPOINT_INSTRUCTION) {
> > + /*
> > +  * The breakpoint instruction was removed right
> > +  * after we hit it.  Another cpu has removed
> > +  * either a probepoint or a debugger breakpoint
> > +  * at this address.  In either case, no further
> > +  * handling of this interrupt is appropriate.
> > +  * Back up over the (now missing) int3 and run
> > +  * the original instruction.
> > +  */
> > + regs->ip = (unsigned long)addr;
> > + return 1;
> > + }
>
> This return is fine I guess, but after the preempt_disable() I like
> the goto approach as it will be easier to see what paths enable
> preemption again and which don't. Bonus points if we can move this
> to the caller or make sure we reenable in all cases before returning
> and pull in the code in the caller that does this for us.
>
> But I guess your approach of using ret to test whether we need to
> reenable preemption or not would work as a signal to the caller that
> they need to reenable preemption.

Hmm...since enabling preemption is tied to 'ret', anyone reading
kprobe_handler will have to follow around all calls which modify it.
There are some checks in the current kprobe_handler definition made
just to do what you're saying, i.e., to push all preemption
enable/disables in kprobe_handler. Like this one (from the current x86
kprobe_handler):


ret = reenter_kprobe(p, regs, kcb);
if (kcb->kprobe_status == KPROBE_REENTER)
{
ret = 1;
goto out;
}
goto preempt_out;

-

This is just confusing because we're not actually making any
exceptions here for the KPROBE_REENTER case (which has been partially
handled in reenter_kprobe), rather just tricking our way out of
preemption enabling for a couple of cases in reenter_kprobe.
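
(Again just a sketch of the alternative being debated, not the posted code: report
"handled or not" through the return value and keep the preempt bookkeeping in one spot
in a caller. do_int3_kprobe() is a made-up name, and the boosted fast path -- which in
the real code re-enables preemption itself -- is deliberately ignored here.)

static int do_int3_kprobe(struct pt_regs *regs)
{
	int ret;

	preempt_disable();
	ret = kprobe_handler(regs);		/* assumed not to touch the preempt count */
	if (!ret)
		preempt_enable_no_resched();	/* not ours: give the trap back to the kernel */
	return ret;
}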

> Cheers,
>
> Harvey

Thanks,
Abhishek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-01 Thread Abhishek Sagar
On 1/1/08, Masami Hiramatsu <[EMAIL PROTECTED]> wrote:

> Could you separate changing logic from cleanup and explain
> why the logic should be changed?

The major portion of logical changes is re-routing of code flow by
removing gotos. Hopefully all the cases have been covered. There are a
couple of changes that I'd like to address (see below).

> > +static __always_inline void setup_singlestep(struct kprobe *p,
> > +  struct pt_regs *regs,
> > +  struct kprobe_ctlblk *kcb)
> > +{
> > +#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
> > + if (p->ainsn.boostable == 1 && !p->post_handler) {
> > + /* Boost up -- we can execute copied instructions directly */
> > + reset_current_kprobe();
> > + regs->eip = (unsigned long)p->ainsn.insn;
> > + preempt_enable_no_resched();
> > + } else {
> > + prepare_singlestep(p, regs);
> > + kcb->kprobe_status = KPROBE_HIT_SS;
> > + }
> > +#else
> > + prepare_singlestep(p, regs);
> > + kcb->kprobe_status = KPROBE_HIT_SS;
>
> please avoid code duplication.

I've addressed it in the latest patch.

> >  static int __kprobes kprobe_handler(struct pt_regs *regs)
> >  {
> > - struct kprobe *p;
> >   int ret = 0;
> >   kprobe_opcode_t *addr;
> > + struct kprobe *p, *cur;
> >   struct kprobe_ctlblk *kcb;
> >
> >   addr = (kprobe_opcode_t *)(regs->eip - sizeof(kprobe_opcode_t));
> > + if (*addr != BREAKPOINT_INSTRUCTION) {
> > + /*
> > +  * The breakpoint instruction was removed right
> > +  * after we hit it.  Another cpu has removed
> > +  * either a probepoint or a debugger breakpoint
> > +  * at this address.  In either case, no further
> > +  * handling of this interrupt is appropriate.
> > +  * Back up over the (now missing) int3 and run
> > +  * the original instruction.
> > +  */
> > + regs->eip -= sizeof(kprobe_opcode_t);
> > + return 1;
> > + }

I have moved the above breakpoint removal check at the beginning of
the function. The current check is for '!p' case only, whereas IMO
this check should be performed for all cases. An external debugger may
very well plant and remove a breakpoint on an address which has a probe
on it. This check is a race nevertheless, so its relative placing
within kprobe_handler() would be best where its duplication can be
avoided.
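
(A minimal sketch of that ordering, using the unified regs->ip naming; illustrative only,
the real hunk is quoted above: do the racy check before preempt_disable() so no bail-out
path has anything to undo. The helper name is made up.)

static int __kprobes check_removed_int3(struct pt_regs *regs)
{
	kprobe_opcode_t *addr;

	addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
	if (*addr != BREAKPOINT_INSTRUCTION) {
		regs->ip = (unsigned long)addr;	/* back up over the missing int3 */
		return 1;			/* handled, nothing more to do */
	}
	return 0;	/* still a breakpoint: carry on with the normal path */
}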

> >
> > - /* Check we're not actually recursing */
> > - if (kprobe_running()) {
> > - p = get_kprobe(addr);
> > - if (p) {
> > - if (kcb->kprobe_status == KPROBE_HIT_SS &&
> > - *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> > - regs->eflags &= ~TF_MASK;
> > - regs->eflags |= kcb->kprobe_saved_eflags;
> > - goto no_kprobe;
> > - }
> > - /* We have reentered the kprobe_handler(), since
> > -  * another probe was hit while within the handler.
> > -  * We here save the original kprobes variables and
> > -  * just single step on the instruction of the new 
> > probe
> > -  * without calling any user handlers.
> > -  */
> > - save_previous_kprobe(kcb);
> > - set_current_kprobe(p, regs, kcb);
> > - kprobes_inc_nmissed_count(p);
> > - prepare_singlestep(p, regs);
> > - kcb->kprobe_status = KPROBE_REENTER;
> > - return 1;
> > - } else {
> > - if (*addr != BREAKPOINT_INSTRUCTION) {
> > - /* The breakpoint instruction was removed by
> > -  * another cpu right after we hit, no further
> > -  * handling of this interrupt is appropriate
> > -  */
> > - regs->eip -= sizeof(kprobe_opcode_t);
> > + if (p) {
> > + if (cur) {
> > + switch (kcb->kprobe_status) {
> > + case KPROBE_HIT_ACTIVE:
> > + case KPROBE_HIT_SSDONE:
> > + /* a probe has been hit inside a
> > +  * user handler */
> > + save_previous_kprobe(kcb);
> > + set_current_kprobe(p, regs, kcb);
> > + kprobes_inc_nmissed_count(p);
> > + prepare_singlestep(p, regs);
> > + kcb->kprobe_status = KPROBE_REENTER;
> >   ret = 1;
> > - goto no_kprobe;
> > - }
> > 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-01 Thread Harvey Harrison
Just a few nitpicks.  I need to look closer at the reenter_kprobe
changes, but it looks like this should lead to clearer flow than
before.  The whole !p/kprobe_running() differences were pretty
twisty before.

On Wed, 2008-01-02 at 01:10 +0530, Abhishek Sagar wrote:
> Thanks for pointing me to the right tree. I have made some code 
> re-arrangements in this one.
> 
> Signed-off-by: Abhishek Sagar <[EMAIL PROTECTED]>
> Signed-off-by: Quentin Barnes <[EMAIL PROTECTED]>
> ---
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index a72e02b..45adc8e 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -441,6 +441,26 @@ void __kprobes arch_prepare_kretprobe(struct 
> kretprobe_instance *ri,
>   /* Replace the return addr with trampoline addr */
>   *sara = (unsigned long) _trampoline;
>  }
> +
> +#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
> +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
> *regs)
> +{
> + if (p->ainsn.boostable == 1 && !p->post_handler) {
> + /* Boost up -- we can execute copied instructions directly */
> + reset_current_kprobe();
> + regs->ip = (unsigned long)p->ainsn.insn;
> + preempt_enable_no_resched();
> + return 0;
> + }
> + return 1;
> +}
> +#else
> +static __always_inline int setup_boost(struct kprobe *p, struct pt_regs 
> *regs)
> +{
> + return 1;
> +}
> +#endif
> +
In the kernel __always_inline == inline, also I think it's nicer to only
have one function declaration, and then ifdef the body of the function.

Something like:

static inline int setup_boost(struct kprobe *p, struct pt_regs *regs)
{
#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
if (p->ainsn.boostable == 1 && !p->post_handler) {
/* Boost up -- we can execute copied instructions directly */
reset_current_kprobe();
regs->ip = (unsigned long)p->ainsn.insn;
preempt_enable_no_resched();
return 0;
}
#endif
return 1;
}

> 
> 
>  /*
>   * We have reentered the kprobe_handler(), since another probe was hit while
>   * within the handler. We save the original kprobes variables and just single
> @@ -449,29 +469,47 @@ void __kprobes arch_prepare_kretprobe(struct 
> kretprobe_instance *ri,
>  static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
>   struct kprobe_ctlblk *kcb)
>  {
> - if (kcb->kprobe_status == KPROBE_HIT_SS &&
> - *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> - regs->flags &= ~X86_EFLAGS_TF;
> - regs->flags |= kcb->kprobe_saved_flags;
> - return 0;
> + int ret = 0;
> +
> + switch (kcb->kprobe_status) {
> + case KPROBE_HIT_SSDONE:
>  #ifdef CONFIG_X86_64
> - } else if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
> - /* TODO: Provide re-entrancy from post_kprobes_handler() and
> -  * avoid exception stack corruption while single-stepping on
> + /* TODO: Provide re-entrancy from
> +  * post_kprobes_handler() and avoid exception
> +  * stack corruption while single-stepping on
>* the instruction of the new probe.
>*/
>   arch_disarm_kprobe(p);
>   regs->ip = (unsigned long)p->addr;
>   reset_current_kprobe();
> - return 1;
> + preempt_enable_no_resched();
> + ret = 1;
> + break;
>  #endif
> + case KPROBE_HIT_ACTIVE:
> + /* a probe has been hit inside a
> +  * user handler */
> + save_previous_kprobe(kcb);
> + set_current_kprobe(p, regs, kcb);
> + kprobes_inc_nmissed_count(p);
> + prepare_singlestep(p, regs);
> + kcb->kprobe_status = KPROBE_REENTER;
> + ret = 1;
> + break;
> + case KPROBE_HIT_SS:
> + if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> + regs->flags &= ~TF_MASK;
> + regs->flags |= kcb->kprobe_saved_flags;
> + } else {
> + /* BUG? */
> + }
> + break;
> + default:
> + /* impossible cases */
> + BUG();
>   }
> - save_previous_kprobe(kcb);
> - set_current_kprobe(p, regs, kcb);
> - kprobes_inc_nmissed_count(p);
> - prepare_singlestep(p, regs);
> - kcb->kprobe_status = KPROBE_REENTER;
> - return 1;
> +
> + return ret;
>  }
>  
>  /*
> @@ -480,82 +518,67 @@ static int __kprobes reenter_kprobe(struct kprobe *p, 
> struct pt_regs *regs,
>   */
>  static int __kprobes kprobe_handler(struct pt_regs *regs)
>  {
> - struct kprobe *p;
>   int ret = 0;
>   kprobe_opcode_t *addr;
> + struct kprobe *p, *cur;
>   struct kprobe_ctlblk *kcb;
>  
>   addr = 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-01 Thread Abhishek Sagar
Ingo Molnar wrote:
> hm, this patch does not apply to x86.git#mm, due to the fixes, 
> unifications and cleanups done there. Could you send a patch against -mm 
> or against x86.git? (see the tree-fetching instructions below) Thanks,
> 
>   Ingo
> 
> --{ x86.git instructions }-->
> 
>  git-clone 
> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git 
> linux-2.6.git
>  cd linux-2.6.git
>  git-branch x86
>  git-checkout x86
>  git-pull git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86.git 
> mm
> 
> (do subsequent pulls via "git-pull --force", as we frequently rebase the
> git tree. NOTE: this might override your own local changes, so do this
> only if you don't mind losing these changes in that tree.)
> 

Thanks for pointing me to the right tree. I have made some code re-arrangements 
in this one.

Signed-off-by: Abhishek Sagar <[EMAIL PROTECTED]>
Signed-off-by: Quentin Barnes <[EMAIL PROTECTED]>
---

diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index a72e02b..45adc8e 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -441,6 +441,26 @@ void __kprobes arch_prepare_kretprobe(struct 
kretprobe_instance *ri,
/* Replace the return addr with trampoline addr */
*sara = (unsigned long) _trampoline;
 }
+
+#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
+static __always_inline int setup_boost(struct kprobe *p, struct pt_regs *regs)
+{
+   if (p->ainsn.boostable == 1 && !p->post_handler) {
+   /* Boost up -- we can execute copied instructions directly */
+   reset_current_kprobe();
+   regs->ip = (unsigned long)p->ainsn.insn;
+   preempt_enable_no_resched();
+   return 0;
+   }
+   return 1;
+}
+#else
+static __always_inline int setup_boost(struct kprobe *p, struct pt_regs *regs)
+{
+   return 1;
+}
+#endif
+
 /*
  * We have reentered the kprobe_handler(), since another probe was hit while
  * within the handler. We save the original kprobes variables and just single
@@ -449,29 +469,47 @@ void __kprobes arch_prepare_kretprobe(struct 
kretprobe_instance *ri,
 static int __kprobes reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
 {
-   if (kcb->kprobe_status == KPROBE_HIT_SS &&
-   *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
-   regs->flags &= ~X86_EFLAGS_TF;
-   regs->flags |= kcb->kprobe_saved_flags;
-   return 0;
+   int ret = 0;
+
+   switch (kcb->kprobe_status) {
+   case KPROBE_HIT_SSDONE:
 #ifdef CONFIG_X86_64
-   } else if (kcb->kprobe_status == KPROBE_HIT_SSDONE) {
-   /* TODO: Provide re-entrancy from post_kprobes_handler() and
-* avoid exception stack corruption while single-stepping on
+   /* TODO: Provide re-entrancy from
+* post_kprobes_handler() and avoid exception
+* stack corruption while single-stepping on
 * the instruction of the new probe.
 */
arch_disarm_kprobe(p);
regs->ip = (unsigned long)p->addr;
reset_current_kprobe();
-   return 1;
+   preempt_enable_no_resched();
+   ret = 1;
+   break;
 #endif
+   case KPROBE_HIT_ACTIVE:
+   /* a probe has been hit inside a
+* user handler */
+   save_previous_kprobe(kcb);
+   set_current_kprobe(p, regs, kcb);
+   kprobes_inc_nmissed_count(p);
+   prepare_singlestep(p, regs);
+   kcb->kprobe_status = KPROBE_REENTER;
+   ret = 1;
+   break;
+   case KPROBE_HIT_SS:
+   if (*p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
+   regs->flags &= ~TF_MASK;
+   regs->flags |= kcb->kprobe_saved_flags;
+   } else {
+   /* BUG? */
+   }
+   break;
+   default:
+   /* impossible cases */
+   BUG();
}
-   save_previous_kprobe(kcb);
-   set_current_kprobe(p, regs, kcb);
-   kprobes_inc_nmissed_count(p);
-   prepare_singlestep(p, regs);
-   kcb->kprobe_status = KPROBE_REENTER;
-   return 1;
+
+   return ret;
 }
 
 /*
@@ -480,82 +518,67 @@ static int __kprobes reenter_kprobe(struct kprobe *p, 
struct pt_regs *regs,
  */
 static int __kprobes kprobe_handler(struct pt_regs *regs)
 {
-   struct kprobe *p;
int ret = 0;
kprobe_opcode_t *addr;
+   struct kprobe *p, *cur;
struct kprobe_ctlblk *kcb;
 
addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
+   if (*addr != BREAKPOINT_INSTRUCTION) {
+   /*
+* The breakpoint instruction was removed right
+* after we hit it.  Another cpu 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-01 Thread Masami Hiramatsu
Hello Abhishek,

Abhishek Sagar wrote:
> Harvey Harrison wrote:
>> Make the control flow of kprobe_handler more obvious.
>>
>> Collapse the separate if blocks/gotos with if/else blocks
>> this unifies the duplication of the check for a breakpoint
>> instruction race with another cpu.
> 
> This is a patch derived from kprobe_handler of the ARM kprobes
> port. This further simplifies the current x86 kprobe_handler.
> The resulting definition is smaller, more readable, has no
> goto's and contains only a single call to get_kprobe.

Thank you.
As far as I can see, this patch did more than cleanup.
Could you separate changing logic from cleanup and explain
why the logic should be changed?
That will be very useful for reviewers.

> Signed-off-by: Abhishek Sagar <[EMAIL PROTECTED]>
> Signed-off-by: Quentin Barnes <[EMAIL PROTECTED]>
> ---
> 
> diff --git a/arch/x86/kernel/kprobes_32.c b/arch/x86/kernel/kprobes_32.c
> index 3a020f7..5a626fd 100644
> --- a/arch/x86/kernel/kprobes_32.c
> +++ b/arch/x86/kernel/kprobes_32.c
> @@ -240,108 +240,110 @@ void __kprobes arch_prepare_kretprobe(struct 
> kretprobe_instance *ri,
>   *sara = (unsigned long) _trampoline;
>  }
>  
> +static __always_inline void setup_singlestep(struct kprobe *p,
> +  struct pt_regs *regs,
> +  struct kprobe_ctlblk *kcb)
> +{
> +#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
> + if (p->ainsn.boostable == 1 && !p->post_handler) {
> + /* Boost up -- we can execute copied instructions directly */
> + reset_current_kprobe();
> + regs->eip = (unsigned long)p->ainsn.insn;
> + preempt_enable_no_resched();
> + } else {
> + prepare_singlestep(p, regs);
> + kcb->kprobe_status = KPROBE_HIT_SS;
> + }
> +#else
> + prepare_singlestep(p, regs);
> + kcb->kprobe_status = KPROBE_HIT_SS;

please avoid code duplication.

> +#endif
> +}
> +
>  /*
>   * Interrupts are disabled on entry as trap3 is an interrupt gate and they
>   * remain disabled thorough out this function.
>   */
>  static int __kprobes kprobe_handler(struct pt_regs *regs)
>  {
> - struct kprobe *p;
>   int ret = 0;
>   kprobe_opcode_t *addr;
> + struct kprobe *p, *cur;
>   struct kprobe_ctlblk *kcb;
>  
>   addr = (kprobe_opcode_t *)(regs->eip - sizeof(kprobe_opcode_t));
> + if (*addr != BREAKPOINT_INSTRUCTION) {
> + /*
> +  * The breakpoint instruction was removed right
> +  * after we hit it.  Another cpu has removed
> +  * either a probepoint or a debugger breakpoint
> +  * at this address.  In either case, no further
> +  * handling of this interrupt is appropriate.
> +  * Back up over the (now missing) int3 and run
> +  * the original instruction.
> +  */
> + regs->eip -= sizeof(kprobe_opcode_t);
> + return 1;
> + }
>  
> - /*
> -  * We don't want to be preempted for the entire
> -  * duration of kprobe processing
> -  */
>   preempt_disable();
>   kcb = get_kprobe_ctlblk();
> + cur = kprobe_running();
> + p = get_kprobe(addr);
>  
> - /* Check we're not actually recursing */
> - if (kprobe_running()) {
> - p = get_kprobe(addr);
> - if (p) {
> - if (kcb->kprobe_status == KPROBE_HIT_SS &&
> - *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
> - regs->eflags &= ~TF_MASK;
> - regs->eflags |= kcb->kprobe_saved_eflags;
> - goto no_kprobe;
> - }
> - /* We have reentered the kprobe_handler(), since
> -  * another probe was hit while within the handler.
> -  * We here save the original kprobes variables and
> -  * just single step on the instruction of the new probe
> -  * without calling any user handlers.
> -  */
> - save_previous_kprobe(kcb);
> - set_current_kprobe(p, regs, kcb);
> - kprobes_inc_nmissed_count(p);
> - prepare_singlestep(p, regs);
> - kcb->kprobe_status = KPROBE_REENTER;
> - return 1;
> - } else {
> - if (*addr != BREAKPOINT_INSTRUCTION) {
> - /* The breakpoint instruction was removed by
> -  * another cpu right after we hit, no further
> -  * handling of this interrupt is appropriate
> -  */
> - regs->eip -= sizeof(kprobe_opcode_t);
> + if (p) {
> + if (cur) {
> + switch (kcb->kprobe_status) {
> + case 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2008-01-01 Thread Ingo Molnar

* Abhishek Sagar <[EMAIL PROTECTED]> wrote:

> Harvey Harrison wrote:
> > Make the control flow of kprobe_handler more obvious.
> > 
> > Collapse the separate if blocks/gotos with if/else blocks
> > this unifies the duplication of the check for a breakpoint
> > instruction race with another cpu.
> 
> This is a patch derived from kprobe_handler of the ARM kprobes port. 
> This further simplifies the current x86 kprobe_handler. The resulting 
> definition is smaller, more readable, has no goto's and contains only 
> a single call to get_kprobe.

hm, this patch does not apply to x86.git#mm, due to the fixes, 
unifications and cleanups done there. Could you send a patch against -mm 
or against x86.git? (see the tree-fetching instructions below) Thanks,

Ingo

--{ x86.git instructions }-->

 git-clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git 
linux-2.6.git
 cd linux-2.6.git
 git-branch x86
 git-checkout x86
 git-pull git://git.kernel.org/pub/scm/linux/kernel/git/x86/linux-2.6-x86.git mm

(do subsequent pulls via "git-pull --force", as we frequently rebase the
git tree. NOTE: this might override your own local changes, so do this
only if you don't mind losing these changes in that tree.)

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] x86: kprobes change kprobe_handler flow

2007-12-31 Thread Abhishek Sagar
Harvey Harrison wrote:
> Make the control flow of kprobe_handler more obvious.
> 
> Collapse the separate if blocks/gotos with if/else blocks
> this unifies the duplication of the check for a breakpoint
> instruction race with another cpu.

This is a patch derived from kprobe_handler of the ARM kprobes port. This 
further simplifies the current x86 kprobe_handler. The resulting definition is 
smaller, more readable, has no goto's and contains only a single call to 
get_kprobe.

Signed-off-by: Abhishek Sagar <[EMAIL PROTECTED]>
Signed-off-by: Quentin Barnes <[EMAIL PROTECTED]>
---

diff --git a/arch/x86/kernel/kprobes_32.c b/arch/x86/kernel/kprobes_32.c
index 3a020f7..5a626fd 100644
--- a/arch/x86/kernel/kprobes_32.c
+++ b/arch/x86/kernel/kprobes_32.c
@@ -240,108 +240,110 @@ void __kprobes arch_prepare_kretprobe(struct 
kretprobe_instance *ri,
*sara = (unsigned long) _trampoline;
 }
 
+static __always_inline void setup_singlestep(struct kprobe *p,
+struct pt_regs *regs,
+struct kprobe_ctlblk *kcb)
+{
+#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
+   if (p->ainsn.boostable == 1 && !p->post_handler) {
+   /* Boost up -- we can execute copied instructions directly */
+   reset_current_kprobe();
+   regs->eip = (unsigned long)p->ainsn.insn;
+   preempt_enable_no_resched();
+   } else {
+   prepare_singlestep(p, regs);
+   kcb->kprobe_status = KPROBE_HIT_SS;
+   }
+#else
+   prepare_singlestep(p, regs);
+   kcb->kprobe_status = KPROBE_HIT_SS;
+#endif
+}
+
 /*
  * Interrupts are disabled on entry as trap3 is an interrupt gate and they
  * remain disabled thorough out this function.
  */
 static int __kprobes kprobe_handler(struct pt_regs *regs)
 {
-   struct kprobe *p;
int ret = 0;
kprobe_opcode_t *addr;
+   struct kprobe *p, *cur;
struct kprobe_ctlblk *kcb;
 
addr = (kprobe_opcode_t *)(regs->eip - sizeof(kprobe_opcode_t));
+   if (*addr != BREAKPOINT_INSTRUCTION) {
+   /*
+* The breakpoint instruction was removed right
+* after we hit it.  Another cpu has removed
+* either a probepoint or a debugger breakpoint
+* at this address.  In either case, no further
+* handling of this interrupt is appropriate.
+* Back up over the (now missing) int3 and run
+* the original instruction.
+*/
+   regs->eip -= sizeof(kprobe_opcode_t);
+   return 1;
+   }
 
-   /*
-* We don't want to be preempted for the entire
-* duration of kprobe processing
-*/
preempt_disable();
kcb = get_kprobe_ctlblk();
+   cur = kprobe_running();
+   p = get_kprobe(addr);
 
-   /* Check we're not actually recursing */
-   if (kprobe_running()) {
-   p = get_kprobe(addr);
-   if (p) {
-   if (kcb->kprobe_status == KPROBE_HIT_SS &&
-   *p->ainsn.insn == BREAKPOINT_INSTRUCTION) {
-   regs->eflags &= ~TF_MASK;
-   regs->eflags |= kcb->kprobe_saved_eflags;
-   goto no_kprobe;
-   }
-   /* We have reentered the kprobe_handler(), since
-* another probe was hit while within the handler.
-* We here save the original kprobes variables and
-* just single step on the instruction of the new probe
-* without calling any user handlers.
-*/
-   save_previous_kprobe(kcb);
-   set_current_kprobe(p, regs, kcb);
-   kprobes_inc_nmissed_count(p);
-   prepare_singlestep(p, regs);
-   kcb->kprobe_status = KPROBE_REENTER;
-   return 1;
-   } else {
-   if (*addr != BREAKPOINT_INSTRUCTION) {
-   /* The breakpoint instruction was removed by
-* another cpu right after we hit, no further
-* handling of this interrupt is appropriate
-*/
-   regs->eip -= sizeof(kprobe_opcode_t);
+   if (p) {
+   if (cur) {
+   switch (kcb->kprobe_status) {
+   case KPROBE_HIT_ACTIVE:
+   case KPROBE_HIT_SSDONE:
+   /* a probe has been hit inside a
+* user handler */
+   save_previous_kprobe(kcb);
+   set_current_kprobe(p, regs, kcb);
+   

Re: [PATCH] x86: kprobes change kprobe_handler flow

2007-12-30 Thread Ingo Molnar

* Masami Hiramatsu <[EMAIL PROTECTED]> wrote:

> And also, I prefer "return 1" to "{ret = 1; goto out;}" for 
> simplicity.

it seems that the code already uses 'ret', and sets it to 1 in certain 
cases - in that light it's a tiny bit cleaner to just have a single 
return code flow.

Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] x86: kprobes change kprobe_handler flow

2007-12-30 Thread Masami Hiramatsu
Hello Harvey,

Thank you for your great works!

Harvey Harrison wrote:
> Make the control flow of kprobe_handler more obvious.
> 
> Collapse the separate if blocks/gotos with if/else blocks
> this unifies the duplication of the check for a breakpoint
> instruction race with another cpu.

I agree it is good to unify the duplications of
breakpoint checking and get_kprobe() calling.

> 
> Create two jump targets:
>   preempt_out: re-enables preemption before returning ret
>   out: only returns ret

However, I'm not sure we should change "no_kprobe".
That label is commonly used in arch/*/kernel/kprobes.c.

And also, I prefer "return 1" to "{ret = 1; goto out;}"
for simplicity.
Or, how about initializing "ret" as 1 instead of 0?
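
(Illustrative sketch of that option, not a posted hunk: start with ret = 1 and let only
the "not ours" paths clear it. The body is heavily elided and the combined test is a
simplification.)

static int __kprobes kprobe_handler(struct pt_regs *regs)
{
	int ret = 1;		/* assume the breakpoint is ours */
	kprobe_opcode_t *addr;

	addr = (kprobe_opcode_t *)(regs->ip - sizeof(kprobe_opcode_t));
	/* ... */
	if (!get_kprobe(addr) && !kprobe_running())
		ret = 0;	/* not one of ours: let the kernel handle it */
	/* ... */
	return ret;
}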

Ananth, Jim,
I'd like to hear your comments on it.

> Signed-off-by: Harvey Harrison <[EMAIL PROTECTED]>
> ---
> Masami, noticed a small bug in the previous version in the !p
> case when the breakpoint was the kernel's.  Please review this
> version.
> 
>  arch/x86/kernel/kprobes.c |   60 
> +
>  1 files changed, 28 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index 4e33329..f8c7ac1 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -480,32 +480,28 @@ static int __kprobes kprobe_handler(struct pt_regs 
> *regs)
>   preempt_disable();
>   kcb = get_kprobe_ctlblk();
>  
> - /* Check we're not actually recursing */
> - if (kprobe_running()) {
> - p = get_kprobe(addr);
> - if (p) {
> + p = get_kprobe(addr);
> + if (p) {
> + /* Check we're not actually recursing */
> + if (kprobe_running()) {
>   ret = reenter_kprobe(p, regs, kcb);
>   if (kcb->kprobe_status == KPROBE_REENTER)
> - return 1;
> + {
> + ret = 1;
> + goto out;

I think "return 1" is better.

> + }
> + goto preempt_out;
>   } else {
> - if (*addr != BREAKPOINT_INSTRUCTION) {
> - /* The breakpoint instruction was removed by
> -  * another cpu right after we hit, no further
> -  * handling of this interrupt is appropriate
> -  */
> - regs->ip = (unsigned long)addr;
> + set_current_kprobe(p, regs, kcb);
> + kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> + if (p->pre_handler && p->pre_handler(p, regs))
> + {
> + /* handler set things up, skip ss setup */
>   ret = 1;
> - goto no_kprobe;
> + goto out;

Ditto.

>   }
> - p = __get_cpu_var(current_kprobe);
> - if (p->break_handler && p->break_handler(p, regs))
> - goto ss_probe;
>   }
> - goto no_kprobe;
> - }
> -
> - p = get_kprobe(addr);
> - if (!p) {
> + } else {

I think you'd better move "!p" block forward, because
this block means "relatively rare" cases. (sure, I know jprobe uses this block.)

>   if (*addr != BREAKPOINT_INSTRUCTION) {
>   /*
>* The breakpoint instruction was removed right
> @@ -518,34 +514,34 @@ static int __kprobes kprobe_handler(struct pt_regs 
> *regs)
>*/
>   regs->ip = (unsigned long)addr;
>   ret = 1;
> + goto preempt_out;
> + }
> + if (kprobe_running()) {
> + p = __get_cpu_var(current_kprobe);
> + if (p->break_handler && p->break_handler(p, regs))
> + goto ss_probe;
>   }
>   /* Not one of ours: let kernel handle it */
> - goto no_kprobe;
> + goto preempt_out;
>   }
>  
> - set_current_kprobe(p, regs, kcb);
> - kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> -
> - if (p->pre_handler && p->pre_handler(p, regs))
> - /* handler has already set things up, so skip ss setup */
> - return 1;
> -
>  ss_probe:
> + ret = 1;
>  #if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
>   if (p->ainsn.boostable == 1 && !p->post_handler) {
>   /* Boost up -- we can execute copied instructions directly */
>   reset_current_kprobe();
>   regs->ip = (unsigned long)p->ainsn.insn;
> - preempt_enable_no_resched();
> - return 1;
> + goto preempt_out;
>   }
>  #endif
>   prepare_singlestep(p, regs);
>   kcb->kprobe_status = KPROBE_HIT_SS;
> - return 1;
> + goto out;

I think "return 1" is better.

>  
> -no_kprobe:
> +preempt_out:
>   preempt_enable_no_resched();
> +out:
>   return ret;
>  }
> 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2007-12-30 Thread Masami Hiramatsu
Hello Harvey,

Thank you for your great works!

Harvey Harrison wrote:
> Make the control flow of kprobe_handler more obvious.
> 
> Collapse the separate if blocks/gotos with if/else blocks
> this unifies the duplication of the check for a breakpoint
> instruction race with another cpu.

I agree it is good to unify the duplications of
breakpoint checking and get_kprobe() calling.

> 
> Create two jump targets:
>   preempt_out: re-enables preemption before returning ret
>   out: only returns ret

However, I'm not sure we should change "no_kprobe".
That label is commonly used in arch/*/kernel/kprobes.c.

And also, I prefer "return 1" to "{ret = 1; goto out;}"
for simplicity.
Or, how about initializing "ret" as 1 instead of 0?

Ananth, Jim,
I'd like to hear your comments on it.

> Signed-off-by: Harvey Harrison <[EMAIL PROTECTED]>
> ---
> Masami, noticed a small bug in the previous version in the !p
> case when the breakpoint was the kernel's.  Please review this
> version.
> 
>  arch/x86/kernel/kprobes.c |   60 
> +
>  1 files changed, 28 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index 4e33329..f8c7ac1 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -480,32 +480,28 @@ static int __kprobes kprobe_handler(struct pt_regs 
> *regs)
>   preempt_disable();
>   kcb = get_kprobe_ctlblk();
>  
> - /* Check we're not actually recursing */
> - if (kprobe_running()) {
> - p = get_kprobe(addr);
> - if (p) {
> + p = get_kprobe(addr);
> + if (p) {
> + /* Check we're not actually recursing */
> + if (kprobe_running()) {
>   ret = reenter_kprobe(p, regs, kcb);
>   if (kcb->kprobe_status == KPROBE_REENTER)
> - return 1;
> + {
> + ret = 1;
> + goto out;

I think "return 1" is better.

> + }
> + goto preempt_out;
>   } else {
> - if (*addr != BREAKPOINT_INSTRUCTION) {
> - /* The breakpoint instruction was removed by
> -  * another cpu right after we hit, no further
> -  * handling of this interrupt is appropriate
> -  */
> - regs->ip = (unsigned long)addr;
> + set_current_kprobe(p, regs, kcb);
> + kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> + if (p->pre_handler && p->pre_handler(p, regs))
> + {
> + /* handler set things up, skip ss setup */
>   ret = 1;
> - goto no_kprobe;
> + goto out;

Ditto.

>   }
> - p = __get_cpu_var(current_kprobe);
> - if (p->break_handler && p->break_handler(p, regs))
> - goto ss_probe;
>   }
> - goto no_kprobe;
> - }
> -
> - p = get_kprobe(addr);
> - if (!p) {
> + } else {

I think you'd better move "!p" block forward, because
this block means "relatively rare" cases. (sure, I know jprobe uses this block.)

>   if (*addr != BREAKPOINT_INSTRUCTION) {
>   /*
>* The breakpoint instruction was removed right
> @@ -518,34 +514,34 @@ static int __kprobes kprobe_handler(struct pt_regs 
> *regs)
>*/
>   regs->ip = (unsigned long)addr;
>   ret = 1;
> + goto preempt_out;
> + }
> + if (kprobe_running()) {
> + p = __get_cpu_var(current_kprobe);
> + if (p->break_handler && p->break_handler(p, regs))
> + goto ss_probe;
>   }
>   /* Not one of ours: let kernel handle it */
> - goto no_kprobe;
> + goto preempt_out;
>   }
>  
> - set_current_kprobe(p, regs, kcb);
> - kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> -
> - if (p->pre_handler && p->pre_handler(p, regs))
> - /* handler has already set things up, so skip ss setup */
> - return 1;
> -
>  ss_probe:
> + ret = 1;
>  #if !defined(CONFIG_PREEMPT) || defined(CONFIG_PM)
>   if (p->ainsn.boostable == 1 && !p->post_handler) {
>   /* Boost up -- we can execute copied instructions directly */
>   reset_current_kprobe();
>   regs->ip = (unsigned long)p->ainsn.insn;
> - preempt_enable_no_resched();
> - return 1;
> + goto preempt_out;
>   }
>  #endif
>   prepare_singlestep(p, regs);
>   kcb->kprobe_status = KPROBE_HIT_SS;
> - return 1;
> + goto out;

I think "return 1" is better.

>  
> -no_kprobe:
> +preempt_out:
>   preempt_enable_no_resched();
> +out:
>   return ret;
>  }
> 

Re: [PATCH] x86: kprobes change kprobe_handler flow

2007-12-30 Thread Ingo Molnar

* Masami Hiramatsu <[EMAIL PROTECTED]> wrote:

> And also, I prefer "return 1" to "{ret = 1; goto out;}" for
> simplicity.

it seems that the code already uses 'ret', and sets it to 1 in certain 
cases - in that light it's a tiny bit cleaner to just have a single 
return code flow.
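
A stripped-down illustration of that single-exit flow (toy code only, not the kernel function; grab() and release() stand in for preempt_disable() and preempt_enable_no_resched()):

#include <stdio.h>

static int depth;
static void grab(void)    { depth++; }	/* stand-in for preempt_disable() */
static void release(void) { depth--; }	/* stand-in for preempt_enable_no_resched() */

static int toy_handler(int ours, int handler_took_over)
{
	int ret = 0;

	grab();
	if (!ours)
		goto preempt_out;	/* not ours: release on the shared exit */
	ret = 1;
	if (handler_took_over)
		goto out;		/* handler keeps "preemption" disabled */
preempt_out:
	release();
out:
	return ret;
}

int main(void)
{
	int a = toy_handler(1, 0);	/* ours, normal path */
	int b = toy_handler(0, 0);	/* not ours */

	printf("%d %d depth=%d\n", a, b, depth);
	return 0;
}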

Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/