Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error injectable event is on function entry

2018-01-08 Thread Masami Hiramatsu
On Sun, 7 Jan 2018 19:01:57 -0800
Alexei Starovoitov  wrote:

> On 12/29/17 12:20 AM, Masami Hiramatsu wrote:
> >> Please run Josef's test in the !ftrace setup.
> > Yes, I'll add the result of the test case.
> 
> If Josef's test passes in the !ftrace config,
> please resend your patches.
> I think patches 2 and 3 were nice simplifications,
> and patch 1 is good too if it passes the test.
> Would be great to get them in for this merge window.

OK, I'll try to do that.

Thanks!

> 
> Thanks!
> 


-- 
Masami Hiramatsu 


Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error injectable event is on function entry

2017-12-29 Thread Masami Hiramatsu
On Thu, 28 Dec 2017 17:03:24 -0800
Alexei Starovoitov  wrote:

> On 12/28/17 12:20 AM, Masami Hiramatsu wrote:
> > On Wed, 27 Dec 2017 20:32:07 -0800
> > Alexei Starovoitov  wrote:
> >
> >> On 12/27/17 8:16 PM, Steven Rostedt wrote:
> >>> On Wed, 27 Dec 2017 19:45:42 -0800
> >>> Alexei Starovoitov  wrote:
> >>>
>  I don't think that's the case. My reading of current
>  trace_kprobe_ftrace() -> arch_check_ftrace_location()
>  is that it will not be true for old mcount case.
> >>>
> >>> In the old mcount case, you can't use ftrace to return without calling
> >>> the function. That is, no modification of the return ip, unless you
> >>> created a trampoline that could handle arbitrary stack frames, and
> >>> remove them from the stack before returning back to the function.
> >>
> >> correct. I was saying that trace_kprobe_ftrace() won't let us do
> >> bpf_override_return with old mcount.
> >
> > No, trace_kprobe_ftrace() just checks whether the given address is
> > managed by ftrace; you can see arch_check_ftrace_location() in
> > kernel/kprobes.c.
> >
> > FYI, CONFIG_KPROBES_ON_FTRACE depends on DYNAMIC_FTRACE_WITH_REGS, and
> > DYNAMIC_FTRACE_WITH_REGS doesn't depend on CC_USING_FENTRY.
> > This means that if you compile a kernel with an old gcc and enable
> > DYNAMIC_FTRACE, kprobes uses ftrace on the mcount address, which is NOT
> > the entry point of the target function.
> 
> ok. fair enough. I think we can gate the feature to !mcount only.
> 
> > On the other hand, the IP-changing feature was originally implemented
> > by kprobes with int3 (sw breakpoint). This means that if you put a
> > kprobe at the correct address (the entry address of the function), you
> > can hijack the function, as jprobe did.
> >
>  As far as the rest of your arguments, it very much puzzles me that
>  you claim that this patch is supposed to work based on historical
>  reasoning whereas you did NOT test it.
> >>>
> >>> I believe that Masami is saying that the modification of the IP from
> >>> kprobes has been very well tested. But I'm guessing that you still want
> >>> a test case for using kprobes in this particular instance. It's not the
> >>> implementation of modifying the IP that you are worried about, but the
> >>> implementation of BPF using it in this case. Right?
> >>
> >> exactly. No doubt that old code works.
> >> But it doesn't mean that bpf_override_return() will continue to
> >> work in kprobes that are not ftrace based.
> >> I suspect Josef's existing test case will cover this situation.
> >> Probably only special .config is needed to disable ftrace, so
> >> "kprobe on entry but not ftrace" check will kick in.
> >
> > Right. If you need to test it, you can run Josef's test case without
> > CONFIG_DYNAMIC_FTRACE.
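For reference, a hypothetical build sequence for such a !ftrace test kernel (config names are those discussed in this thread; `scripts/config` is the helper in the kernel tree, and the test meant is Josef's bpf_override_return sample — the exact invocation may differ in your tree):

```shell
# Sketch only: disable dynamic ftrace while keeping kprobe-based BPF override.
scripts/config --disable DYNAMIC_FTRACE \
               --enable KPROBES \
               --enable BPF_EVENTS \
               --enable BPF_KPROBE_OVERRIDE
make olddefconfig
make -j"$(nproc)"
# Boot the resulting kernel, then run Josef's bpf_override_return test from
# samples/bpf (added in "[PATCH v10 4/5] samples/bpf: add a test for
# bpf_override_return").
```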
> 
> It should be obvious that the person who submits the patch
> must run the tests.
> 
> >> But I didn't get an impression that this situation was tested.
> >> Instead I see only logical reasoning that it's _supposed_ to work.
> >> That's not enough.
> >
> > OK, so are you just asking me to run samples/bpf?
> 
> Please run Josef's test in the !ftrace setup.

Yes, I'll add the result of the test case.

Thank you,


-- 
Masami Hiramatsu 


Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error injectable event is on function entry

2017-12-27 Thread Masami Hiramatsu
On Wed, 27 Dec 2017 19:45:42 -0800
Alexei Starovoitov  wrote:

> On 12/27/17 6:34 PM, Masami Hiramatsu wrote:
> > On Wed, 27 Dec 2017 14:46:24 -0800
> > Alexei Starovoitov  wrote:
> >
> >> On 12/26/17 9:56 PM, Masami Hiramatsu wrote:
> >>> On Tue, 26 Dec 2017 17:57:32 -0800
> >>> Alexei Starovoitov  wrote:
> >>>
>  On Tue, Dec 26, 2017 at 04:46:59PM +0900, Masami Hiramatsu wrote:
> > Check whether an error-injectable event is on a function entry or not.
> > Currently it checks whether the event is an ftrace-based kprobe or not,
> > but that is wrong. It should check whether the event is on the entry
> > of the target function. Since error injection overrides a function
> > to just return with a modified return value, that operation must
> > be done before the target function starts making its stackframe.
> >
> > As a side effect, bpf error injection no longer needs to depend on
> > the function tracer. It can work with sw-breakpoint based kprobe
> > events too.
> >
> > Signed-off-by: Masami Hiramatsu 
> > ---
> >  kernel/trace/Kconfig|2 --
> >  kernel/trace/bpf_trace.c|6 +++---
> >  kernel/trace/trace_kprobe.c |8 +---
> >  kernel/trace/trace_probe.h  |   12 ++--
> >  4 files changed, 14 insertions(+), 14 deletions(-)
> >
> > diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> > index ae3a2d519e50..6400e1bf97c5 100644
> > --- a/kernel/trace/Kconfig
> > +++ b/kernel/trace/Kconfig
> > @@ -533,9 +533,7 @@ config FUNCTION_PROFILER
> >  config BPF_KPROBE_OVERRIDE
> > bool "Enable BPF programs to override a kprobed function"
> > depends on BPF_EVENTS
> > -   depends on KPROBES_ON_FTRACE
> > depends on HAVE_KPROBE_OVERRIDE
> > -   depends on DYNAMIC_FTRACE_WITH_REGS
> > default n
> > help
> >  Allows BPF to override the execution of a probed function and
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index f6d2327ecb59..d663660f8392 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -800,11 +800,11 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
> > int ret = -EEXIST;
> >
> > /*
> > -* Kprobe override only works for ftrace based kprobes, and only if they
> > -* are on the opt-in list.
> > +* Kprobe override only works if they are on the function entry,
> > +* and only if they are on the opt-in list.
> >  */
> > if (prog->kprobe_override &&
> > -   (!trace_kprobe_ftrace(event->tp_event) ||
> > +   (!trace_kprobe_on_func_entry(event->tp_event) ||
> >  !trace_kprobe_error_injectable(event->tp_event)))
> > return -EINVAL;
> >
> > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > index 91f4b57dab82..265e3e27e8dc 100644
> > --- a/kernel/trace/trace_kprobe.c
> > +++ b/kernel/trace/trace_kprobe.c
> > @@ -88,13 +88,15 @@ static nokprobe_inline unsigned long trace_kprobe_nhit(struct trace_kprobe *tk)
> > return nhit;
> >  }
> >
> > -int trace_kprobe_ftrace(struct trace_event_call *call)
> > +bool trace_kprobe_on_func_entry(struct trace_event_call *call)
> >  {
> > struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
> > -   return kprobe_ftrace(&tk->rp.kp);
> > +
> > +   return kprobe_on_func_entry(tk->rp.kp.addr, tk->rp.kp.symbol_name,
> > +   tk->rp.kp.offset);
> 
>  That would be nice, but did you test this?
> >>>
> >>> Yes, because the jprobe, which was the only official user of modifying
> >>> the execution path using kprobes, checked it the same way. (and
> >>> kretprobe also does it)
> >>>
>  My understanding that kprobe will restore all regs and
>  here we need to override return ip _and_ value.
> >>>
> >>> yes, no problem. kprobes restore all regs from pt_regs, including regs->ip.
> >>>
>  Could you add a patch with the test the way Josef did
>  or describe the steps to test this new mode?
> >>>
> >>> Do you mean the patch below? If so, it should work without any change.
> >>>
> >>>  [PATCH v10 4/5] samples/bpf: add a test for bpf_override_return
> >>
> >> yeah. I expect bpf_override_return test to work as-is.
> >> I'm asking for the test for new functionality added by this patch.
> >> In particular kprobe on func entry without ftrace.
> >> How did you test it?
> >
> > This function is used in kretprobe and jprobe. Jprobe was the user of
> > "modifying instruction pointer to another function" in kprobes.
> > If it doesn't work, jprobe also doesn't work, this means you can not
> > modify IP by 

Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error injectable event is on function entry

2017-12-27 Thread Alexei Starovoitov

On 12/27/17 6:34 PM, Masami Hiramatsu wrote:

On Wed, 27 Dec 2017 14:46:24 -0800
Alexei Starovoitov  wrote:


On 12/26/17 9:56 PM, Masami Hiramatsu wrote:

On Tue, 26 Dec 2017 17:57:32 -0800
Alexei Starovoitov  wrote:


On Tue, Dec 26, 2017 at 04:46:59PM +0900, Masami Hiramatsu wrote:

Check whether an error-injectable event is on a function entry or not.
Currently it checks whether the event is an ftrace-based kprobe or not,
but that is wrong. It should check whether the event is on the entry
of the target function. Since error injection overrides a function
to just return with a modified return value, that operation must
be done before the target function starts making its stackframe.

As a side effect, bpf error injection no longer needs to depend on
the function tracer. It can work with sw-breakpoint based kprobe
events too.

Signed-off-by: Masami Hiramatsu 
---
 kernel/trace/Kconfig|2 --
 kernel/trace/bpf_trace.c|6 +++---
 kernel/trace/trace_kprobe.c |8 +---
 kernel/trace/trace_probe.h  |   12 ++--
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index ae3a2d519e50..6400e1bf97c5 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -533,9 +533,7 @@ config FUNCTION_PROFILER
 config BPF_KPROBE_OVERRIDE
bool "Enable BPF programs to override a kprobed function"
depends on BPF_EVENTS
-   depends on KPROBES_ON_FTRACE
depends on HAVE_KPROBE_OVERRIDE
-   depends on DYNAMIC_FTRACE_WITH_REGS
default n
help
 Allows BPF to override the execution of a probed function and
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index f6d2327ecb59..d663660f8392 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -800,11 +800,11 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
int ret = -EEXIST;

/*
-* Kprobe override only works for ftrace based kprobes, and only if they
-* are on the opt-in list.
+* Kprobe override only works if they are on the function entry,
+* and only if they are on the opt-in list.
 */
if (prog->kprobe_override &&
-   (!trace_kprobe_ftrace(event->tp_event) ||
+   (!trace_kprobe_on_func_entry(event->tp_event) ||
 !trace_kprobe_error_injectable(event->tp_event)))
return -EINVAL;

diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 91f4b57dab82..265e3e27e8dc 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -88,13 +88,15 @@ static nokprobe_inline unsigned long trace_kprobe_nhit(struct trace_kprobe *tk)
return nhit;
 }

-int trace_kprobe_ftrace(struct trace_event_call *call)
+bool trace_kprobe_on_func_entry(struct trace_event_call *call)
 {
struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
-   return kprobe_ftrace(&tk->rp.kp);
+
+   return kprobe_on_func_entry(tk->rp.kp.addr, tk->rp.kp.symbol_name,
+   tk->rp.kp.offset);


That would be nice, but did you test this?


Yes, because the jprobe, which was the only official user of modifying the
execution path using kprobes, checked it the same way. (and kretprobe also does it)


My understanding that kprobe will restore all regs and
here we need to override return ip _and_ value.


yes, no problem. kprobes restore all regs from pt_regs, including regs->ip.


Could you add a patch with the test the way Josef did
or describe the steps to test this new mode?


Do you mean the patch below? If so, it should work without any change.

 [PATCH v10 4/5] samples/bpf: add a test for bpf_override_return


yeah. I expect bpf_override_return test to work as-is.
I'm asking for the test for new functionality added by this patch.
In particular kprobe on func entry without ftrace.
How did you test it?


This function is used by kretprobe and jprobe. Jprobe was the user of
"modifying the instruction pointer to another function" in kprobes.
If this doesn't work, jprobe doesn't work either, which means you cannot
modify the IP via kprobes at all.
Anyway, until linux-4.13, that was well tested by the kprobe smoke test.


and how can I repeat the test?
I'm still not sure that it works correctly.


That works correctly because it checks, using kallsyms, that the given
address is on the entry point (the 1st instruction) of a function.

The reason why I made another flag for ftrace was that there are 2 modes
of ftrace dynamic instrumentation, fentry and mcount.
With the new fentry mode, ftrace is put on the first instruction
of the function, so it works as you expected.
With traditional gcc mcount, ftrace is called after the call frame for
_mcount() has been made. This means that if you modify the ip there, it
will not work, or will cause trouble, because the _mcount call frame is
still on the stack.

So the current ftrace-based check doesn't always work; it depends on the case.
Of course, in most 

Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error injectable event is on function entry

2017-12-26 Thread Masami Hiramatsu
On Tue, 26 Dec 2017 17:57:32 -0800
Alexei Starovoitov  wrote:

> On Tue, Dec 26, 2017 at 04:46:59PM +0900, Masami Hiramatsu wrote:
> > Check whether error injectable event is on function entry or not.
> > Currently it checks the event is ftrace-based kprobes or not,
> > but that is wrong. It should check if the event is on the entry
> > of target function. Since error injection will override a function
> > to just return with modified return value, that operation must
> > be done before the target function starts making stackframe.
> > 
> > As a side effect, bpf error injection is no need to depend on
> > function-tracer. It can work with sw-breakpoint based kprobe
> > events too.
> > 
> > Signed-off-by: Masami Hiramatsu 
> > ---
> >  kernel/trace/Kconfig|2 --
> >  kernel/trace/bpf_trace.c|6 +++---
> >  kernel/trace/trace_kprobe.c |8 +---
> >  kernel/trace/trace_probe.h  |   12 ++--
> >  4 files changed, 14 insertions(+), 14 deletions(-)
> > 
> > diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> > index ae3a2d519e50..6400e1bf97c5 100644
> > --- a/kernel/trace/Kconfig
> > +++ b/kernel/trace/Kconfig
> > @@ -533,9 +533,7 @@ config FUNCTION_PROFILER
> >  config BPF_KPROBE_OVERRIDE
> > bool "Enable BPF programs to override a kprobed function"
> > depends on BPF_EVENTS
> > -   depends on KPROBES_ON_FTRACE
> > depends on HAVE_KPROBE_OVERRIDE
> > -   depends on DYNAMIC_FTRACE_WITH_REGS
> > default n
> > help
> >  Allows BPF to override the execution of a probed function and
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index f6d2327ecb59..d663660f8392 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -800,11 +800,11 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
> > int ret = -EEXIST;
> >  
> > /*
> > -* Kprobe override only works for ftrace based kprobes, and only if they
> > -* are on the opt-in list.
> > +* Kprobe override only works if they are on the function entry,
> > +* and only if they are on the opt-in list.
> >  */
> > if (prog->kprobe_override &&
> > -   (!trace_kprobe_ftrace(event->tp_event) ||
> > +   (!trace_kprobe_on_func_entry(event->tp_event) ||
> >  !trace_kprobe_error_injectable(event->tp_event)))
> > return -EINVAL;
> >  
> > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > index 91f4b57dab82..265e3e27e8dc 100644
> > --- a/kernel/trace/trace_kprobe.c
> > +++ b/kernel/trace/trace_kprobe.c
> > @@ -88,13 +88,15 @@ static nokprobe_inline unsigned long trace_kprobe_nhit(struct trace_kprobe *tk)
> > return nhit;
> >  }
> >  
> > -int trace_kprobe_ftrace(struct trace_event_call *call)
> > +bool trace_kprobe_on_func_entry(struct trace_event_call *call)
> >  {
> > struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
> > -   return kprobe_ftrace(&tk->rp.kp);
> > +
> > +   return kprobe_on_func_entry(tk->rp.kp.addr, tk->rp.kp.symbol_name,
> > +   tk->rp.kp.offset);
> 
> That would be nice, but did you test this?

Yes, because the jprobe, which was the only official user of modifying the
execution path via kprobes, performed the same check. (and kretprobe also does it)

> My understanding is that kprobe will restore all regs and
> here we need to override the return ip _and_ value.

yes, no problem. kprobe restores all regs from pt_regs, including regs->ip.

> Could you add a patch with the test the way Josef did
> or describe the steps to test this new mode?

Would you mean the patch below? If so, it should work without any change.

 [PATCH v10 4/5] samples/bpf: add a test for bpf_override_return

Thank you,

-- 
Masami Hiramatsu 


Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error injectable event is on function entry

2017-12-26 Thread Alexei Starovoitov
On Tue, Dec 26, 2017 at 04:46:59PM +0900, Masami Hiramatsu wrote:
> Check whether an error-injectable event is on a function entry or not.
> Currently it checks whether the event is an ftrace-based kprobe,
> but that is wrong. It should check whether the event is on the entry
> of the target function. Since error injection overrides a function
> to just return with a modified return value, that operation must
> be done before the target function starts building its stack frame.
> 
> As a side effect, bpf error injection no longer needs to depend on
> the function tracer. It can work with sw-breakpoint based kprobe
> events too.
> 
> Signed-off-by: Masami Hiramatsu 
> ---
>  kernel/trace/Kconfig|2 --
>  kernel/trace/bpf_trace.c|6 +++---
>  kernel/trace/trace_kprobe.c |8 +---
>  kernel/trace/trace_probe.h  |   12 ++--
>  4 files changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> index ae3a2d519e50..6400e1bf97c5 100644
> --- a/kernel/trace/Kconfig
> +++ b/kernel/trace/Kconfig
> @@ -533,9 +533,7 @@ config FUNCTION_PROFILER
>  config BPF_KPROBE_OVERRIDE
>   bool "Enable BPF programs to override a kprobed function"
>   depends on BPF_EVENTS
> - depends on KPROBES_ON_FTRACE
>   depends on HAVE_KPROBE_OVERRIDE
> - depends on DYNAMIC_FTRACE_WITH_REGS
>   default n
>   help
>Allows BPF to override the execution of a probed function and
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index f6d2327ecb59..d663660f8392 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -800,11 +800,11 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
>   int ret = -EEXIST;
>  
>   /*
> -  * Kprobe override only works for ftrace based kprobes, and only if they
> -  * are on the opt-in list.
> +  * Kprobe override only works if they are on the function entry,
> +  * and only if they are on the opt-in list.
>*/
>   if (prog->kprobe_override &&
> - (!trace_kprobe_ftrace(event->tp_event) ||
> + (!trace_kprobe_on_func_entry(event->tp_event) ||
>!trace_kprobe_error_injectable(event->tp_event)))
>   return -EINVAL;
>  
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 91f4b57dab82..265e3e27e8dc 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -88,13 +88,15 @@ static nokprobe_inline unsigned long 
> trace_kprobe_nhit(struct trace_kprobe *tk)
>   return nhit;
>  }
>  
> -int trace_kprobe_ftrace(struct trace_event_call *call)
> +bool trace_kprobe_on_func_entry(struct trace_event_call *call)
>  {
>   struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
> - return kprobe_ftrace(&tk->rp.kp);
> +
> + return kprobe_on_func_entry(tk->rp.kp.addr, tk->rp.kp.symbol_name,
> + tk->rp.kp.offset);

That would be nice, but did you test this?
My understanding is that kprobe will restore all regs and
here we need to override the return ip _and_ value.
Could you add a patch with the test the way Josef did
or describe the steps to test this new mode?