On 3/12/15 9:23 AM, Steven Rostedt wrote:

On Thu, 12 Mar 2015 09:18:34 -0700
Alexei Starovoitov wrote:

> > You've so far tried very hard to not get into tracing; and then you call
> > rcu_read_lock() :-)
> >
> > So either document why this isn't a problem, provide
> > rcu_read_lock_notrace() or switch to RCU-sched and thereby avoid the [...]

On Thu, 12 Mar 2015 09:43:54 -0700
Alexei Starovoitov wrote:

> sure. consider it done. should I respin right away or you can review
> the rest?

Hold off for a bit, to let the review continue.

-- Steve

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
On 3/12/15 8:15 AM, Peter Zijlstra wrote:

On Tue, Mar 10, 2015 at 09:18:48PM -0700, Alexei Starovoitov wrote:

> +unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
> +{
> +	unsigned int ret;
> +	int cpu;
> +
> +	if (in_nmi()) /* not supported yet */
> +		return 1;
> +
> +	preempt_disable_notrace();

You've so far tried very hard to not get into tracing; and then you call
rcu_read_lock() :-)

So either document why this isn't a problem, provide
rcu_read_lock_notrace() or switch to RCU-sched and thereby avoid the [...]
User interface:

	struct perf_event_attr attr = {.type = PERF_TYPE_TRACEPOINT, .config = event_id, ...};
	event_fd = perf_event_open(&attr, ...);
	ioctl(event_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);

prog_fd is a file descriptor associated with a previously loaded BPF program.
event_id is an ID of the created tracepoint event.