On Tue, Mar 10, 2015 at 09:18:48PM -0700, Alexei Starovoitov wrote:
> +unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
> +{
> +     unsigned int ret;
> +     int cpu;
> +
> +     if (in_nmi()) /* not supported yet */
> +             return 1;
> +
> +     preempt_disable_notrace();
> +
> +     cpu = raw_smp_processor_id();
> +     if (unlikely(per_cpu(bpf_prog_active, cpu)++ != 0)) {
> +             /* since some bpf program is already running on this cpu,
> +              * don't call into another bpf program (same or different)
> +              * and don't send kprobe event into ring-buffer,
> +              * so return zero here
> +              */
> +             ret = 0;
> +             goto out;
> +     }
> +
> +     rcu_read_lock();

You've so far tried very hard not to get into tracing; and then you call
rcu_read_lock() :-)

So either document why this isn't a problem, provide
rcu_read_lock_notrace(), or switch to RCU-sched and thereby avoid the
problem.
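
For illustration only, not a tested fix: since preempt_disable_notrace()
already puts the function in an RCU-sched read-side critical section, the
third option could look roughly like the sketch below (same names as the
quoted patch; assumes the update side pairs frees with synchronize_sched()):

	unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
	{
		unsigned int ret;
		int cpu;

		if (in_nmi()) /* not supported yet */
			return 1;

		/* preempt_disable_notrace() is itself an RCU-sched
		 * read-side critical section, so no rcu_read_lock()
		 * (and no tracing recursion) is needed here; the writer
		 * must use synchronize_sched() before freeing the prog.
		 */
		preempt_disable_notrace();

		cpu = raw_smp_processor_id();
		if (unlikely(per_cpu(bpf_prog_active, cpu)++ != 0)) {
			ret = 0;
			goto out;
		}

		ret = BPF_PROG_RUN(prog, ctx);
	 out:
		per_cpu(bpf_prog_active, cpu)--;
		preempt_enable_notrace();

		return ret;
	}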

> +     ret = BPF_PROG_RUN(prog, ctx);
> +     rcu_read_unlock();
> +
> + out:
> +     per_cpu(bpf_prog_active, cpu)--;
> +     preempt_enable_notrace();
> +
> +     return ret;
> +}
> +EXPORT_SYMBOL_GPL(trace_call_bpf);