On 2015/8/5 21:53, Peter Zijlstra wrote:
> On Wed, Aug 05, 2015 at 12:04:25PM +0200, Peter Zijlstra wrote:
>> Also, you probably want a WARN_ON(in_nmi()) there, this function is
>> _NOT_ NMI safe.
>
> I had a wee think about that, and I think the below is safe.
>
> (with the obvious problem that WARN from NMI context is not safe)
On Wed, Aug 05, 2015 at 09:08:32AM -0700, Alexei Starovoitov wrote:
> On 8/5/15 6:53 AM, Peter Zijlstra wrote:
> > +/*
> > + * If the event is currently on this CPU, it's either a per-task event,
> > + * or local to this CPU. Furthermore it means it's ACTIVE (otherwise
> > + * oncpu == -1).
> > + */
On 8/5/15 6:53 AM, Peter Zijlstra wrote:
+	/*
+	 * If the event is currently on this CPU, it's either a per-task event,
+	 * or local to this CPU. Furthermore it means it's ACTIVE (otherwise
+	 * oncpu == -1).
+	 */
+	if (event->oncpu == smp_processor_id())
On 8/5/15 3:15 AM, Peter Zijlstra wrote:
> On Wed, Aug 05, 2015 at 12:04:25PM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 04, 2015 at 08:58:15AM +0000, Kaixu Xia wrote:
> > > +		event->ctx->task != current)
> > Strictly speaking we should hold rcu_read_lock around dereferencing
> > event->ctx (or have IRQs disabled -- although I know Paul doesn't
On 8/5/15 3:04 AM, Peter Zijlstra wrote:
> > +	__perf_event_read(event);
> > +	return perf_event_count(event);
> > +}
> Also, you probably want a WARN_ON(in_nmi()) there, this function is
> _NOT_ NMI safe.
we check that very early on:
unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
{
On Wed, Aug 05, 2015 at 03:53:17PM +0200, Peter Zijlstra wrote:
> +/*
> + * NMI-safe method to read a local event, that is an event that
> + * is:
> + *   - either for the current task, or for this CPU
> + *   - does not have inherit set, for inherited task events
> + *     will not be local and we
On 2015/8/5 18:04, Peter Zijlstra wrote:
> On Tue, Aug 04, 2015 at 08:58:15AM +0000, Kaixu Xia wrote:
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 6251b53..726ca1b 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -8599,6 +8599,25 @@ struct perf_event_attr *perf_event_attrs(struct perf_event *event)
On 2015/8/5 1:55, Alexei Starovoitov wrote:
> On 8/4/15 1:58 AM, Kaixu Xia wrote:
>> +static int check_func_limit(struct bpf_map **mapp, int func_id)
>
> how about 'check_map_func_compatibility' or 'check_map_func_affinity' ?
>
>> +{
>> +	struct bpf_map *map = *mapp;
>
> why pass pointer to a pointer? single pointer would be fine.
Given the perf_event_map_fd and an index, the function
bpf_perf_event_read() can convert the corresponding map
value to a pointer to struct perf_event and return the
hardware PMU counter value.

Signed-off-by: Kaixu Xia xiaka...@huawei.com
---
 include/linux/bpf.h        | 1 +
 include/linux/perf_event.h | 2