On 11/8/17 4:06 PM, David Miller wrote:
From: Yonghong Song <y...@fb.com>
Date: Wed, 8 Nov 2017 13:37:12 -0800
Uprobe is a tracing mechanism for userspace programs.
A typical uprobe incurs the overhead of two traps.
The first trap is caused by the replaced trap insn, and
the second trap is to e
1.754.0
You can see that this patch significantly reduced the overhead,
50% for uprobe and 44% for uretprobe on x86_64, and even more
on x86_32.
Signed-off-by: Yonghong Song <y...@fb.com>
---
arch/x86/include/asm/uprobes.h | 10
arch/x86/kernel/uprobes.c | 115 +++
perf_event.h is updated in the previous patch; this patch applies the same
changes to the tools/ version. This part is put in a separate
patch in case the two files are backported separately.
Signed-off-by: Song Liu <songliubrav...@fb.com>
Reviewed-by: Yonghong Song <y...@fb.com>
Review
) takes 5.077558 seconds
Cleaning 1000 kprobes with PERF_TYPE_PROBE (function name) takes 81.241354 seconds
Creating 1000 kprobes with PERF_TYPE_PROBE (function addr) takes 5.218255 seconds
Cleaning 1000 kprobes with PERF_TYPE_PROBE (function addr) takes 80.010731 seconds
Signed-off-by: Song Liu
The new kernel API allows creating [k,u]probes with perf_event_open.
This patch tries to use the new API; if the new API doesn't work,
we fall back to the old API.
bpf_detach_probe() looks up the event being removed. If the event
is not found, we skip the cleanup procedure.
Signed-off-by: Song Liu
with the pointer, we will (in the following patches) copy probe_desc to
__aligned_u64 before using it as a pointer.
Signed-off-by: Song Liu <songliubrav...@fb.com>
Reviewed-by: Yonghong Song <y...@fb.com>
Reviewed-by: Josef Bacik <jba...@fb.com>
Acked-by: Alexei Starovoitov <a...@kernel.o
-by: Song Liu <songliubrav...@fb.com>
Reviewed-by: Yonghong Song <y...@fb.com>
Reviewed-by: Josef Bacik <jba...@fb.com>
---
kernel/trace/trace_event_perf.c | 33 ++-
kernel/trace/trace_probe.h | 4 ++
kernel/trace/trace_uprobe.c | 90 ++
ub.com/liu-song-6/bcc/tree/new_perf_event_opn
Thanks,
Song
man-pages patch:
perf_event_open.2: add new type PERF_TYPE_PROBE
bcc patch:
bcc: Try use new API to create [k,u]probe with perf_event_open
kernel patches:
Song Liu (6):
perf: Add new type PERF_TYPE_PROBE
perf: copy new perf_event.
are not included
in this patch.
write_backward : 1
namespaces : 1
Signed-off-by: Song Liu <songliubrav...@fb.com>
---
man2/perf_event_open.2 | 82 --
1 file changed, 80 insertions(+), 2 deletions(-)
diff --git
Function load_and_attach() is updated to be able to create kprobes
with either the old text-based API or the new PERF_TYPE_PROBE API.
A global flag use_perf_type_probe is added to select between the
two APIs.
Signed-off-by: Song Liu <songliubrav...@fb.com>
Reviewed-by: Josef Bacik <jba.
, create_local_trace_kprobe() and
destroy_local_trace_kprobe() are added to create and destroy these
local trace_kprobes.
Signed-off-by: Song Liu <songliubrav...@fb.com>
Reviewed-by: Yonghong Song <y...@fb.com>
Reviewed-by: Josef Bacik <jba...@fb.com>
---
include/linux/trace_events.h|
On Tue, Nov 7, 2017 at 4:29 PM, Atish Patra <atish.pa...@oracle.com> wrote:
> On 11/07/2017 04:42 PM, Y Song wrote:
>>
>> On Tue, Nov 7, 2017 at 2:04 PM, Alexei Starovoitov <a...@fb.com> wrote:
>>>
>>> On 11/8/17 6:47 AM, Y Song wrote:
>>
On Tue, Nov 7, 2017 at 2:04 PM, Alexei Starovoitov <a...@fb.com> wrote:
> On 11/8/17 6:47 AM, Y Song wrote:
>>
>> On Tue, Nov 7, 2017 at 1:39 PM, Alexei Starovoitov <a...@fb.com> wrote:
>>>
>>> On 11/8/17 6:14 AM, Y Song wrote:
>>>>
On Tue, Nov 7, 2017 at 1:39 PM, Alexei Starovoitov <a...@fb.com> wrote:
> On 11/8/17 6:14 AM, Y Song wrote:
>>
>> On Tue, Nov 7, 2017 at 12:37 AM, Naveen N. Rao
>> <naveen.n@linux.vnet.ibm.com> wrote:
>>>
>>> Alexei Starovoitov wrote:
>
On Tue, Nov 7, 2017 at 1:31 PM, Atish Patra <atish.pa...@oracle.com> wrote:
> On 11/07/2017 03:14 PM, Y Song wrote:
>>
>> On Tue, Nov 7, 2017 at 12:37 AM, Naveen N. Rao
>> <naveen.n@linux.vnet.ibm.com> wrote:
>>>
>>> Alexei Starovoitov wrote
On Tue, Nov 7, 2017 at 12:37 AM, Naveen N. Rao
wrote:
> Alexei Starovoitov wrote:
>>
>> On 11/7/17 12:55 AM, Naveen N. Rao wrote:
I thought such a struct shouldn't change layout.
If it does, we need to fix include/linux/compiler-clang.h to do that
is fixed in subsequent patch set.
FYI, we noticed the following commit (built with gcc-4.8):
commit: 76cdd39f4117a6cbd520b5d09993ac87acbdcfd8 ("bpf: permit multiple bpf
attachments for a single perf event")
url:
https://github.com/0day-ci/linux/commits/Yonghong-Song/bpf-permit-mu
CCing key audience of the patch.
Thanks,
Song
> On Oct 30, 2017, at 2:41 PM, Song Liu <songliubrav...@fb.com> wrote:
>
> This tracepoint can be used to trace synack retransmits. It maintains
> pointer to struct request_sock.
>
> We cannot simply reuse trace_tcp_retran
these warnings. To the
best of our knowledge, these warnings are harmless.
Signed-off-by: Song Liu <songliubrav...@fb.com>
Acked-by: Alexei Starovoitov <a...@kernel.org>
Acked-by: Martin KaFai Lau <ka...@fb.com>
---
include/trace/events/tcp.h | 56 ++
Change from v1: Updated commit message to highlight potential sparse warning.
Song Liu (1):
tcp: add tracepoint trace_tcp_retransmit_synack()
include/trace/events/tcp.h | 56 ++
net/ipv4/tcp_output.c | 1 +
2 files changed, 57 insertions
dering.
Verified through "make C=2" that the sparse
locking check is still happy with the new change.
Also change the label name in perf_event_{attach,detach}_bpf_prog
from "out" to "unlock" to reflect the code action after the label.
Signed-off-by: Yonghong Song <y.
> On Oct 26, 2017, at 7:01 PM, Cong Wang <xiyou.wangc...@gmail.com> wrote:
>
> On Thu, Oct 26, 2017 at 4:50 PM, Song Liu <songliubrav...@fb.com> wrote:
>> In this case, we are putting CONFIG_IPV6 in the TRACE_EVENT macro,
>> which generates warnings like:
> On Oct 25, 2017, at 8:13 PM, kbuild test robot <l...@intel.com> wrote:
>
> Hi Song,
>
> [auto build test WARNING on net-next/master]
>
> url:
> https://github.com/0day-ci/linux/commits/Song-Liu/tcp-add-tracepoint-trace_tcp_retransmit_synack/20171026-010651
&
On 10/26/17 6:56 AM, Peter Zijlstra wrote:
On Mon, Oct 23, 2017 at 10:58:04AM -0700, Yonghong Song wrote:
This patch enables multiple bpf attachments for a
kprobe/uprobe/tracepoint single trace event.
This forgets to explain _why_ this is a good thing to do.
Before this patch, each perf
This tracepoint can be used to trace synack retransmits. It maintains
a pointer to struct request_sock.
We cannot simply reuse trace_tcp_retransmit_skb() here, because the
sk here is the LISTEN socket. The IP addresses and ports should be
extracted from struct request_sock.
Signed-off-by: Song Liu
The bpf sample program syscall_tp is modified to
show attachment of more than one bpf program
to a particular kernel tracepoint.
Signed-off-by: Yonghong Song <y...@fb.com>
Acked-by: Alexei Starovoitov <a...@kernel.org>
Acked-by: Martin KaFai Lau <ka...@fb.com>
---
samples/bpf
This is a cleanup so that we do the same check in
perf_event_free_bpf_prog as we already do in
the perf_event_set_bpf_prog step.
Signed-off-by: Yonghong Song <y...@fb.com>
Acked-by: Alexei Starovoitov <a...@kernel.org>
Acked-by: Martin KaFai Lau <ka...@fb.com>
---
kernel
, a dummy do-nothing program
will replace the to-be-detached program in place.
Signed-off-by: Yonghong Song <y...@fb.com>
Acked-by: Alexei Starovoitov <a...@kernel.org>
Acked-by: Martin KaFai Lau <ka...@fb.com>
---
include/linux/bpf.h | 30 +---
include/linux/tr
v3:
. fix compilation error.
v1 -> v2:
. fix a potential deadlock issue discovered by Daniel.
. fix some coding style issues.
Yonghong Song (3):
bpf: use the same condition in perf event set/free bpf handler
bpf: permit multiple bpf attachments for a single perf event
bpf: add a tes
v2:
. fix a potential deadlock issue discovered by Daniel.
. fix some coding style issues.
Yonghong Song (3):
bpf: use the same condition in perf event set/free bpf handler
bpf: permit multiple bpf attachments for a single perf event
bpf: add a test case to test single tp multiple
On 10/23/17 1:52 PM, Daniel Borkmann wrote:
On 10/23/2017 07:58 PM, Yonghong Song wrote:
[...]
__this_cpu_dec(bpf_prog_active);
@@ -741,3 +754,63 @@ const struct bpf_verifier_ops
perf_event_verifier_ops = {
const struct bpf_prog_ops perf_event_prog_ops = {
};
+
+static
This patch set adds support to permit multiple bpf prog attachments
for a single perf tracepoint event. Patch 1 does some cleanup such
that perf_event_{set|free}_bpf_handler is called under the
same condition. Patch 2 has the core implementation, and
Patch 3 adds a test case.
Yonghong Song (3
CCing key audience of these patches.
Thanks,
Song
> On Oct 23, 2017, at 9:20 AM, Song Liu <songliubrav...@fb.com> wrote:
>
> Changes from v1:
>
> Fix build error (with ipv6 as ko) by adding EXPORT_TRACEPOINT_SYMBOL_GPL
> for trace_tcp_send_reset.
>
> T
Introduce event class tcp_event_sk_skb for tcp tracepoints that
have arguments sk and skb.
Existing tracepoint trace_tcp_retransmit_skb() falls into this class.
This patch rewrites the definition of trace_tcp_retransmit_skb() with
tcp_event_sk_skb.
Signed-off-by: Song Liu <songliubrav...@fb.
New tracepoint trace_tcp_send_reset is added and called from
tcp_v4_send_reset(), tcp_v6_send_reset() and tcp_send_active_reset().
Signed-off-by: Song Liu <songliubrav...@fb.com>
---
include/trace/events/tcp.h | 11 +++
net/core/net-traces.c | 2 ++
net/ipv4/tcp_ipv4.c
This patch adds trace event trace_tcp_destroy_sock.
Signed-off-by: Song Liu <songliubrav...@fb.com>
---
include/trace/events/tcp.h | 7 +++
net/ipv4/tcp_ipv4.c | 2 ++
2 files changed, 9 insertions(+)
diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h
index c
New tracepoint trace_tcp_receive_reset is added and called from
tcp_reset(). This tracepoint is defined with a new class tcp_event_sk.
Signed-off-by: Song Liu <songliubrav...@fb.com>
---
include/trace/events/tcp.h | 66 ++
net/ipv4/tcp_i
This patch adds tracepoint trace_tcp_set_state. Besides the usual fields
(s/d ports, IP addresses), the old and new states of the socket are also
printed with TP_printk, using __print_symbolic().
Signed-off-by: Song Liu <songliubrav...@fb.com>
---
include/trace/events/tcp.
Some functions that we plan to add trace points to require const sk
and/or skb. So we mark these fields as const in the tracepoint.
Signed-off-by: Song Liu <songliubrav...@fb.com>
---
include/trace/events/tcp.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/i
int kprobe__tcp_v6_send_reset
int kprobe__tcp_v4_destroy_sock
int kprobe__tcp_set_state
int kprobe__tcp_retransmit_skb
These tracepoints will help us simplify this work.
Thanks,
Song
Song Liu (6):
tcp: add trace event class tcp_event_sk_skb
tcp: mark trace event arguments sk and skb as const
CCing key audience of these patches.
I am really sorry if I have spammed your inbox multiple times with
the same set of patches. I have been struggling to get git-send-email
to send email to netdev. The only solution I found was to reduce the
CC list.
Best Regards,
Song
> On Oct 20, 2
>> Signed-off-by: David Ahern <dsah...@gmail.com>
>> ---
>
> Reviewed-by: Eric Dumazet <eduma...@google.com>
>
> Thanks !
>
Tested-by: Song Liu <songliubrav...@fb.com>
> > Remove use of ipv6_pinfo in favor of data in sock_common.
> >
> > Fixes: e086101b150a ("tcp: add a tracepoint for tcp retransmission")
> > Signed-off-by: David Ahern <dsah...@gmail.com>
> > ---
>
> Reviewed-by: Eric Dumazet <edum
. In such cases, if it is desirable to get the scaling factor
between two bpf invocations, users can save the time values in a map,
and use the value from the map and the current value to calculate
the scaling factor.
Signed-off-by: Yonghong Song <y...@fb.com>
Acked-by: Alexei Starovoitov <a.
The bpf sample program trace_event is enhanced to use the new
helper to print out enabled/running time.
Signed-off-by: Yonghong Song <y...@fb.com>
Acked-by: Alexei Starovoitov <a...@fb.com>
Acked-by: Daniel Borkmann <dan...@iogearbox.net>
---
samples/bpf/trace_event_kern
he logic to ensure the result is valid.
Yonghong Song (5):
bpf: perf event change needed for subsequent bpf helpers
bpf: add helper bpf_perf_event_read_value for perf event array map
bpf: add a test case for helper bpf_perf_event_read_value
bpf: add helper bpf_perf_prog_read_value
bpf: add a te
This patch does not impact existing functionalities.
It contains the changes in the perf event area needed for
the subsequent bpf_perf_event_read_value and
bpf_perf_prog_read_value helpers.
Signed-off-by: Yonghong Song <y...@fb.com>
---
include/linux/perf_event.h | 7 +--
kernel/bpf/arra
The bpf sample program tracex6 is enhanced to use the new
helper to read enabled/running time as well.
Signed-off-by: Yonghong Song <y...@fb.com>
Acked-by: Alexei Starovoitov <a...@fb.com>
Acked-by: Daniel Borkmann <dan...@iogearbox.net>
---
samples/bpf/tracex6_kern.c
and do the calculation inside the
bpf program.
Signed-off-by: Yonghong Song <y...@fb.com>
Acked-by: Alexei Starovoitov <a...@fb.com>
Acked-by: Daniel Borkmann <dan...@iogearbox.net>
---
include/uapi/linux/bpf.h | 21 +++--
kernel/bpf/verifier.c| 4 +++-
kernel
which contains the logic to ensure the result is valid.
Yonghong Song (4):
bpf: add helper bpf_perf_event_read_value for perf event array map
bpf: add a test case for helper bpf_perf_event_read_value
bpf: add helper bpf_perf_prog_read_value
bpf: add a test case for helper bpf_perf_prog_read_
On 9/21/17 8:15 AM, Alexei Starovoitov wrote:
On 9/20/17 4:07 PM, David Miller wrote:
From: Peter Zijlstra
Date: Wed, 20 Sep 2017 19:26:51 +0200
Dave, could we have this in a topic tree of sorts, because I have a
pending series to rework all the timekeeping and it
On Fri, Sep 22, 2017 at 9:23 AM, Edward Cree wrote:
> On 22/09/17 16:16, Alexei Starovoitov wrote:
>> looks like we're converging on
>> "be16/be32/be64/le16/le32/le64 #register" for BPF_END.
>> I guess it can live with that. I would prefer more C like syntax
>> to match the
On Fri, Sep 22, 2017 at 7:11 AM, Y Song <ys114...@gmail.com> wrote:
> On Fri, Sep 22, 2017 at 6:46 AM, Edward Cree <ec...@solarflare.com> wrote:
>> On 22/09/17 00:11, Y Song wrote:
>>> On Thu, Sep 21, 2017 at 12:58 PM, Edward Cree <ec...@solarflare.com> w
On Fri, Sep 22, 2017 at 6:46 AM, Edward Cree <ec...@solarflare.com> wrote:
> On 22/09/17 00:11, Y Song wrote:
>> On Thu, Sep 21, 2017 at 12:58 PM, Edward Cree <ec...@solarflare.com> wrote:
>>> On 21/09/17 20:44, Alexei Starovoitov wrote:
>>>> On Thu
On Thu, Sep 21, 2017 at 12:58 PM, Edward Cree wrote:
> On 21/09/17 20:44, Alexei Starovoitov wrote:
>> On Thu, Sep 21, 2017 at 09:29:33PM +0200, Daniel Borkmann wrote:
>>> More intuitive, but agree on the from_be/le. Maybe we should
>>> just drop the "to_" prefix altogether,
On Thu, Sep 21, 2017 at 9:24 AM, Edward Cree wrote:
> On 21/09/17 16:52, Alexei Starovoitov wrote:
>> On Thu, Sep 21, 2017 at 04:09:34PM +0100, Edward Cree wrote:
>>> print_bpf_insn() was treating all BPF_ALU[64] the same, but BPF_END has a
>>> different structure: it has a
On Thu, Sep 21, 2017 at 8:52 AM, Alexei Starovoitov
wrote:
> On Thu, Sep 21, 2017 at 04:09:34PM +0100, Edward Cree wrote:
>> print_bpf_insn() was treating all BPF_ALU[64] the same, but BPF_END has a
>> different structure: it has a size in insn->imm (even if it's
On 9/20/17 10:17 PM, Yonghong Song wrote:
On 9/20/17 6:41 PM, Steven Rostedt wrote:
On Mon, 18 Sep 2017 16:38:36 -0700
Yonghong Song <y...@fb.com> wrote:
This patch fixes a bug exhibited by the following scenario:
1. fd1 = perf_event_open with attr.config = ID1
2. attach bpf p
On 9/20/17 6:41 PM, Steven Rostedt wrote:
On Mon, 18 Sep 2017 16:38:36 -0700
Yonghong Song <y...@fb.com> wrote:
This patch fixes a bug exhibited by the following scenario:
1. fd1 = perf_event_open with attr.config = ID1
2. attach bpf program prog1 to fd1
3. fd2 = perf_even
time should be together with reading counters
which contains the logic to ensure the result is valid.
Yonghong Song (4):
bpf: add helper bpf_perf_event_read_value for perf event array map
bpf: add a test case for helper bpf_perf_event_read_value
bpf: add helper bpf_perf_prog_read_value
bpf
o read enabled/running time. This is to
prevent counters and enabled/running time from being read separately.
v1->v2:
. reading enabled/running time should be together with reading counters
which contains the logic to ensure the result is valid.
Yonghong Song (4):
bpf: a
fix is to free tp_event->prog only when the closing fd
corresponds to the one which registered the program.
Signed-off-by: Yonghong Song <y...@fb.com>
---
Additional context: discussed with Alexei internally but did not find
a solution which can avoid introducing the additional field in
trac
On Thu, Sep 14, 2017 at 11:14 AM, David Miller wrote:
> From: Edward Cree
> Date: Thu, 14 Sep 2017 18:53:17 +0100
>
>> Is BPF_END supposed to only be used with BPF_ALU, never with BPF_ALU64?
Yes, only BPF_ALU. The below is LLVM bpf swap insn encoding:
args[sys_data->nb_args];
^
The fix is to use a fixed array length instead.
Reported-by: Nick Desaulniers <ndesaulni...@google.com>
Signed-off-by: Yonghong Song <y...@fb.com>
---
include/linux/syscalls.h | 2 ++
kernel/trace/trace_syscalls.c | 2 +-
bpf_perf_read_counter_time reads counter/time_enabled/time_running
for perf event array map. The helper bpf_perf_prog_read_time reads
time_enabled/time_running for bpf prog with type BPF_PROG_TYPE_PERF_EVENT.
Yonghong Song (4):
bpf: add helper bpf_perf_read_counter_time for perf event array map
bpf: add