On Mon, Mar 2, 2015 at 12:43 PM, Steven Rostedt rost...@goodmis.org wrote:
On Mon, 2 Mar 2015 15:39:48 -0500
Steven Rostedt rost...@goodmis.org wrote:
On Mon, 2 Mar 2015 12:33:34 -0800
Alexei Starovoitov a...@plumgrid.com wrote:
On Mon, Mar 2, 2015 at 11:49 AM, Karim Yaghmour
karim.yaghm
On Mon, Mar 2, 2015 at 4:26 AM, Daniel Borkmann dan...@iogearbox.net wrote:
On 03/02/2015 12:51 PM, Masami Hiramatsu wrote:
(2015/03/02 20:10), Daniel Borkmann wrote:
Well, currently all possible map types (hash table, array map) that
would actually call into bpf_register_map_type() are only
On Mon, Mar 2, 2015 at 10:03 AM, Tom Zanussi
tom.zanu...@linux.intel.com wrote:
On Mon, 2015-03-02 at 09:58 -0800, Alexei Starovoitov wrote:
On Mon, Mar 2, 2015 at 8:46 AM, Tom Zanussi tom.zanu...@linux.intel.com
wrote:
On Mon, 2015-03-02 at 11:37 -0500, Steven Rostedt wrote:
On Mon, 2
On Mon, Mar 2, 2015 at 10:25 AM, Tom Zanussi
tom.zanu...@linux.intel.com wrote:
On Mon, 2015-03-02 at 10:12 -0800, Alexei Starovoitov wrote:
On Mon, Mar 2, 2015 at 10:03 AM, Tom Zanussi
tom.zanu...@linux.intel.com wrote:
On Mon, 2015-03-02 at 09:58 -0800, Alexei Starovoitov wrote:
On Mon
On Mon, Mar 2, 2015 at 8:46 AM, Tom Zanussi tom.zanu...@linux.intel.com wrote:
On Mon, 2015-03-02 at 11:37 -0500, Steven Rostedt wrote:
On Mon, 2 Mar 2015 10:01:00 -0600
Tom Zanussi tom.zanu...@linux.intel.com wrote:
Add a gfp flag that allows kmalloc() et al to be used in tracing
On Mon, Mar 2, 2015 at 8:00 AM, Tom Zanussi tom.zanu...@linux.intel.com wrote:
# echo 'hist:keys=common_pid.execname,id.syscall:vals=hitcount' \
> /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
# cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
On Mon, Mar 2, 2015 at 10:54 AM, Tom Zanussi
tom.zanu...@linux.intel.com wrote:
The idea would be that instead of getting your individually kmalloc'ed
elements on-demand from kmalloc while in the handler, you'd get them
from a pool you've pre-allocated when you set up the table. This
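A minimal userspace sketch of that pre-allocation idea; the names (elt_pool, pool_init, pool_get) are hypothetical and not from any posted patch:

#include <stdlib.h>

struct pool_elt {
	struct pool_elt *next;	/* free-list link */
	char payload[];		/* key + values live here */
};

struct elt_pool {
	struct pool_elt *free_list;	/* LIFO of unused elements */
	void *storage;			/* one big allocation */
};

/* allocate every element up front, when the table is set up */
static int pool_init(struct elt_pool *p, size_t elt_size, unsigned int n)
{
	size_t stride = sizeof(struct pool_elt) + elt_size;
	char *base = calloc(n, stride);
	unsigned int i;

	if (!base)
		return -1;
	p->storage = base;
	p->free_list = NULL;
	for (i = 0; i < n; i++) {
		struct pool_elt *e = (struct pool_elt *)(base + i * stride);

		e->next = p->free_list;
		p->free_list = e;
	}
	return 0;
}

/* in the handler: no kmalloc(), just pop the free list */
static struct pool_elt *pool_get(struct elt_pool *p)
{
	struct pool_elt *e = p->free_list;

	if (e)
		p->free_list = e->next;
	return e;	/* NULL means the pool is exhausted */
}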
On Mon, Mar 2, 2015 at 11:33 AM, Steven Rostedt rost...@goodmis.org wrote:
On Mon, 2 Mar 2015 11:24:04 -0800
Alexei Starovoitov a...@plumgrid.com wrote:
well, the percentage of tracepoints called from NMI is tiny
compared to the rest, so assuming NMI context
everywhere is very inefficient
On Mon, Mar 2, 2015 at 10:43 AM, Steven Rostedt rost...@goodmis.org wrote:
On Mon, 2 Mar 2015 10:12:32 -0800
Alexei Starovoitov a...@plumgrid.com wrote:
I'm not sure what would be the meaning of a hash map that has all
elements pre-allocated...
As I'm reading your cover letter, I agree, we
On Mon, Mar 2, 2015 at 11:31 AM, Steven Rostedt rost...@goodmis.org wrote:
On Mon, 2 Mar 2015 11:14:54 -0800
Alexei Starovoitov a...@plumgrid.com wrote:
I think we both want to see in-kernel aggregation.
This 'hist' stuff is trying to do counting and even map sorting
in the kernel, whereas
63103.382684: : skb 880466b1d300 len 84
ping-19826 [000] d.s2 63104.382533: : skb 880466b1ca00 len 84
ping-19826 [000] d.s2 63104.382594: : skb 880466b1d300 len 84
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile | 4
-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex3_kern.c | 89 ++
samples/bpf/tracex3_user.c | 150
3 files changed, 243 insertions(+)
create mode 100644 samples/bpf/tracex3_kern.c
for expert users (I presume one day the default setting
of it might change, though), but code making use of it should not care if
it's actually enabled or not.
Instead, hide this via header files and let the rest deal with it.
Signed-off-by: Daniel Borkmann dan...@iogearbox.net
Signed-off-by: Alexei
/unix/af_unix.c:1231
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex2_kern.c | 86 +++
samples/bpf/tracex2_user.c | 95
3 files changed, 185 insertions
bpf_ktime_get_ns() is used by programs to compute time delta between events
or as a timestamp
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 10 ++
2 files changed, 11 insertions(+)
diff --git a/include/uapi/linux
buffers
and emits a big 'this is debug only' banner.
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 68 ++
2 files changed, 69 insertions(+)
diff --git a/include/uapi/linux/bpf.h b
*' as an input
('struct pt_regs' is architecture dependent)
Note, kprobes are _not_ a stable kernel ABI, so bpf programs attached to
kprobes must be recompiled for every kernel version, and the user must supply
the correct LINUX_VERSION_CODE in attr.kern_version during the bpf_prog_load() call.
Signed-off-by: Alexei
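As an illustration of that requirement, a sketch of a load call that supplies the version code; load_kprobe_prog is a hypothetical wrapper and error handling is trimmed:

#include <linux/bpf.h>
#include <linux/version.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int load_kprobe_prog(const struct bpf_insn *insns, int insn_cnt,
			    const char *license)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_type = BPF_PROG_TYPE_KPROBE;
	attr.insns = (unsigned long)insns;
	attr.insn_cnt = insn_cnt;
	attr.license = (unsigned long)license;
	/* the kernel rejects the program if this does not match the
	 * running kernel, since kprobes are not a stable ABI */
	attr.kern_version = LINUX_VERSION_CODE;

	return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}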
.
Though kprobes are slow compared to tracepoints, they are good enough
for prototyping, and the trace_marker/debug_tracepoint ideas can accelerate
them in the future.
Alexei Starovoitov (6):
tracing: attach BPF programs to kprobes
tracing: allow BPF programs to call ktime_get_ns()
tracing: allow BPF
On Fri, Feb 27, 2015 at 4:08 PM, Alexei Starovoitov a...@plumgrid.com wrote:
Hi All,
This is targeting 'tip' tree, since most of the changes are perf_event
related.
V3 discussion:
https://lkml.org/lkml/2015/2/9/738
V3-V4:
- since the boundary of stable ABI in bpf+tracepoints
On Sun, Mar 1, 2015 at 3:27 PM, Alexei Starovoitov a...@plumgrid.com wrote:
Peter, Steven,
I think this set addresses everything we've discussed.
Please review/ack. Thanks!
icmp echo request
V4-V5:
- switched to ktime_get_mono_fast_ns() as suggested by Peter
- in libbpf.c fixed zero init
On Mon, Mar 2, 2015 at 11:21 PM, He Kuang heku...@huawei.com wrote:
TRACE_EVENT_FL_USE_CALL_FILTER flag in the ftrace:function event can be
removed. This flag was first introduced in commit
f306cc82a93d (tracing: Update event filters for multibuffer).
Now, the only place that uses this flag is
On 3/3/15 7:47 AM, Tom Zanussi wrote:
On Mon, 2015-03-02 at 18:31 -0800, Alexei Starovoitov wrote:
On Mon, Mar 2, 2015 at 5:18 PM, Tom Zanussi tom.zanu...@linux.intel.com wrote:
I'm not proposing to use asm or C for this 'hist-bpf' tool.
Keep proposed 'hist:keys=...:vals=...' syntax
On Sat, Feb 28, 2015 at 2:20 AM, Peter Zijlstra pet...@infradead.org wrote:
+static u64 bpf_ktime_get_ns(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+ return ktime_get_ns();
+}
Please use ktime_get_mono_fast_ns() instead. If you ever want to allow
running BPF stuff from general
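A sketch of the helper with that suggestion applied (kernel context assumed; the _fast_ variant is NMI-safe, which is the point of the comment):

static u64 bpf_ktime_get_ns(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
{
	/* NMI-safe monotonic clock instead of plain ktime_get_ns() */
	return ktime_get_mono_fast_ns();
}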
for expert users (I presume one day the default setting
of it might change, though), but code making use of it should not care if
it's actually enabled or not.
Instead, hide this via header files and let the rest deal with it.
Signed-off-by: Daniel Borkmann dan...@iogearbox.net
Signed-off-by: Alexei
*' as an input
('struct pt_regs' is architecture dependent)
Note, kprobes are _not_ a stable kernel ABI, so bpf programs attached to
kprobes must be recompiled for every kernel version, and the user must supply
the correct LINUX_VERSION_CODE in attr.kern_version during the bpf_prog_load() call.
Signed-off-by: Alexei
bpf_ktime_get_ns() is used by programs to compute time delta between events
or as a timestamp
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 11 +++
2 files changed, 12 insertions(+)
diff --git a/include/uapi/linux
-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex3_kern.c | 89 ++
samples/bpf/tracex3_user.c | 150
3 files changed, 243 insertions(+)
create mode 100644 samples/bpf/tracex3_kern.c
buffers
and emits a big 'this is debug only' banner.
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
include/uapi/linux/bpf.h |1 +
kernel/trace/bpf_trace.c | 68 ++
2 files changed, 69 insertions(+)
diff --git a/include/uapi/linux/bpf.h b
63103.382684: : skb 880466b1d300 len 84
ping-19826 [000] d.s2 63104.382533: : skb 880466b1ca00 len 84
ping-19826 [000] d.s2 63104.382594: : skb 880466b1d300 len 84
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile | 4
/unix/af_unix.c:1231
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile |4 ++
samples/bpf/tracex2_kern.c | 86 +++
samples/bpf/tracex2_user.c | 95
3 files changed, 185 insertions
web10g) using
bpf+kprobe, but without adding any new code to the tcp stack.
Though kprobes are slow compared to tracepoints, they are good enough
for prototyping, and the trace_marker/debug_tracepoint ideas can accelerate
them in the future.
Alexei Starovoitov (6):
tracing: attach BPF programs to kprobes
On Mon, Feb 16, 2015 at 6:26 AM, He Kuang heku...@huawei.com wrote:
Hi, Alexei
Another suggestion on the bpf syscall interface. Currently, BPF +
syscalls/kprobes depends on CONFIG_BPF_SYSCALL. In kernels used in
commercial products, CONFIG_BPF_SYSCALL is probably disabled; in this
case, bpf
On 3/19/15 8:07 AM, Steven Rostedt wrote:
struct trace_print_flags {
unsigned long mask;
@@ -252,6 +253,7 @@ enum {
TRACE_EVENT_FL_WAS_ENABLED_BIT,
TRACE_EVENT_FL_USE_CALL_FILTER_BIT,
TRACE_EVENT_FL_TRACEPOINT_BIT,
+ TRACE_EVENT_FL_KPROBE_BIT,
On 3/19/15 8:29 AM, Steven Rostedt wrote:
+ /* check format string for allowed specifiers */
+ for (i = 0; i < fmt_size; i++)
Even though there's only a single if statement after the for, it is
usually considered proper to add the brackets if the next line is
complex (more than one
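A hypothetical shape of that loop with the brackets added, assuming the helper admits only the %d %u %x %p specifiers mentioned elsewhere in this series; check_fmt is an illustrative name:

#include <errno.h>

static int check_fmt(const char *fmt, int fmt_size)
{
	int i;

	for (i = 0; i < fmt_size; i++) {
		if (fmt[i] != '%')
			continue;
		if (i + 1 == fmt_size)
			return -EINVAL;	/* '%' at end of string */
		if (fmt[i + 1] != 'd' && fmt[i + 1] != 'u' &&
		    fmt[i + 1] != 'x' && fmt[i + 1] != 'p')
			return -EINVAL;
	}
	return 0;
}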
On 3/19/15 8:50 AM, Steven Rostedt wrote:
I'm not going to review the sample code, as I'm a bit strapped for
time, and that's more userspace oriented anyway. I'm much more concerned
that the kernel modifications are correct.
sure. thanks a lot for thorough review!
On 3/19/15 8:11 AM, Steven Rostedt wrote:
On Mon, 16 Mar 2015 14:49:39 -0700
Alexei Starovoitov a...@plumgrid.com wrote:
bpf_ktime_get_ns() is used by programs to compue time delta between events
compute
ok :)
+ [BPF_FUNC_ktime_get_ns] = {
+ .func = bpf_ktime_get_ns
On 3/23/15 12:35 AM, Ingo Molnar wrote:
* Alexei Starovoitov a...@plumgrid.com wrote:
+void read_trace_pipe(void)
+{
+ int trace_fd;
+
+ trace_fd = open(DEBUGFS "trace_pipe", O_RDONLY, 0);
+ if (trace_fd < 0)
+ return;
+
+ while (1) {
+ static
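A plausible completion of the function being quoted, assuming it simply mirrors trace_pipe to stdout; DEBUGFS is the usual "/sys/kernel/debug/tracing/" prefix macro and the buffer size is a guess:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

void read_trace_pipe(void)
{
	int trace_fd = open(DEBUGFS "trace_pipe", O_RDONLY, 0);

	if (trace_fd < 0)
		return;

	while (1) {
		static char buf[4096];
		ssize_t sz = read(trace_fd, buf, sizeof(buf) - 1);

		if (sz > 0) {
			buf[sz] = 0;
			puts(buf);
		}
	}
}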
On 3/23/15 12:29 AM, Ingo Molnar wrote:
** **
** This means that this is a DEBUG kernel and it is **
** unsafe for production use. **
But I think printing that it's unsafe for production use is over the
top:
On 3/23/15 12:40 AM, Ingo Molnar wrote:
* Alexei Starovoitov a...@plumgrid.com wrote:
BPF C program attaches to blk_mq_start_request/blk_update_request kprobe events
to calculate IO latency.
...
+/* kprobe is NOT a stable ABI
+ * This bpf+kprobe example can stop working any time
On 3/23/15 5:07 AM, Ingo Molnar wrote:
* David Laight david.lai...@aculab.com wrote:
From: Alexei Starovoitov
Debugging of BPF programs needs some form of printk from the program,
so let programs call limited trace_printk() with %d %u %x %p modifiers only.
Should anyone be allowed to use
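How a sample program might use the restricted helper, in the style of the samples this series adds; bpf_helpers.h is assumed to provide SEC() and the bpf_trace_printk() wrapper, and the probed function plus the ctx->di register access are illustrative (non-portable, x86-64 only):

#include <linux/ptrace.h>
#include "bpf_helpers.h"

SEC("kprobe/sys_write")
int bpf_prog(struct pt_regs *ctx)
{
	/* the format string must live on the program stack and is
	 * passed together with its size */
	char fmt[] = "write fd=%d\n";

	bpf_trace_printk(fmt, sizeof(fmt), ctx->di);
	return 0;
}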
On Wed, Apr 29, 2015 at 03:37:33PM +0200, Nicolas Schichan wrote:
Greetings,
The following patches allow the use of the existing JIT code under
arch/arm for seccomp filters.
The first patch makes bpf_migrate_filter() available so that seccomp
code can use it.
The second patch invokes
On 4/30/15 5:51 AM, Daniel Borkmann wrote:
If there are some additional tests that are not yet covered by
lib/test_bpf.c,
I'd be happy if you could add them there. This can also be a follow-up,
but if we can increase coverage for others as well, all the better.
btw, Michael has been working
On 4/30/15 3:52 AM, Wang Nan wrote:
This series of patches is an approach to integrate eBPF with perf.
After applying these patches, users are allowed to use the following
command to load an eBPF program compiled by LLVM into the kernel:
$ perf bpf sample_bpf.o
The required BPF code and the loading
On 5/1/15 4:49 AM, Ingo Molnar wrote:
* Peter Zijlstra pet...@infradead.org wrote:
On Thu, Apr 30, 2015 at 09:37:04PM -0700, Alexei Starovoitov wrote:
We're also working in parallel on creating a new tracing language
that together with the llvm backend can be used as a single shared library
On 5/2/15 12:19 AM, Wang Nan wrote:
I'd like to do the following work in the next version (based on my experience and
feedback):
1. Safely clean up kprobe points after unloading;
2. Add subcommand space to 'perf bpf'. Current stuff should reside in 'perf
bpf load';
3. Extract eBPF ELF
On 5/5/15 8:58 PM, Wang Nan wrote:
Two high level comments:
- can you collapse SEC(config) with SEC(func_name) ?
It seems that func_name is only used as reference inside config.
I understand that you're proposing one config section where multiple
descriptions are strcat'ed together, but why?
On 5/5/15 3:10 AM, He Kuang wrote:
When all arguments in the bpf config section are collected in register and
offset form, this patch will fetch them from the bpf context register and
place them as bpf input parameters.
The bpf prologue is generated in the following steps:
1. alloc dst address in stack -
On 5/5/15 9:46 PM, Wang Nan wrote:
Hi Alexei Starovoitov,
Have you ever read this mail?
please don't top post.
all makes sense and your use case fits quite well into the existing
bpf+kprobe model. I'm not sure why you're calling it a 'problem'.
A problem of how to display that call stack from perf
On 5/5/15 3:10 AM, He Kuang wrote:
This patch set is based on https://lkml.org/lkml/2015/4/30/264
By using bpf 'config' section like this:
char _config2[] SEC(config) = generic_perform_write=generic_perform_write+122
file->f_mapping->a_ops bytes offset;
SEC(generic_perform_write)
int
On 5/5/15 3:10 AM, He Kuang wrote:
Including bpf instruction macros and register alias.
Signed-off-by: He Kuang heku...@huawei.com
---
tools/perf/util/bpf-loader.h | 188 +++
1 file changed, 188 insertions(+)
diff --git a/tools/perf/util/bpf-loader.h
On 5/5/15 3:10 AM, He Kuang wrote:
Convert register number in debuginfo to its index in pt_regs.
Signed-off-by: He Kuang heku...@huawei.com
---
tools/perf/arch/x86/util/dwarf-regs.c | 31 +++
1 file changed, 31 insertions(+)
diff --git
On 5/8/15 8:17 AM, Will Deacon wrote:
Ok, I plan to apply the patch below for 4.1.
great catch. Looks good to me.
Xi, could you send a separate patch for test_bpf update to net-next?
Thanks!
On Fri, May 08, 2015 at 08:50:37AM -0700, Alexander Duyck wrote:
f0 80 8f e0 01 00 00 01 lock orb $0x1,0x1e0(%rdi)
This is your set bit operation. If you were to drop the whole WARN_ON
then this is the only thing you would be inlining.
It's up to networking people to decide. I
On 5/4/15 9:41 PM, Wang Nan wrote:
That's great. Could you please append the description of 'llvm -s' to your
README or comments? It cost me a lot of time to dump eBPF instructions, so I
decided to add it to perf...
sure. it's just the -filetype=asm flag to llc instead of
Starovoitov a...@plumgrid.com
Cc: Will Deacon will.dea...@arm.com
Signed-off-by: Xi Wang xi.w...@gmail.com
---
looks good. Thanks!
Acked-by: Alexei Starovoitov a...@plumgrid.com
777c2a0 vmlinux
nice code shrink. Looks good to me.
Acked-by: Alexei Starovoitov a...@plumgrid.com
btw, in the future please say [PATCH net-next] as part of subject
to make it clear what tree this patch is going to.
On 5/15/15 3:52 AM, Wang Nan wrote:
According to Alexei Starovoitov (http://lkml.org/lkml/2015/5/15/29),
there is a race between perf_event_fd and kprobe freeing:
...
And he suggests that calling perf_event_free_bpf_prog() from __free_event()
instead of free_event_rcu() will fix the race
On 5/14/15 8:54 PM, Wangnan (F) wrote:
Hi Alexei Starovoitov and other,
I triggered a kernel panic when developing my 'perf bpf' facility. The
call stack is listed at the bottom of
this mail.
I attached two bpf programs on 'kmem_cache_free%return' and
'__alloc_pages_nodemask'. The programs
, ...); default debug printing
is NULL.
Signed-off-by: Wang Nan wangn...@huawei.com
---
Acked-by: Alexei Starovoitov a...@plumgrid.com
would be good to add a sentence to the commit log explaining
that perf and patch 30 make use of this api:
+void libbpf_set_print(int (*warn)(const char *format
On 5/17/15 3:56 AM, Wang Nan wrote:
This is the first patch of libbpf. The goal of libbpf is to create a
standard way for accessing eBPF object files. This patch creates
Makefile and Build for it, allows 'make' to build libbpf.a and
libbpf.so, and 'make install' to put them into the proper directories.
On 5/17/15 3:56 AM, Wang Nan wrote:
bpf_open_object() and bpf_close_object() are the open and close functions for
eBPF object files. 'struct bpf_object' will be the handle for one object
file. Its internal structure is hidden from the user.
Signed-off-by: Wang Nan wangn...@huawei.com
---
...
+
+struct
On 5/17/15 3:56 AM, Wang Nan wrote:
Original vmlinux_path__exit() doesn't revert vmlinux_path__nr_entries
to its original state. After the while loop vmlinux_path__nr_entries
becomes -1 instead of 0. This causes a problem: if run twice,
during the second run vmlinux_path__init() will set
On 5/17/15 3:56 AM, Wang Nan wrote:
there is a race between perf_event_free_bpf_prog() and free_trace_kprobe():
...
Fixes: 2541517c32be (tracing, perf: Implement BPF programs attached to
kprobes)
Reported-by: Wang Nan wangn...@huawei.com
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
On 5/17/15 3:56 AM, Wang Nan wrote:
This patch adds basic 'struct bpf_object' which will be used for eBPF
object file loading. eBPF object files are compiled by LLVM in ELF
format. In this patch, libelf is used to open those files, read the EHDR
and do basic validation according to e_type and
On 5/17/15 3:56 AM, Wang Nan wrote:
bpf_obj_elf_collect() is introduced to iterate over each elf section
to collect information in eBPF object files. This function will be
further enhanced to collect license, kernel version, programs, configs
and map information.
Signed-off-by: Wang Nan
On 5/17/15 3:56 AM, Wang Nan wrote:
A 'config' section allows an eBPF object file to pass
something to the user. libbpf doesn't use the config string.
To make further processing easier, this patch converts each '\0' in the
config strings into '\n'
Signed-off-by: Wang Nan
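The conversion itself is tiny; a sketch, with config_to_text as an illustrative name:

#include <stddef.h>

/* the section is a blob of NUL-separated strings; turning NULs into
 * newlines yields one parseable text block */
static void config_to_text(char *buf, size_t size)
{
	size_t i;

	for (i = 0; i < size; i++)
		if (buf[i] == '\0')
			buf[i] = '\n';
}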
order of instructions and byte order of data are not always the same. Think
about ARM. Therefore another choice is to swap them in the kernel and keep
the user-kernel interface clean.
Alexei Starovoitov, do you think we should use a uniform instruction byte
order in both big- and little-endian kernels on the user-kernel
On 5/17/15 3:56 AM, Wang Nan wrote:
After all eBPF programs in an object file are loaded, related ELF
information is useless. Close the object file and free that memory.
Signed-off-by: Wang Nan wangn...@huawei.com
---
tools/lib/bpf/libbpf.c | 2 +-
1 file changed, 1 insertion(+), 1
On 5/17/15 3:56 AM, Wang Nan wrote:
This patch introduces bpf.c and bpf.h, which hold common functions for
issuing the bpf syscall. The goal of these two files is to hide the syscall
completely from the user. Note that bpf.c and bpf.h only deal with the kernel
interface. Things like structure of 'map' section in
On 5/17/15 3:56 AM, Wang Nan wrote:
bpf_load_program() can be used to load a bpf program into the kernel. To make
loading faster, first try to load without a logbuf. Try again with a logbuf
if the first try fails.
Signed-off-by: Wang Nan wangn...@huawei.com
...
+ attr.insn_cnt =
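A sketch of that retry logic at the syscall level; the names and buffer size are illustrative, not libbpf's actual internals:

#include <linux/bpf.h>
#include <sys/syscall.h>
#include <unistd.h>

static char bpf_log_buf[65536];

static int load_prog(union bpf_attr *attr)
{
	int fd;

	/* first try without a log buffer: the common, fast path */
	attr->log_buf = 0;
	attr->log_size = 0;
	attr->log_level = 0;
	fd = syscall(__NR_bpf, BPF_PROG_LOAD, attr, sizeof(*attr));
	if (fd >= 0)
		return fd;

	/* retry with a buffer so the verifier can explain the failure */
	attr->log_buf = (unsigned long)bpf_log_buf;
	attr->log_size = sizeof(bpf_log_buf);
	attr->log_level = 1;
	return syscall(__NR_bpf, BPF_PROG_LOAD, attr, sizeof(*attr));
}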
On 5/17/15 3:56 AM, Wang Nan wrote:
Check endianness according to the EHDR to support loading eBPF objects into
big endian machines. Code is taken from tools/perf/util/symbol-elf.c.
Signed-off-by: Wang Nan wangn...@huawei.com
---
...
+static int
+bpf_obj_swap_init(struct bpf_object *obj)
+{
+
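A rough shape of such a check against the EHDR ident bytes (cf. the symbol-elf.c code the commit message mentions); bpf_obj_needs_swap is an illustrative name:

#include <elf.h>
#include <endian.h>
#include <stdbool.h>

/* does the object's byte order differ from the host's? */
static bool bpf_obj_needs_swap(const Elf64_Ehdr *ehdr)
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
	return ehdr->e_ident[EI_DATA] == ELFDATA2MSB;
#else
	return ehdr->e_ident[EI_DATA] == ELFDATA2LSB;
#endif
}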
On 5/17/15 3:56 AM, Wang Nan wrote:
Expand bpf_obj_elf_collect() to collect license and kernel version
information in the eBPF object file. The eBPF object file should have a
section named 'license', which contains a string. It should also have a
section named 'version', which contains a u32
On 5/17/15 3:56 AM, Wang Nan wrote:
This patch records the indices of instructions which need to be
relocated. That information is saved in the 'reloc_desc' field in
'struct bpf_program'. In the loading phase (this patch takes effect in the
opening phase), the collected instructions will be replaced
On 5/17/15 3:56 AM, Wang Nan wrote:
This patch creates maps based on the 'map' section in the object file using
bpf_create_map(), and stores the fds in an array in
'struct bpf_object'. Since the byte order of the object may differ
from the host's, swap the map definitions before processing.
This is the first
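A sketch of that loading step; bpf_map_def mirrors the layout samples/bpf uses for the 'maps' section, the bpf_create_map() prototype is approximated from the bpf.c patch earlier in the series, and create_maps is an illustrative name:

struct bpf_map_def {
	unsigned int type;
	unsigned int key_size;
	unsigned int value_size;
	unsigned int max_entries;
};

int bpf_create_map(int map_type, int key_size, int value_size,
		   int max_entries);

static int create_maps(const struct bpf_map_def *defs, int nr_maps, int *fds)
{
	int i;

	for (i = 0; i < nr_maps; i++) {
		/* definitions were already byte-swapped if needed */
		fds[i] = bpf_create_map(defs[i].type, defs[i].key_size,
					defs[i].value_size,
					defs[i].max_entries);
		if (fds[i] < 0)
			return -1;	/* caller unwinds earlier fds */
	}
	return 0;
}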
On 5/17/15 3:56 AM, Wang Nan wrote:
In this patch, kprobe points are created using add_perf_probe_events.
Since all events are already grouped together in an array, calling
add_perf_probe_events() once creates all of them.
To ensure the system recovers when exiting, a bpf_unprobe() is also
On 5/17/15 3:56 AM, Wang Nan wrote:
This is the 3rd version of 'perf bpf' patch series, based on
v4.1-rc3.
The goal of this series of patches is to integrate eBPF with perf.
After applying these patches, users are allowed to use the following
command to load an eBPF program compiled by LLVM into
On 5/17/15 10:30 PM, He Kuang wrote:
Add a new structure bpf_pt_regs, which contains both the original
'ctx' (pt_regs) and a trace_probe pointer, and pass this new pointer to the
bpf prog for variable fetching.
Signed-off-by: He Kuang heku...@huawei.com
---
kernel/trace/trace_kprobe.c | 11 +--
On 5/17/15 10:30 PM, He Kuang wrote:
This helper function uses the kernel structure trace_probe and related fetch
functions for fetching variables described in 'SEC' to the bpf stack.
Signed-off-by: He Kuang heku...@huawei.com
...
+/* Store the value of each argument */
+static void
On 5/17/15 10:30 PM, He Kuang wrote:
Always use $(obj) when referring to generated files and use $(src) when
referring to files located in the src tree.
Signed-off-by: He Kuang heku...@huawei.com
---
samples/bpf/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
On 5/18/15 1:44 PM, Arnaldo Carvalho de Melo wrote:
perf record --filter, to pass a filter to tracepoints, if I could
instead of a filter expression pass, say, filter_bpf.o, that would seem
natural for me, i.e. no new option, just an alternative type of filter,
one way more powerful.
...
I'd
On 5/18/15 2:20 PM, Arnaldo Carvalho de Melo wrote:
Em Mon, May 18, 2015 at 02:05:35PM -0700, Alexei Starovoitov escreveu:
On 5/18/15 1:44 PM, Arnaldo Carvalho de Melo wrote:
perf record --filter, to pass a filter to tracepoints, if I could
instead of a filter expression pass, say
On 5/18/15 1:34 PM, Arnaldo Carvalho de Melo wrote:
So, lets take a look at: tools/include/linux/kernel.h... bummer, that
one wasn't yet moved from tools/perf/util/include/linux/kernel.h there.
Will do, and then it has this:
#ifndef min
#define min(x, y) ({\
...@huawei.com
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
Wang, please test the fix to double check.
kernel/events/core.c |3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 84231a146dd5..b27bdc8b3558 100644
--- a/kernel
On 4/16/15 11:13 AM, Vince Weaver wrote:
This manpage patch relates to the addition of the
PERF_EVENT_IOC_SET_BPF ioctl in the following commit:
Signed-off-by: Vince Weaver vincent.wea...@maine.edu
+.TP
+.BR PERF_EVENT_IOC_SET_BPF (since Linux 4.1)
+.\" commit
On Sat, Apr 18, 2015 at 04:37:35PM -0700, Kees Cook wrote:
On Mon, Apr 6, 2015 at 8:29 AM, Michael Kerrisk (man-pages)
mtk.manpa...@gmail.com wrote:
Hi Kees,
I recently was asked about the point below, and had to go check the code
to be sure, since the man page said nothing. It would be
On 4/7/15 12:04 AM, Stephen Rothwell wrote:
Hi all,
Today's linux-next merge of the tip tree got a conflict in
samples/bpf/Makefile between commit 91bc4822c3d6 (tc: bpf: add
checksum helpers) from the net-next tree and commit b896c4f95ab4
(samples/bpf: Add simple non-portable kprobe filter
On 4/7/15 12:11 AM, Stephen Rothwell wrote:
Hi all,
Today's linux-next merge of the tip tree got a conflict in
include/uapi/linux/bpf.h between commit 96be4325f443 (ebpf: add
sched_cls_type and map it to sk_filter's verifier ops), 03e69b508b6f
(ebpf: add prandom helper for packet sampling),
On 4/7/15 4:13 AM, Daniel Borkmann wrote:
[ Cc'ing Dave, fyi ]
On 04/07/2015 11:05 AM, Stephen Rothwell wrote:
On Tue, 07 Apr 2015 10:56:13 +0200 Daniel Borkmann
dan...@iogearbox.net wrote:
On 04/07/2015 10:48 AM, Ingo Molnar wrote:
* Stephen Rothwell s...@canb.auug.org.au wrote:
After
On Fri, Apr 3, 2015 at 8:51 AM, Tom Zanussi tom.zanu...@linux.intel.com wrote:
+static struct hist_trigger_entry *
+tracing_map_insert(struct tracing_map *map, void *key)
+{
+ u32 idx, key_hash, test_key;
+
+ key_hash = jhash(key, map->key_size, 0);
+ idx = key_hash >> (32 -
, since the whole kernel/trace/
directory is not compiled when !CONFIG_TRACING, but I missed the fact
that CONFIG_RING_BUFFER=y also enables kernel/trace/ which this ia64
.config exploited :( I'll add it to my set of configs.
Thanks again for the fix. It makes dependency clear.
Acked-by: Alexei Starovoitov
On 5/15/15 12:15 PM, Alexei Starovoitov wrote:
there is a race between perf_event_free_bpf_prog() and free_trace_kprobe():
__free_event()
event->destroy(event)
tp_perf_event_destroy()
perf_trace_destroy()
perf_trace_event_unreg()
which is dropping event->tp_event
is implemented as BPF_MAP_TYPE_PROG_ARRAY to reuse the 'map'
abstraction, its user space API and all of the verifier logic.
It's in the existing arraymap.c file, since several functions are
shared with the regular array map.
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
include/linux/bpf.h | 22
always
initialize the stack before use, so any residue in the stack left by
the current program is not going to be read. The same verifier checks are
done for the calls from the kernel into all bpf programs.
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
arch/x86/net/bpf_jit_comp.c | 150
On 5/19/15 8:48 PM, Wangnan (F) wrote:
+
+# Version of eBPF elf file
+FILE_VERSION = 1
what is that comment supposed to mean?
The format of eBPF objects can be improved further. A version number
here is a precaution for backward compatibility. However, this patch
doesn't utilize it.
I'd
,
size=512)
sh-369 [000] d... 4.891747: : write(fd=1, buf=023d3000,
size=512)
sh-369 [000] d... 4.891747: : read(fd=1, buf=023d3000,
size=512)
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile |4 +++
samples/bpf/bpf_helpers.h
shortcomings, and v5,v6 fix a few more bugs.
This last tail_call_v6 approach seems to be the best.
Alexei Starovoitov (4):
bpf: allow bpf programs to tail-call other bpf programs
x86: bpf_jit: implement bpf_tail_call() helper
samples/bpf: bpf_tail_call example for tracing
samples/bpf
On 5/19/15 3:05 PM, Arnaldo Carvalho de Melo wrote:
Ok, can you point me to this bpf_file.c, an example? So that we can talk
about the parts of it that would be short circuited when not loading the
bpf_file.o, etc.
There are different use cases that would fit in different perf commands.
- 1st
On 5/19/15 5:11 PM, Andy Lutomirski wrote:
On Tue, May 19, 2015 at 4:59 PM, Alexei Starovoitov a...@plumgrid.com wrote:
bpf_tail_call() arguments:
ctx - context pointer
jmp_table - one of BPF_MAP_TYPE_PROG_ARRAY maps used as the jump table
index - index in the jump table
In this implementation
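A minimal sketch of the call from a program's point of view, following the samples/bpf conventions (SEC() and struct bpf_map_def are assumed to come from bpf_helpers.h; the section name and slot number are arbitrary):

#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") jmp_table = {
	.type = BPF_MAP_TYPE_PROG_ARRAY,
	.key_size = sizeof(__u32),
	.value_size = sizeof(__u32),
	.max_entries = 8,
};

SEC("socket")
int main_prog(struct __sk_buff *skb)
{
	/* jump to whatever program user space stored in slot 1;
	 * on success control never returns here */
	bpf_tail_call(skb, &jmp_table, 1);

	/* fall through: slot 1 was empty */
	return 0;
}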
can be parsed in a similar manner.
Note, the tail_call_cnt dynamic check limits the number of tail calls to 32.
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
samples/bpf/Makefile |4 +
samples/bpf/bpf_helpers.h |2 +
samples/bpf/sockex3_kern.c | 303
On 5/19/15 5:13 PM, Andy Lutomirski wrote:
IMO this is starting to get a bit ugly. Would it be possible to have
the program dereference the subprogram reference itself from the jump
table? There would have to be a verifier type that represents a
reference to a program tail-call entry point,