On 5/20/15 9:05 AM, Andy Lutomirski wrote:
What causes the stack pointer to be right? Is there some reason that
the stack pointer is the same no matter where you are in the generated
code?
that's why I said 'it's _roughly_ expressed in C' this way.
Stack pointer doesn't change. It uses the
On 5/19/15 2:10 PM, Arnaldo Carvalho de Melo wrote:
This all should use infrastructure in perf for symbol resolving,
callchains, etc.
yes. 100%
Right, but if we do a:
perf script -i perf.data bpf_file.c
Then there would be a short circuit effect of just doing the
aggregation and/or
On 5/19/15 9:40 AM, Arnaldo Carvalho de Melo wrote:
On Wed, May 20, 2015 at 01:04:48AM +0900, Namhyung Kim wrote:
On Tue, May 19, 2015 at 10:44:58AM -0300, Arnaldo Carvalho de Melo wrote:
On Mon, May 18, 2015 at 02:45:58PM -0700, Alexei Starovoitov wrote:
On 5/18/15 2:20 PM, Arnaldo
On 6/4/15 2:48 PM, Masami Hiramatsu wrote:
On 2015/06/05 1:22, Alexei Starovoitov wrote:
On 6/4/15 7:04 AM, Ingo Molnar wrote:
# perf record -e bpf_source.c cmdline
to create an eBPF filter from source,
Use
# perf record -e bpf_object.o cmdline
to create an eBPF filter from an object
On 6/4/15 7:04 AM, Ingo Molnar wrote:
# perf record -e bpf_source.c cmdline
to create an eBPF filter from source,
Use
# perf record -e bpf_object.o cmdline
to create an eBPF filter from an object intermediate.
Use
# perf bpf compile bpf_source.c --kbuild=kernel-build-dir -o bpf_object.o
to
On 6/4/15 3:56 AM, Markos Chandras wrote:
Here are some fixes for MIPS/BPF. The first 5 patches do some cleanup
and lay the groundwork for the final one which introduces assembly helpers
for MIPS and MIPS64. The goal is to speed up certain operations that do
not need to go through the common C
On 6/9/15 2:44 PM, Arnaldo Carvalho de Melo wrote:
btw we've been thinking about how to make truly global programs
and maps, so that they can be used in the 'perf probe' interface.
Right now in 'tc' we're using bpf_agent. It's a user space
daemon that keeps prog_fd and map_fds and passes them to other
On 6/8/15 10:50 PM, Wang Nan wrote:
Although previous patch allows setting BPF compiler related options in
perfconfig, in some ad-hoc situations it is still necessary to pass options
through the cmdline. This patch introduces 4 options to 'perf record' for
this purpose: --clang-path, --clang-opt,
On 6/8/15 10:50 PM, Wang Nan wrote:
perf_bpf_config() is added to parse the 'bpf' section in the perf config file.
Following is an example:
[bpf]
clang-path = /llvm/bin/x86_64-linux-clang
llc-path = /llvm/bin/x86_64-linux-llc
clang-opt = -nostdinc -isystem /llvm/lib/clang/include
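The [bpf] section above is plain key = value perfconfig syntax. As a rough illustration of the lookup perf_bpf_config() has to do, here is a toy sketch in C; the function name `bpf_section_lookup` and all details are made up for illustration, this is not perf's actual parser:

```c
#include <assert.h>
#include <string.h>

/* Toy sketch: find "key = value" inside a "[bpf]" section.
 * Illustrative only; not perf's config parser. */
static int bpf_section_lookup(const char *cfg, const char *key,
			      char *out, size_t out_sz)
{
	const char *p = strstr(cfg, "[bpf]");

	if (!p)
		return -1;
	p += strlen("[bpf]");
	while (*p) {
		/* skip leading whitespace, stop at next section header */
		while (*p == '\n' || *p == ' ' || *p == '\t')
			p++;
		if (*p == '[' || !*p)
			break;
		const char *eol = strchr(p, '\n');
		size_t len = eol ? (size_t)(eol - p) : strlen(p);
		size_t klen = strlen(key);

		if (!strncmp(p, key, klen)) {
			const char *v = p + klen;

			/* skip " = " between key and value */
			while (*v == ' ' || *v == '=')
				v++;
			size_t vlen = len - (size_t)(v - p);

			if (vlen >= out_sz)
				vlen = out_sz - 1;
			memcpy(out, v, vlen);
			out[vlen] = '\0';
			return 0;
		}
		if (!eol)
			break;
		p = eol + 1;
	}
	return -1;
}
```

For the example config above, looking up "clang-path" would yield /llvm/bin/x86_64-linux-clang.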
On 6/8/15 10:50 PM, Wang Nan wrote:
+struct bpf_param bpf_param = {
+ .clang_path = "clang",
+ .llc_path = "llc",
+ .clang_opt = "",
+ .llc_opt = "",
+};
the defaults are ok-ish, but llc is never in PATH.
So most likely it won't work out of the box.
I think the cleanest option
On 6/8/15 10:50 PM, Wang Nan wrote:
# perf record --event lock_page.c ls /
Added new event:
perf_bpf_probe:lock_page (on __lock_page)
You can now use it in all perf tools, such as:
agree with Arnaldo. The output is misleading.
All these events will disappear when 'perf
On 6/9/15 5:06 PM, Wangnan (F) wrote:
On 2015/6/10 5:48, Alexei Starovoitov wrote:
On 6/8/15 10:50 PM, Wang Nan wrote:
+struct bpf_param bpf_param = {
+.clang_path = "clang",
+.llc_path = "llc",
+.clang_opt = "",
+.llc_opt = "",
+};
the defaults are ok-ish, but llc is never in PATH
On 6/9/15 5:47 PM, Wangnan (F) wrote:
On 2015/6/10 7:43, Alexei Starovoitov wrote:
On 6/8/15 10:50 PM, Wang Nan wrote:
perf_bpf_config() is added to parse the 'bpf' section in the perf config file.
Following is an example:
[bpf]
clang-path = /llvm/bin/x86_64-linux-clang
llc-path = /llvm
On 6/9/15 5:17 PM, Wangnan (F) wrote:
Could you please give me some URL to LLVM git repositories so I can
track your work on it?
traffic on llvm/clang is very heavy. probably as much as lkml.
you can subscribe to llvmweekly instead.
In the future I'll cc you on new things in that area.
On 6/9/15 7:23 PM, Wangnan (F) wrote:
I'll embed this script in my next version.
fine, let's use the script for now and inform the user
that they would need to manually copy the flags into .perfconfig
On 6/4/15 3:17 AM, Wangnan (F) wrote:
Hi all,
I'd like to share my experience of using the 'perf record' BPF filter in a
real use case, to show the power and shortcomings of my patch series:
thanks for sharing!
Here is another inconvenience. Currently I am only concerned with the write
syscall issued by iozone.
On Sun, Jun 21, 2015 at 09:41:03PM +0200, Nicolai Stange wrote:
Fix compilation failure with allmodconfig on ARCH=um:
lib/test_bpf.c:50:0: warning: R8 redefined
#define R8 BPF_REG_8
^
In file included from arch/um/include/asm/ptrace-generic.h:11:0,
from
changed, 206 insertions(+)
create mode 100644 samples/bpf/lathist_kern.c
create mode 100644 samples/bpf/lathist_user.c
Thanks. That's a useful example.
Acked-by: Alexei Starovoitov a...@plumgrid.com
Dave,
this patch is for net-next and I hope it's not too late
for this merge window
On Thu, May 14, 2015 at 07:37:58PM -0300, Arnaldo Carvalho de Melo wrote:
From: He Kuang heku...@huawei.com
It is not easy for users to get the accurate byte offset or the line
number where a local variable can be probed.
With the '--range' option, local variables in the scope of the probe
Fixes: e54bcde3d69d (arm64: eBPF JIT compiler)
Signed-off-by: Xi Wang xi.w...@gmail.com
Nice catch! Looks good to me.
Acked-by: Alexei Starovoitov a...@plumgrid.com
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More
On 6/24/15 5:31 AM, Wang Nan wrote:
This patch renames bpf__for_each_program() to bpf__for_each_tev() and
makes it iterate over 'struct probe_trace_event' instead of
'struct bpf_program' during the loop. The callback (add_bpf_event())
now gets rid of the iteration.
This is preparation for
On 6/24/15 5:31 AM, Wang Nan wrote:
Extract code for loading a 'struct bpf_insn' array into kernel to
load_program() and makes bpf_program__load() call it. Now we have a
function that loads instructions into the kernel. It will be used by further
patches, which create different instances from a program
On 6/24/15 5:31 AM, Wang Nan wrote:
This patch generates a prologue for a BPF program which fetches arguments
for it. With this patch, the program can have arguments as follows:
SEC(lock_page=__lock_page page->flags)
int lock_page(struct pt_regs *ctx, int err, unsigned long flags)
{
On 6/25/15 3:37 AM, Wang Nan wrote:
Before this patch, add_perf_probe_events() inits symbol maps only for
uprobes if the first pev passed to it is a uprobe event. However, with
the incoming BPF uprobe support, now it will be possible to pass an
array with combined kprobe and uprobe events to
On 6/24/15 5:31 AM, Wang Nan wrote:
This patch generates a prologue for each 'struct probe_trace_event' to
fetch arguments for BPF programs.
After bpf__probe(), iterate over each program to check whether a
prologue is required. If none of the tevs have arguments, simply skip
preprocessor hooking. For
upper bits.
Cc: Zi Shen Lim zlim@gmail.com
Cc: Alexei Starovoitov a...@plumgrid.com
Fixes: e54bcde3d69d (arm64: eBPF JIT compiler)
Signed-off-by: Xi Wang xi.w...@gmail.com
Acked-by: Alexei Starovoitov a...@plumgrid.com
The current testsuite catches the 16-bit bugs. I'll send a separate
patch
On 6/24/15 5:31 AM, Wang Nan wrote:
The core stuff in this series resides in 38/49 - 49/49, which allows
users to access kernel data through parameters of eBPF programs. Now
it is possible to write eBPF programs like this:
SEC(get_superblock=journal_get_superblock journal->j_errno)
int
On 6/25/15 3:37 AM, Wang Nan wrote:
This patch appends new syntax to the BPF object section name to support
probing at uprobe events. Now we can use a BPF program like this:
SEC(
target:/lib64/libc.so.6\n
libcwrite=__write
)
int libcwrite(void *ctx)
{
return 1;
}
Where, in section
it possible to profile user space programs
and kernel events together using BPF.
Signed-off-by: Wang Nan wangn...@huawei.com
That's great.
Acked-by: Alexei Starovoitov a...@plumgrid.com
On Thu, Jun 18, 2015 at 08:31:45AM +, Wang Nan wrote:
The original code has a problem, which causes the following code to fail the verifier:
r1 = r10
r1 -= 8
r2 = 8
r3 = unsafe pointer
call BPF_FUNC_probe_read -- R1 type=inv expected=fp
However, by replacing 'r1 -= 8' with 'r1 += -8' the
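The failure mode in the snippet above can be mimicked with a toy register-type tracker. This is an illustration only, not the kernel verifier (which lives in kernel/bpf/verifier.c); the type names and functions are simplified inventions. The idea: the old verifier tracked constant offsets only through BPF_ADD on frame/stack pointers, so BPF_SUB scalarized the register into 'inv' and the later probe_read call saw "R1 type=inv expected=fp":

```c
#include <assert.h>

/* Toy model of old verifier pointer tracking (NOT kernel code). */
enum reg_type { FRAME_PTR, PTR_TO_STACK, INV };

struct reg {
	enum reg_type type;
	int off;
};

/* ADD with an immediate keeps pointer type and adjusts the offset */
static struct reg alu_add_imm(struct reg r, int imm)
{
	if (r.type == FRAME_PTR || r.type == PTR_TO_STACK)
		return (struct reg){ .type = PTR_TO_STACK, .off = r.off + imm };
	return (struct reg){ .type = INV };
}

/* SUB on a pointer was not tracked: result becomes an unknown scalar */
static struct reg alu_sub_imm(struct reg r, int imm)
{
	(void)r;
	(void)imm;
	return (struct reg){ .type = INV };
}
```

So 'r1 = r10; r1 -= 8' yields an INV register (rejected where a pointer is expected), while 'r1 = r10; r1 += -8' yields a stack pointer at offset -8 (accepted).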
On 6/18/15 4:40 AM, Daniel Wagner wrote:
BPF offers another way to generate latency histograms. We attach
kprobes at trace_preempt_off and trace_preempt_on and calculate the
time between seeing the off/on transitions.
The first array is used to store the start time stamp. The key is the
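The two-array scheme being described can be sketched as a userspace toy; the names and sizes here are illustrative, not the actual samples/bpf code. One array keyed by CPU holds the timestamp taken at preempt-off; a second array counts events per log2(latency) bucket at preempt-on:

```c
#include <assert.h>
#include <stdint.h>

#define NR_CPUS   4
#define NR_SLOTS  64

static uint64_t start_ts[NR_CPUS];	/* first map: start time stamps */
static uint64_t lat_hist[NR_SLOTS];	/* second map: log2 buckets */

static unsigned int log2_u64(uint64_t v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

static void on_preempt_off(unsigned int cpu, uint64_t now_ns)
{
	start_ts[cpu] = now_ns;	/* remember when preemption was disabled */
}

static void on_preempt_on(unsigned int cpu, uint64_t now_ns)
{
	unsigned int slot = log2_u64(now_ns - start_ts[cpu]);

	if (slot >= NR_SLOTS)
		slot = NR_SLOTS - 1;
	lat_hist[slot]++;	/* bump the bucket for this latency */
}
```

A latency of 4096 ns, for example, lands in bucket 12.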
On 6/18/15 11:07 PM, Daniel Wagner wrote:
I'm only a bit suspicious of kprobes, since we have:
NOKPROBE_SYMBOL(preempt_count_sub)
but trace_preempt_on(), called by preempt_count_sub(),
doesn't have this mark...
The original commit indicates that anything called from
preempt_disable() should also be
On 6/26/15 11:44 PM, He Kuang wrote:
@@ -1141,13 +1141,13 @@ kprobe_perf_func(struct trace_kprobe *tk, struct
pt_regs *regs)
int size, __size, dsize;
int rctx;
- if (prog && !trace_call_bpf(prog, regs))
- return;
-
head =
On 6/26/15 7:15 AM, Wang Nan wrote:
This is the 9th version which tries to introduce eBPF programs to perf.
This patchset is combined with 2 patchsets I posted:
1. V8 of 'perf tools: filtering events using eBPF programs';
2. 'tracing, perf tools: Attach BPF program on uprobe events'
And
On 6/11/15 12:25 AM, Daniel Wagner wrote:
In both cases BPF or based on Tom's 'hist' triggers' patches, there is
some trickery necessary to get it working. While the first approach
has more flexibility in what you want to measure or how you want to
present it, I suspect it will be harder to get it
On 6/16/15 9:05 AM, Paul E. McKenney wrote:
On Tue, Jun 16, 2015 at 11:37:38AM -0400, Steven Rostedt wrote:
On Tue, 16 Jun 2015 05:27:33 -0700
Paul E. McKenney paul...@linux.vnet.ibm.com wrote:
On Mon, Jun 15, 2015 at 10:45:05PM -0700, Alexei Starovoitov wrote:
On 6/15/15 7:14 PM, Paul E
On 6/16/15 2:19 AM, Daniel Borkmann wrote:
if you really want to, you
could go via skb->sk->sk_socket->file and then retrieve credentials
from there for egress side (you can have a look at xt_owner). You'd
need a different *_proto helper for tc in that case, which would
then map to
On 6/16/15 5:38 AM, Daniel Wagner wrote:
static int free_thread(void *arg)
+{
+ unsigned long flags;
+ struct htab_elem *l;
+
+ while (!kthread_should_stop()) {
+ spin_lock_irqsave(&elem_freelist_lock, flags);
+ while (!list_empty(&elem_freelist)) {
+
On 6/16/15 5:47 PM, Steven Rostedt wrote:
Do what I do in tracing. Use a bit (per cpu?) test.
Add the element to the list (that will be a cmpxchg, but I'm not sure
you can avoid it), then check the bit to see if the irq work has already
been activated. If not, then activate the irq work and set
*buf, int size_of_buf)
stores current->comm into buf
They can be used from the programs attached to TC as well to classify packets
based on current task fields.
Update tracex2 example to print histogram of write syscalls for each process
instead of aggregated for all.
Signed-off-by: Alexei
On 6/12/15 3:08 PM, Andy Lutomirski wrote:
On Fri, Jun 12, 2015 at 2:40 PM, Alexei Starovoitov a...@plumgrid.com wrote:
eBPF programs attached to kprobes need to filter based on
current->pid, uid and other fields, so introduce helper functions:
u64 bpf_get_current_pid_tgid(void)
Return: current
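bpf_get_current_pid_tgid() packs both ids into one u64, with the tgid in the upper 32 bits and the pid (thread id) in the lower 32. A userspace illustration of that packing; the helper names here are made up for the sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative packing, mirroring (u64)tgid << 32 | pid */
static uint64_t pack_pid_tgid(uint32_t tgid, uint32_t pid)
{
	return ((uint64_t)tgid << 32) | pid;
}

static uint32_t tgid_of(uint64_t v)	/* userspace "pid" */
{
	return (uint32_t)(v >> 32);
}

static uint32_t pid_of(uint64_t v)	/* userspace "tid" */
{
	return (uint32_t)v;
}
```

A program filtering on the whole process can compare `v >> 32`; filtering on one thread compares the full u64.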
It's useful to do per-cpu histograms.
Suggested-by: Daniel Wagner daniel.wag...@bmw-carit.de
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
kernel/trace/bpf_trace.c |2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index
bpf_trace_printk() is a helper function used to debug eBPF programs.
Let socket and TC programs use it as well.
Note, it's a DEBUG ONLY helper. If it's used in a program,
the kernel will print a warning banner to make sure users don't use
it in production.
Signed-off-by: Alexei Starovoitov
Introduce new helpers to access 'struct task_struct'->pid, tgid, uid, gid, comm
fields in tracing and networking.
Share bpf_trace_printk() and bpf_get_smp_processor_id() helpers between
tracing and networking.
Alexei Starovoitov (3):
bpf: introduce current->pid, tgid, uid, gid, comm accessors
On 6/17/15 2:05 AM, Daniel Wagner wrote:
Steven's suggestion of deferring the work via irq_work results in the same
stack trace. (Now I get cold feet, without the nice heat from the CPU
busy looping...)
That one is still not working. It also makes the system really really slow.
I guess I still do
On 6/17/15 1:37 PM, Paul E. McKenney wrote:
On Wed, Jun 17, 2015 at 11:39:29AM -0700, Alexei Starovoitov wrote:
On 6/17/15 2:05 AM, Daniel Wagner wrote:
Steven's suggestion of deferring the work via irq_work results in the same
stack trace. (Now I get cold feet, without the nice heat from
On 6/17/15 2:36 PM, Paul E. McKenney wrote:
Well, you do need to have something in each element to allow them to be
tracked. You could indeed use llist_add() to maintain the per-CPU list,
and then use llist_del_all() to bulk-remove all the elements from the per-CPU
list. You can then pass each
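A userspace analog of the llist scheme Paul describes; this is a sketch, not the kernel's llist.h. Producers push nodes with a cmpxchg loop, and the reclaimer detaches the entire list in one atomic exchange, so no per-element locking is needed:

```c
#include <stdatomic.h>
#include <stddef.h>

struct lnode {
	struct lnode *next;
	int val;
};

static _Atomic(struct lnode *) free_head;

/* llist_add() analog: lock-free push onto the list head */
static void lpush(struct lnode *n)
{
	struct lnode *old = atomic_load(&free_head);

	do {
		n->next = old;
	} while (!atomic_compare_exchange_weak(&free_head, &old, n));
}

/* llist_del_all() analog: detach the whole list in one exchange */
static struct lnode *ldrain(void)
{
	return atomic_exchange(&free_head, NULL);
}
```

The reclaimer then walks the detached list at leisure and frees each element, never racing with concurrent pushers.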
On 6/16/15 10:37 AM, Steven Rostedt wrote:
+ kfree(l);
that's not right, since such a thread defeats the RCU protection of lookups.
We need either kfree_rcu/call_rcu or synchronize_rcu.
Obviously the former is preferred; that's why I'm still digging into it.
Probably a thread
On 6/12/15 9:11 PM, pi3orama wrote:
Sent from my iPhone
On Jun 13, 2015, at 10:31, Alexei Starovoitov a...@plumgrid.com wrote:
On 6/11/15 10:35 PM, Wang Nan wrote:
# Path to clang. If omitted, it is searched for in $PATH.
clang-path = /path/to/clang
I think this bit and search_program() from the next
On 6/12/15 7:52 PM, pi3orama wrote:
Sent from my iPhone
On Jun 13, 2015, at 10:31, Alexei Starovoitov a...@plumgrid.com wrote:
On 6/11/15 10:35 PM, Wang Nan wrote:
# Path to clang. If omitted, it is searched for in $PATH.
clang-path = /path/to/clang
I think this bit and search_program() from the next
On 6/12/15 3:54 PM, Andy Lutomirski wrote:
On Fri, Jun 12, 2015 at 3:44 PM, Alexei Starovoitov a...@plumgrid.com wrote:
On 6/12/15 3:08 PM, Andy Lutomirski wrote:
On Fri, Jun 12, 2015 at 2:40 PM, Alexei Starovoitov a...@plumgrid.com
wrote:
eBPF programs attached to kprobes need to filter
On 6/12/15 4:25 PM, Andy Lutomirski wrote:
It's a dangerous tool. Also, shouldn't the returned uid match the
namespace of the task that installed the probe, not the task that's
being probed?
so leaking info to unprivileged apps is the concern?
The whole thing is for root only as you know.
The
On 6/12/15 4:47 PM, Andy Lutomirski wrote:
On Fri, Jun 12, 2015 at 4:38 PM, Alexei Starovoitov a...@plumgrid.com wrote:
On 6/12/15 4:25 PM, Andy Lutomirski wrote:
It's a dangerous tool. Also, shouldn't the returned uid match the
namespace of the task that installed the probe, not the task
On 6/11/15 10:35 PM, Wang Nan wrote:
# Path to clang. If omitted, it is searched for in $PATH.
clang-path = /path/to/clang
I think this bit and search_program() from the next patch are
overly flexible. It's always delicate to search file paths.
Unless this is really needed, I would drop
bpf_trace_printk() is a helper function used to debug eBPF programs.
Let socket and TC programs use it as well.
Note, it's a DEBUG ONLY helper. If it's used in a program,
the kernel will print a warning banner to make sure users don't use
it in production.
Signed-off-by: Alexei Starovoitov
*buf, int size_of_buf)
stores current->comm into buf
They can be used from the programs attached to TC as well to classify packets
based on current task fields.
Update tracex2 example to print histogram of write syscalls for each process
instead of aggregated for all.
Signed-off-by: Alexei
v1->v2: switched to init_user_ns from current_user_ns as suggested by Andy
Introduce new helpers to access 'struct task_struct'->pid, tgid, uid, gid, comm
fields in tracing and networking.
Share bpf_trace_printk() and bpf_get_smp_processor_id() helpers between
tracing and networking.
Alexei
It's useful to do per-cpu histograms.
Suggested-by: Daniel Wagner daniel.wag...@bmw-carit.de
Signed-off-by: Alexei Starovoitov a...@plumgrid.com
---
v1->v2: no changes
kernel/trace/bpf_trace.c |2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace
On 6/12/15 12:27 PM, Arnaldo Carvalho de Melo wrote:
Alexei, is this already possible with eBPF?
I want to decode that attr_uptr thing :-)
yes, it's already possible :)
Here is a working example from our experimental c+python thingy:
#!/usr/bin/env python
from bpf import BPF
from subprocess
On 6/12/15 5:03 PM, Andy Lutomirski wrote:
On Fri, Jun 12, 2015 at 4:55 PM, Alexei Starovoitov a...@plumgrid.com wrote:
On 6/12/15 4:47 PM, Andy Lutomirski wrote:
On Fri, Jun 12, 2015 at 4:38 PM, Alexei Starovoitov a...@plumgrid.com
wrote:
On 6/12/15 4:25 PM, Andy Lutomirski wrote:
It's
On 6/12/15 5:24 PM, Andy Lutomirski wrote:
so what specifically are you proposing?
Use from_kuid(init_user_ns,...) instead?
That seems reasonable to me. After all, you can't install one of
these probes from a non-init userns.
ok. will respin with that change.
On 6/15/15 4:07 PM, Paul E. McKenney wrote:
Oh... One important thing is that both call_rcu() and kfree_rcu()
use per-CPU variables, managing a per-CPU linked list. This is why
they disable interrupts. If you do another call_rcu() in the middle
of the first one in just the wrong place, you
On 6/15/15 4:01 PM, David Miller wrote:
Although I agree with the sentiment that this thing can cause
surprising results and can be asking for trouble.
If someone wants to filter traffic by UID they might make
a simple ingress TC ebpf program using these new interfaces
and expect it to work.
On 6/15/15 7:14 PM, Paul E. McKenney wrote:
Why do you believe that it is better to fix it within call_rcu()?
found it:
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 8cf7304b2867..a3be09d482ae 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -935,9 +935,9 @@ bool notrace
On 6/15/15 11:06 PM, Daniel Wagner wrote:
with the above 'fix' the trace.patch is now passing.
It still crashes for me with the original test program
[ 145.908013] [810d1da1] ? __rcu_reclaim+0x101/0x3d0
[ 145.908013] [810d1ca0] ? rcu_barrier_func+0x250/0x250
[ 145.908013]
On 6/15/15 11:34 PM, Daniel Wagner wrote:
On 06/16/2015 08:25 AM, Alexei Starovoitov wrote:
On 6/15/15 11:06 PM, Daniel Wagner wrote:
with the above 'fix' the trace.patch is now passing.
It still crashes for me with the original test program
[ 145.908013] [810d1da1] ? __rcu_reclaim
On 6/11/15 10:35 PM, Wang Nan wrote:
This is the 7th version which tries to introduce eBPF programs to perf.
It enables 'perf record' to filter events using eBPF programs like:
# perf record --event bpf-file.c sleep 1
and
# perf record --event bpf-file.o sleep 1
This patch series is
On 6/11/15 11:58 PM, Wangnan (F) wrote:
Is it possible to make 'uapi/linux/bpf.h' hold something useful for
eBPF programming, so we can get rid of bpf_helpers?
that won't be right. uapi headers are supposed to have things
needed for both kernel and user space, not user space only.
I think it's
On 6/12/15 7:33 AM, Daniel Wagner wrote:
On 06/12/2015 08:12 AM, Daniel Wagner wrote:
On 06/12/2015 12:08 AM, Alexei Starovoitov wrote:
On 6/11/15 12:25 AM, Daniel Wagner wrote:
If you have any suggestions on where to look, I'm all ears.
My stack traces look like:
Running with 10*40 (== 400
Hi Paul,
I've been debugging the issue reported by Daniel:
http://thread.gmane.org/gmane.linux.kernel/1974304/focus=1974304
and it seems I narrowed it down to recursive call_rcu.
From trace_preempt_on() I'm doing:
e = kmalloc(sizeof(struct elem), GFP_ATOMIC)
kfree_rcu(e, rcu)
which is causing all
On 5/28/15 6:01 AM, He Kuang wrote:
I don't think you can break it down in two steps like this.
There is no such thing as 'calling regs'. x86_32 with ax,dx,cx
are not 'calling regs'. 64-bit values will be passed in a pair.
Only 'pt_regs + arch + func_proto + asmlinkage' makes sense
from the
On Thu, May 28, 2015 at 03:14:44PM +0800, Wangnan (F) wrote:
On 2015/5/28 14:09, Alexei Starovoitov wrote:
On Thu, May 28, 2015 at 11:09:50AM +0800, Wangnan (F) wrote:
However this breaks a rule in the current design: the opening phase doesn't
talk to the kernel with sys_bpf() at all. All related
On 5/26/15 7:27 PM, He Kuang wrote:
hi, Alexei
On 2015/5/27 1:50, Alexei Starovoitov wrote:
On 5/25/15 1:33 AM, He Kuang wrote:
Right, I learned regparm(3) is mandatory on x86_32; according to its rules,
the first three args will go to regparm(ax, dx, cx). But we should not
refer to arg1~3 as ax
On 5/29/15 4:55 PM, Masami Hiramatsu wrote:
On 2015/05/29 15:30, He Kuang wrote:
hi, Alexei
On 2015/5/29 2:10, Alexei Starovoitov wrote:
On 5/28/15 6:01 AM, He Kuang wrote:
I don't think you can break it down in two steps like this.
There is no such thing as 'calling regs'. x86_32 with ax
On Wed, May 27, 2015 at 05:19:51AM +, Wang Nan wrote:
This patch creates maps based on the 'map' section in the object file using
bpf_create_map(), and stores the fds into an array in
'struct bpf_object'. Since the byte order of the object may differ
from the host's, swap the map definition before
in this context.
Acked-by: Alexei Starovoitov a...@plumgrid.com
btw, you didn't cc me on this set, luckily I found it on lkml.
to the correct byte order, we are unable to deal with
endianness in the code logic generated by LLVM.
Therefore, libbpf should simply reject mismatched ELF objects, and let
LLVM create good code.
Signed-off-by: Wang Nan wangn...@huawei.com
lgtm
Acked-by: Alexei Starovoitov a...@plumgrid.com
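The rejection check described above amounts to comparing the object's ELF identification byte against the host byte order. A sketch of that check; illustrative only, not libbpf's actual code:

```c
#include <elf.h>
#include <endian.h>

/* Return 1 if the object's EI_DATA byte matches the host byte order,
 * 0 otherwise (mismatched objects should be rejected, not byte-swapped). */
static int obj_endianness_matches_host(const unsigned char *e_ident)
{
#if __BYTE_ORDER == __LITTLE_ENDIAN
	return e_ident[EI_DATA] == ELFDATA2LSB;
#else
	return e_ident[EI_DATA] == ELFDATA2MSB;
#endif
}
```

Exactly one of ELFDATA2LSB/ELFDATA2MSB matches on any given host, so a mismatched object is refused at load time instead of producing silently wrong map data.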
structure of the 'map' section in the ELF object is
not the concern of bpf.[ch].
We first introduce bpf_create_map().
Note that, since functions in bpf.[ch] are wrappers of sys_bpf, they
don't use OO-style naming.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Alexei Starovoitov a...@plumgrid.com
On Thu, May 28, 2015 at 10:03:04AM +0800, Wangnan (F) wrote:
On 2015/5/28 9:53, Alexei Starovoitov wrote:
On Wed, May 27, 2015 at 05:19:45AM +, Wang Nan wrote:
If maps are used by eBPF programs, corresponding object file(s) should
contain a section named 'map', which contains map
On Wed, May 27, 2015 at 05:19:44AM +, Wang Nan wrote:
Expand bpf_obj_elf_collect() to collect license and kernel version
information in the eBPF object file. The eBPF object file should have a section
named 'license', which contains a string. It should also have a section
named 'version', which contains
On Wed, May 27, 2015 at 05:19:53AM +, Wang Nan wrote:
bpf_load_program() can be used to load a bpf program into the kernel. To make
loading faster, first try to load without a logbuf. Try again with a logbuf
if the first try fails.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Alexei
On 5/27/15 1:43 AM, Michael Kerrisk (man-pages) wrote:
Hello Alexei,
I took the draft 3 of the bpf(2) man page that you sent back in March
and did some substantial editing to clarify the language and add a
few technical details. Could you please check the revised version
below, to ensure I did
On Thu, May 28, 2015 at 11:34:37AM +0800, Wangnan (F) wrote:
...
+static int
+bpf_object__init_kversion(struct bpf_object *obj,
+ void *data, size_t size)
+{
+ u32 kver;
+ if (size < sizeof(kver)) {
shouldn't it be '!=' ?
Is it possible that LLVM pads 'version'
.
Signed-off-by: Wang Nan wangn...@huawei.com
Acked-by: Alexei Starovoitov a...@plumgrid.com
On Thu, May 28, 2015 at 11:09:50AM +0800, Wangnan (F) wrote:
However this breaks a rule in the current design: the opening phase doesn't
talk to the kernel with sys_bpf() at all. All related stuff is done in the loading
phase. This principle ensures that on every system, no matter whether it supports
sys_bpf()
On Wed, May 27, 2015 at 05:19:45AM +, Wang Nan wrote:
If maps are used by eBPF programs, corresponding object file(s) should
contain a section named 'map', which contains map definitions. This
patch copies the data of the whole section. Map data parsing should
happen just before map
- 22/30 introduce libbpf, which first parses eBPF object
files, then loads maps and programs into the kernel.
the libbpf side looks good.
For the patches that I haven't explicitly acked yet:
Acked-by: Alexei Starovoitov a...@plumgrid.com
Patches 23/30 - 30/30 are perf side modifications, introducing
On 6/26/15 8:25 AM, Xi Wang wrote:
Currently ALU_END_FROM_BE 32 and ALU_END_FROM_LE 32 do not test whether
the upper bits of the result are zero (the arm64 JIT had such bugs).
Extend the two tests to catch this.
Cc: Alexei Starovoitov a...@plumgrid.com
Signed-off-by: Xi Wang xi.w...@gmail.com
looks
On 7/1/15 8:38 PM, He Kuang wrote:
On 2015/7/2 10:48, Alexei Starovoitov wrote:
On 7/1/15 4:58 AM, Peter Zijlstra wrote:
But why create a separate trace buffer, it should go into the regular
perf buffer.
+1
I think
+static char __percpu *perf_extra_trace_buf[PERF_NR_CONTEXTS
On 6/30/15 7:57 PM, He Kuang wrote:
When we add a kprobe point and record events by perf, the execution path
of all threads on each cpu will enter this point, but perf may only
record events on a particular thread or cpu at this kprobe point; a
check on the call->perf_events list filters out the
On 7/1/15 4:58 AM, Peter Zijlstra wrote:
But why create a separate trace buffer, it should go into the regular
perf buffer.
+1
I think
+static char __percpu *perf_extra_trace_buf[PERF_NR_CONTEXTS];
is redundant.
It adds quite a bit of unnecessary complexity to the whole patch set.
Also the
On 7/1/15 10:52 PM, Wangnan (F) wrote:
I'd like to discuss with you the correctness of our
understanding. Do you have any strong reason to put BPF filters at such
an early stage?
the obvious reason is performance.
It is so much faster to run generated
'if (bpf_get_current_pid() !=
On 7/2/15 2:24 AM, Wangnan (F) wrote:
Yes, by using perf_trace_buf_prepare() + perf_trace_buf_submit() in a
helper function and letting the bpf program always return 0, we can make data
collected by BPF programs output into samples, if the following problems
are solved:
1. In bpf program there's no way to
On 7/2/15 6:50 AM, He Kuang wrote:
When we add a kprobe point and record events by perf, the execution path
of all threads on each cpu will enter this point, but perf may only
record events on a particular thread or cpu at this kprobe point; a
check on the call->perf_events list filters out the
On 6/11/15 12:35 AM, Wangnan (F) wrote:
Now I'm trying this:
$CLANG_EXEC $CLANG_OPTIONS $KERNEL_INC_OPTIONS
-Wno-unused-value -Wno-pointer-sign
-working-directory $WORKING_DIR
-c "$CLANG_SOURCE" -march=bpf -O2 -o -
WORKING_DIR is appended because we will get
On 5/22/15 10:23 AM, Jiri Olsa wrote:
+
+struct bpf_object *bpf_open_object(const char *path)
+{
another suggestion for the namespace.. Arnaldo forces us ;-)
to use the object name first plus '__(method name)' for
interface functions so that would be:
bpf_object__open
bpf_object__close
On 5/20/15 5:24 PM, Wangnan (F) wrote:
Do you think we should classify kprobe/socket programs in libbpf layer
instead of perf?
In my current implementation, type of a program is determined by perf by
parsing names of
corresponding sections. Format of section names should be part of
interface
On 5/21/15 9:43 AM, Andy Lutomirski wrote:
On Thu, May 21, 2015 at 9:40 AM, Alexei Starovoitov a...@plumgrid.com wrote:
On 5/21/15 9:20 AM, Andy Lutomirski wrote:
What I mean is: why do we need the interface to be look up this index
in an array and jump to what it references as a single
On 5/21/15 9:20 AM, Andy Lutomirski wrote:
What I mean is: why do we need the interface to be look up this index
in an array and jump to what it references as a single atomic
instruction? Can't we break it down into first look up this index in
an array and then do this tail call?
I've
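The semantics under discussion, look up an index in a program array and jump to the referenced program as one unit, can be modeled with a toy dispatch table. This is an illustration of the interface shape only, not the kernel's bpf_tail_call() implementation; all names are invented:

```c
#include <stddef.h>

typedef int (*prog_fn)(int ctx);

struct prog_array {
	size_t cnt;
	prog_fn progs[8];
};

/* hypothetical example program: just doubles its context value */
static int double_ctx(int ctx)
{
	return ctx * 2;
}

/* Bounds-check + lookup + call as one operation; an out-of-range or
 * empty slot "falls through" (here: returns -1) instead of crashing. */
static int tail_call(const struct prog_array *arr, size_t idx, int ctx)
{
	if (idx >= arr->cnt || !arr->progs[idx])
		return -1;
	return arr->progs[idx](ctx);
}
```

Keeping lookup and jump as a single operation means a program can never act on a stale lookup result while the array is being updated, which is part of why the helper is not split into two steps.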
On 5/21/15 9:57 AM, Andy Lutomirski wrote:
On Thu, May 21, 2015 at 9:53 AM, Alexei Starovoitov a...@plumgrid.com wrote:
On 5/21/15 9:43 AM, Andy Lutomirski wrote:
On Thu, May 21, 2015 at 9:40 AM, Alexei Starovoitov a...@plumgrid.com
wrote:
On 5/21/15 9:20 AM, Andy Lutomirski wrote:
What