Signed-off-by: Naveen N. Rao <naveen.n@linux.vnet.ibm.com>
Makes sense. Looks like ppc jit will be using quite a bit of
available ppc instructions. Nice.
I'm assuming all these new tests passed with x64 jit?
Acked-by: Alexei Starovoitov <a...@kernel.org>
On Wed, Mar 23, 2016 at 06:08:41PM +0800, Wangnan (F) wrote:
>
>
> On 2016/3/23 17:50, Peter Zijlstra wrote:
> >On Mon, Mar 14, 2016 at 09:59:43AM +, Wang Nan wrote:
> >>Convert perf_output_begin to __perf_output_begin and make the latter
> >>function able to write records from the end of the
On Thu, Mar 24, 2016 at 11:48:54AM +0800, Wangnan (F) wrote:
>
> >>http://lkml.iu.edu/hypermail/linux/kernel/1601.2/03966.html
> >Wang, when you respin, please add all perf analysis that you've
> >done and the reasons to do it this way to commit log
> >to make sure it stays in git history.
> >
>
helpers don't have this problem,
since they don't hold any locks and don't modify global data.
bpf_trace_printk has its own recursive check and ok as well.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
Acked-by: Daniel Borkmann <dan...@iogearbox.net>
---
include/linux/bpf
and kernel side updates
is also present in hashmap, but it's not a new race. bpf programs were
always allowed to modify hash and array map elements while user space
is copying them.
Fixes: d5a3b1f69186 ("bpf: introduce BPF_MAP_TYPE_STACK_TRACE")
Signed-off-by: Alexei Starovoitov <a.
Suggested-by: Daniel Borkmann <dan...@iogearbox.net>
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
kernel/bpf/arraymap.c | 2 +-
kernel/bpf/stackmap.c | 3 +++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index
extend test coverage to include pre-allocated and run-time alloc maps
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/test_maps.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/samples/bpf/test_maps.c b/samples/bpf/test_maps.c
note old loader is compatible with new kernel.
map_flags are optional
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/bpf_helpers.h | 1 +
samples/bpf/bpf_load.c | 3 ++-
samples/bpf/fds_example.c | 2 +-
samples/bpf/libbpf.c | 5 +++--
sampl
sh map infra. It attaches to spin_lock
functions and bpf_map_update/delete are called from different contexts
Patch 11: stress for bpf_get_stackid
Patch 12: map performance test
Reported-by: Daniel Wagner <daniel.wag...@bmw-carit.de>
Reported-by: Tom Zanussi <tom.zanu...@linux.intel.com>
on the same cpu.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
kernel/bpf/Makefile | 2 +-
kernel/bpf/percpu_freelist.c | 100 +++
kernel/bpf/percpu_freelist.h | 31 ++
3 files changed, 132 insertions(+), 1 deletion(-)
creat
move ksym search from offwaketime into library to be reused
in other tests
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/bpf_load.c | 62 ++
samples/bpf/bpf_load.h | 6
samples/bpf/offwaketime_user.
map creation is typically the first one to fail when rlimits are
too low, not enough memory, etc
Make this failure scenario more verbose
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/bpf_load.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/s
cond is low, it may make
sense to use it.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/bpf.h | 2 +
include/uapi/linux/bpf.h | 3 +
kernel/bpf/hashtab.c | 240 +--
kernel/bpf/syscall.c | 15 ++-
4 files chan
On 3/8/16 1:13 AM, Daniel Wagner wrote:
>Some time back Daniel Wagner reported crashes when bpf hash map is
>used to compute time intervals between preempt_disable->preempt_enable
>and recently Tom Zanussi reported a dead lock in iovisor/bcc/funccount
>tool if it's used to count the number of
by walking and deleting map elements.
Note that due to the nature of bpf_load.c, the earlier kprobe+bpf programs are
already active while the loader loads new programs, creates new kprobes and
attaches them.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/Makefile | 4 +++
sampl
on the same cpu.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
kernel/bpf/Makefile | 2 +-
kernel/bpf/percpu_freelist.c | 81
kernel/bpf/percpu_freelist.h | 31 +
3 files changed, 113 insertions(+), 1 deletion(-)
creat
low, it may make
sense to use it.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 3 +
kernel/bpf/hashtab.c | 264 ++-
kernel/bpf/syscall.c | 2 +-
4 files changed, 196
helpers don't have this problem,
since they don't hold any locks and don't modify global data.
bpf_trace_printk has its own recursive check and ok as well.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/bpf.h | 3 +++
kernel/bpf/syscall.c | 13 +
Reported-by: Tom Zanussi <tom.zanu...@linux.intel.com>
Alexei Starovoitov (9):
bpf: prevent kprobe+bpf deadlocks
bpf: introduce percpu_freelist
bpf: pre-allocate hash map elements
samples/bpf: make map creation more verbose
samples/bpf: move ksym_search() into library
samples/bpf
performance tests for hash map and per-cpu hash map
with and without pre-allocation
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/Makefile | 4 +
samples/bpf/map_perf_test_kern.c | 100 +
samples/bpf/map_perf_test_user.c
On 3/7/16 2:33 AM, Daniel Borkmann wrote:
On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
Introduce simple percpu_freelist to keep single list of elements
spread across per-cpu singly linked lists.
/* push element into the list */
void pcpu_freelist_push(struct pcpu_freelist *, struct
On 3/7/16 3:08 AM, Daniel Borkmann wrote:
On 03/07/2016 02:58 AM, Alexei Starovoitov wrote:
[...]
---
include/linux/bpf.h | 1 +
include/uapi/linux/bpf.h | 3 +
kernel/bpf/hashtab.c | 264
++-
kernel/bpf/syscall.c | 2 +-
4
seeing a ton of these errors on net-next with kasan on.
Likely old bug though.
[ 373.705691] BUG: KASAN: slab-out-of-bounds in memcpy+0x28/0x40 at
addr 8811ada62cb0
[ 373.707137] Write of size 28 by task bash/7059
[ 373.708177]
On 4/1/16 7:37 AM, Naveen N. Rao wrote:
On 2016/03/31 08:19PM, Daniel Borkmann wrote:
On 03/31/2016 07:46 PM, Alexei Starovoitov wrote:
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
clang $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
-D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused
On 4/1/16 7:41 AM, Naveen N. Rao wrote:
On 2016/03/31 10:52AM, Alexei Starovoitov wrote:
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
...
+
+#ifdef __powerpc__
+#define BPF_KPROBE_READ_RET_IP(ip, ctx){ (ip) = (ctx)->link; }
+#define BPF_KRETPROBE_READ_RET_IP(ip,
F tail calls and skb loads.
Cc: Matt Evans <m...@ozlabs.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Alexei Starovoitov <a...@fb.com>
Cc: "David S. Miller" <da...@davemloft.net>
Cc: Ananth N Mavinakayanahalli &l
);
}
and on 4 cpus in parallel:
                                       reads per sec
base (no tracepoints, no kprobes)      300k
with kprobe at urandom_read()          279k
with tracepoint at random:urandom_read 290k
bpf progs attached to kprobe and tracepoint are noop.
Signed-off-by: Alexei Starovoitov
and some applications access it directly without consulting tracepoint/format.
Same rule applies here: static tracepoint fields should only be accessed
in a format defined in tracepoint/format. The order of fields and
field sizes are not an ABI.
Signed-off-by: Alexei Starovoitov <a...@kernel.
needs two wrapper functions to fetch 'struct pt_regs *' to convert
tracepoint bpf context into kprobe bpf context to reuse existing
helper functions
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/bpf.h | 1 +
kernel/bpf/stackmap.c | 2 +-
kernel/trace/bpf_t
register tracepoint bpf program type and let it call the same set
of helper functions as BPF_PROG_TYPE_KPROBE
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
kernel/trace/bpf_trace.c | 45 +++--
1 file changed, 43 insertions(+), 2 del
modify offwaketime to work with sched/sched_switch tracepoint
instead of kprobe into finish_task_switch
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/offwaketime_kern.c | 26 ++
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/s
Recognize "tracepoint/" section name prefix and attach the program
to that tracepoint.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
samples/bpf/bpf_load.c | 26 +-
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/samples/bpf/bpf_
call to perf_arch_fetch_caller_regs initializes the same fields on
all archs,
so we can safely drop memset from all of the above cases and move it into
perf_ftrace_function_call that calls it with stack allocated pt_regs.
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/perf_event.
thread.gmane.org/gmane.linux.kernel.api/8127/focus=8165
[2] https://github.com/iovisor/bcc/blob/master/tools/tplist.py
[3] https://github.com/iovisor/bcc/blob/master/tools/argdist.py
Alexei Starovoitov (8):
perf: optimize perf_fetch_caller_regs
perf, bpf: allow bpf programs attach to tracepoints
-off-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/bpf.h | 1 +
include/linux/trace_events.h | 1 +
kernel/bpf/verifier.c | 6 +-
kernel/events/core.c | 8
kernel/trace/trace_events.c | 18 ++
5 files changed, 33 insertions
or
> the reading is unreliable.
>
> Signed-off-by: Wang Nan <wangn...@huawei.com>
> Cc: He Kuang <heku...@huawei.com>
> Cc: Alexei Starovoitov <a...@kernel.org>
> Cc: Arnaldo Carvalho de Melo <a...@redhat.com>
> Cc: Brendan Gregg <brendan.d.gr...@gmail.co
On Tue, Mar 29, 2016 at 10:01:24AM +0800, Wangnan (F) wrote:
>
>
> On 2016/3/28 14:41, Wang Nan wrote:
>
> [SNIP]
>
> >
> >To prevent this problem, we need to find a way to ensure the ring buffer
> >is stable during reading. ioctl(PERF_EVENT_IOC_PAUSE_OUTPUT) is
> >suggested because its
this to work with x86_64 and arm64, but not s390.
Cc: Alexei Starovoitov <a...@fb.com>
Cc: David S. Miller <da...@davemloft.net>
Cc: Ananth N Mavinakayanahalli <ana...@in.ibm.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Naveen N. Rao <nave
};
^
Fix this by including the necessary header file.
Cc: Alexei Starovoitov <a...@fb.com>
Cc: David S. Miller <da...@davemloft.net>
Cc: Ananth N Mavinakayanahalli <ana...@in.ibm.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Signed-off-by: Naveen N. Rao <nave
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
While at it, fix some typos in the comment.
Cc: Alexei Starovoitov <a...@fb.com>
Cc: David S. Miller <da...@davemloft.net>
Cc: Ananth N Mavinakayanahalli <ana...@in.ibm.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Sign
On 3/31/16 11:51 AM, Naveen N. Rao wrote:
On 2016/03/31 10:49AM, Alexei Starovoitov wrote:
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
Make BPF samples build depend on CONFIG_SAMPLE_BPF. We still don't add a
Kconfig option since that will add a dependency on llvm for allyesconfig
builds which may
On 3/31/16 4:25 AM, Naveen N. Rao wrote:
Make BPF samples build depend on CONFIG_SAMPLE_BPF. We still don't add a
Kconfig option since that will add a dependency on llvm for allyesconfig
builds which may not be desirable.
Those who need to build the BPF samples can now just do:
make
On 3/31/16 11:46 AM, Naveen N. Rao wrote:
It's failing this way on powerpc? Odd.
This fails for me on x86_64 too -- RHEL 7.1.
indeed. fails on centos 7.1, whereas centos 6.7 is fine.
On Mon, Apr 04, 2016 at 10:31:33PM +0530, Naveen N. Rao wrote:
> While at it, remove the generation of .s files and fix some typos in the
> related comment.
>
> Cc: Alexei Starovoitov <a...@fb.com>
> Cc: David S. Miller <da...@davemloft.net>
> Cc: Daniel Borkma
EGS_IP() to access the instruction pointer.
>
> Cc: Alexei Starovoitov <a...@fb.com>
> Cc: Daniel Borkmann <dan...@iogearbox.net>
> Cc: David S. Miller <da...@davemloft.net>
> Cc: Ananth N Mavinakayanahalli <ana...@in.ibm.com>
> Cc: Michael Ellerman &l
On Fri, Apr 22, 2016 at 05:52:32PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Wed, Apr 20, 2016 at 04:04:12PM -0700, Alexei Starovoitov escreveu:
> > On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
>
> > Nice. I like it. That's a great approa
On Mon, Apr 25, 2016 at 04:22:29PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Apr 25, 2016 at 01:27:06PM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Mon, Apr 25, 2016 at 01:14:25PM -0300, Arnaldo Carvalho de Melo escreveu:
> > > Em Fri, Apr 22, 2016 at 03:18:
On Mon, Apr 25, 2016 at 05:17:50PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Apr 25, 2016 at 01:06:48PM -0700, Alexei Starovoitov escreveu:
> > On Mon, Apr 25, 2016 at 04:22:29PM -0300, Arnaldo Carvalho de Melo wrote:
> > > Em Mon, Apr 25, 2016 at 01:27:06PM -0300, Arnald
On Mon, Apr 25, 2016 at 08:41:39PM -0300, Arnaldo Carvalho de Melo wrote:
>
> +int sysctl_perf_event_max_stack __read_mostly = PERF_MAX_STACK_DEPTH;
> +
> +static inline size_t perf_callchain_entry__sizeof(void)
> +{
> + return (sizeof(struct perf_callchain_entry) +
> +
On Mon, Apr 25, 2016 at 09:29:28PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Mon, Apr 25, 2016 at 05:07:26PM -0700, Alexei Starovoitov escreveu:
> > > + {
> > > + .procname = "perf_event_max_stack",
> > > + .data
On Wed, Apr 27, 2016 at 11:00:23AM +0800, Wangnan (F) wrote:
>
>
> On 2016/4/27 10:46, Florian Fainelli wrote:
> >Le 24/04/2016 19:34, Florian Fainelli a écrit :
> >>Hi all,
> >>
> >>Two trivial patches that were flagged by Coverity.
> >>
> >>Thanks!
> >Ping! Did I send this to the correct
On Tue, Apr 26, 2016 at 06:38:28PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 25, 2016 at 10:24:31PM -0300, Arnaldo Carvalho de Melo wrote:
> > Em Mon, Apr 25, 2016 at 10:03:58PM -0300, Arnaldo Carvalho de Melo escreveu:
> > > I now need to continue investigation why this doesn't seem to work from
On Fri, Apr 22, 2016 at 04:05:31PM -0600, David Ahern wrote:
> On 4/22/16 2:52 PM, Arnaldo Carvalho de Melo wrote:
> >Em Wed, Apr 20, 2016 at 04:04:12PM -0700, Alexei Starovoitov escreveu:
> >>On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
> >
>
er seen.
Tested-by: Alexei Starovoitov <a...@kernel.org>
>
> Thanks.
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 0c59684..bd2df70 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -162,7 +162,7 @@ static struct pcpu_chunk *pcpu_reserved_chunk;
> sta
nterrupted event. Perf does the same thing, but all the time.
yeah. good point. there is no actual 'order' here.
The whole thing looks good to me.
Acked-by: Alexei Starovoitov <a...@kernel.org>
slower, but
> that may be well within the noise.
>
> The third run shows that discarding all events only took 1.3 seconds. This
> is a speed up of 23%! The discard is much faster than even the commit.
>
> The one downside is shown in the last run. Events that are not discarded by
>
On 4/19/16 3:09 AM, Philip Li wrote:
On Tue, Apr 19, 2016 at 10:33:34AM +0800, Fengguang Wu wrote:
Fengguang, any idea why the build-bot is sometimes silent?
Sorry I went off for some time.. Philip, would you help have a check?
Hi Alexei, i have done some investigation for this. Fengguang, pls
On 4/18/16 3:16 PM, Steven Rostedt wrote:
On Mon, 18 Apr 2016 14:43:07 -0700
Alexei Starovoitov <a...@fb.com> wrote:
I was worried about this too, but single 'if' and two calls
(as in commit 98b5c2c65c295) is a better way, since it's faster, cleaner
and doesn't need to refactor the
ize as a pointer.
>
> Signed-off-by: Arnd Bergmann <a...@arndb.de>
> Fixes: 9940d67c93b5 ("bpf: support bpf_get_stackid() and
> bpf_perf_event_output() in tracepoint programs")
Thanks.
Acked-by: Alexei Starovoitov <a...@kernel.org>
I guess I started to rely on 0-
On Sun, Apr 17, 2016 at 12:58:21PM -0400, Sasha Levin wrote:
> Hi all,
>
> I've hit the following while fuzzing with syzkaller inside a KVM tools guest
> running the latest -next kernel:
thanks for the report. Adding Tejun...
if I read the report correctly it's not about bpf, but rather points
On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
> The default remains 127, which is good for most cases, and not even hit
> most of the time, but then for some cases, as reported by Brendan, 1024+
> deep frames are appearing on the radar for things like groovy, ruby.
>
On 4/18/16 9:13 AM, Steven Rostedt wrote:
On Mon, 4 Apr 2016 21:52:46 -0700
Alexei Starovoitov <a...@fb.com> wrote:
Hi Steven, Peter,
last time we discussed bpf+tracepoints it was a year ago [1] and the reason
we didn't proceed with that approach was that bpf would make arguments
arg1
perf tracepoints.
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/trace_events.h | 5 +
include/trace/perf.h | 13 +++--
kernel/events/core.c | 20 +++-
3 files
On 4/18/16 1:29 PM, Steven Rostedt wrote:
On Mon, 4 Apr 2016 21:52:48 -0700
Alexei Starovoitov <a...@fb.com> wrote:
introduce BPF_PROG_TYPE_TRACEPOINT program type and allow it to be
attached to tracepoints.
The tracepoint will copy the arguments in the per-cpu buffer and pass
it to t
On 4/18/16 1:47 PM, Steven Rostedt wrote:
On Mon, 18 Apr 2016 12:51:43 -0700
Alexei Starovoitov <a...@fb.com> wrote:
yeah, it could be added to ftrace as well, but it won't be as effective
as perf_trace, since the cost of trace_event_buffer_reserve() in
trace_event_raw_event_() h
On Wed, Apr 20, 2016 at 06:01:40PM +, Wang Nan wrote:
> This patch set allows to perf invoke some user space BPF scripts on some
> point. uBPF scripts and kernel BPF scripts reside in one BPF object.
> They communicate with each other with BPF maps. uBPF scripts can invoke
> helper functions
On Tue, May 24, 2016 at 12:04 PM, Tejun Heo wrote:
> Hello,
>
> Alexei, can you please verify this patch? Map extension got rolled
> into balance work so that there's no sync issues between the two async
> operations.
tests look good. No uaf and basic bpf tests exercise per-cpu
On Wed, Jul 13, 2016 at 03:36:11AM -0700, Sargun Dhillon wrote:
> Provides BPF programs, attached to kprobes a safe way to write to
> memory referenced by probes. This is done by making probe_kernel_write
> accessible to bpf functions via the bpf_probe_write helper.
not quite :)
> Signed-off-by:
On Fri, Jul 22, 2016 at 11:53:52AM +0200, Daniel Borkmann wrote:
> On 07/22/2016 04:14 AM, Alexei Starovoitov wrote:
> >On Thu, Jul 21, 2016 at 06:09:17PM -0700, Sargun Dhillon wrote:
> >>This allows user memory to be written to during the course of a kprobe.
> >>It sho
On Sat, Jul 23, 2016 at 09:01:39PM -0700, Sargun Dhillon wrote:
> In kernel/bpf/syscall.c we restrict programs loading bpf kprobe programs so
> attr.kern_version must be exactly equal to what the user is running at the
> moment. This makes a lot of sense because kprobes can touch lots of
>
+ current->comm, task_pid_nr(current));
I think checkpatch should have complained here.
current->comm line should start under "
No other nits for this patch :)
Once fixed, feel free to add my Acked-by: Alexei Starovoitov <a...@kernel.org>
at
> uses it, in one the intended ways to divert execution.
>
> Thanks to Alexei Starovoitov, and Daniel Borkmann for review, I've made
> changes based on their recommendations.
>
> This helper should be considered experimental, so we print a warning
> to dmesg when it is
On Sat, Jul 23, 2016 at 05:44:11PM -0700, Sargun Dhillon wrote:
> This example shows using a kprobe to act as a dnat mechanism to divert
> traffic for arbitrary endpoints. It rewrites the arguments to a syscall
> while they're still in userspace, and before the syscall has a chance
> to copy the
On Sat, Jul 23, 2016 at 05:39:42PM -0700, Sargun Dhillon wrote:
> The example has been modified to act like a test in the follow up set. It
> tests
> for the positive case (Did the helper work or not) as opposed to the negative
> case (is the helper able to violate the safety constraints we set
t; the system, we print a warning on invocation.
>
> It was tested with the tracex7 program on x86-64.
>
> Signed-off-by: Sargun Dhillon <sar...@sargun.me>
> Cc: Alexei Starovoitov <a...@kernel.org>
> Cc: Daniel Borkmann <dan...@iogearbox.net>
> ---
> in
On Sun, Jul 24, 2016 at 06:50:47PM +0100, Colin King wrote:
> From: Colin Ian King <colin.k...@canonical.com>
>
> file f needs to be closed, fixes resource leak.
>
> Signed-off-by: Colin Ian King <colin.k...@canonical.com>
have been travelling. sorry for delay.
A
On Mon, Aug 01, 2016 at 12:33:30AM -0400, Valdis Kletnieks wrote:
> Building with W=1 generates some 350 lines of warnings of the form:
>
> kernel/bpf/core.c: In function '__bpf_prog_run':
> kernel/bpf/core.c:476:33: warning: initialized field overwritten
> [-Woverride-init]
>[BPF_ALU |
On Mon, Aug 01, 2016 at 01:18:43AM -0400, valdis.kletni...@vt.edu wrote:
> On Sun, 31 Jul 2016 21:42:22 -0700, Alexei Starovoitov said:
>
> > and at least 2 other such patches for other files...
> > Is there a single warning where -Woverride-init was useful?
> >
On Tue, Aug 02, 2016 at 04:51:02PM -0300, Arnaldo Carvalho de Melo wrote:
> Hi Wang,
>
> Something changed and a function used in a perf test for BPF is
> not anymore appearing on vmlinux, albeit still available on
> /proc/kallsyms:
>
> # readelf -wi /lib/modules/4.7.0+/build/vmlinux |
On Tue, Aug 02, 2016 at 11:15:34PM -0300, Arnaldo Carvalho de Melo wrote:
> Em Tue, Aug 02, 2016 at 02:03:33PM -0700, Alexei Starovoitov escreveu:
> > On Tue, Aug 02, 2016 at 04:51:02PM -0300, Arnaldo Carvalho de Melo wrote:
> > > Hi Wang,
> > >
> > > S
T_REL) || (ep->e_machine != 0)) {
> + /* Old LLVM set e_machine to EM_NONE */
> + if ((ep->e_type != ET_REL) || (ep->e_machine && (ep->e_machine !=
> EM_BPF))) {
Thanks for the fix. Didn't realize we already check for zero here.
btw EM_BPF will be in llvm 3.9 release.
Acked-by: Alexei Starovoitov <a...@kernel.org>
On Sun, Jul 17, 2016 at 03:19:13AM -0700, Sargun Dhillon wrote:
>
> +static u64 bpf_copy_to_user(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
> +{
> + void *to = (void *) (long) r1;
> + void *from = (void *) (long) r2;
> + int size = (int) r3;
> +
> + /* check if we're in a user
On Mon, Jul 18, 2016 at 03:57:17AM -0700, Sargun Dhillon wrote:
>
>
> On Sun, 17 Jul 2016, Alexei Starovoitov wrote:
>
> >On Sun, Jul 17, 2016 at 03:19:13AM -0700, Sargun Dhillon wrote:
> >>
> >>+static u64 bpf_copy_to_user(u64 r1, u64 r2, u64 r3, u64 r4, u64
On Tue, Jul 19, 2016 at 01:17:53PM +0200, Daniel Borkmann wrote:
> >+return -EINVAL;
> >+
> >+/* Is this a user address, or a kernel address? */
> >+if (!access_ok(VERIFY_WRITE, to, size))
> >+return -EINVAL;
> >+
> >+return probe_kernel_write(to, from, size);
>
On Wed, Jul 20, 2016 at 01:19:51AM +0200, Daniel Borkmann wrote:
> On 07/19/2016 06:34 PM, Alexei Starovoitov wrote:
> >On Tue, Jul 19, 2016 at 01:17:53PM +0200, Daniel Borkmann wrote:
> >>>+ return -EINVAL;
> >>>+
> >>>+ /
On Wed, Jul 13, 2016 at 01:31:57PM -0700, Sargun Dhillon wrote:
>
>
> On Wed, 13 Jul 2016, Alexei Starovoitov wrote:
>
> > On Wed, Jul 13, 2016 at 03:36:11AM -0700, Sargun Dhillon wrote:
> >> Provides BPF programs, attached to kprobes a safe way to write to
>
On Fri, Jul 15, 2016 at 07:16:01PM -0700, Sargun Dhillon wrote:
>
>
> On Thu, 14 Jul 2016, Alexei Starovoitov wrote:
>
> >On Wed, Jul 13, 2016 at 01:31:57PM -0700, Sargun Dhillon wrote:
> >>
> >>
> >>On Wed, 13 Jul 2016, Alexei Starovoitov wrote:
On Tue, Jun 28, 2016 at 07:47:53PM +0800, Hekuang wrote:
>
>
> 在 2016/6/27 4:48, Alexei Starovoitov 写道:
> >On Sun, Jun 26, 2016 at 11:20:52AM +, He Kuang wrote:
> >> bounds check just like ubpf library does.
> >hmm. I don't think I suggested to hack
On Sat, Jul 09, 2016 at 01:31:40AM +0200, Eric Dumazet wrote:
> On Fri, 2016-07-08 at 17:52 +0200, Michal Kubecek wrote:
> > If socket filter truncates an udp packet below the length of UDP header
> > in udpv6_queue_rcv_skb() or udp_queue_rcv_skb(), it will trigger a
> > BUG_ON in
On Thu, Aug 04, 2016 at 04:28:53PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 03, 2016 at 11:57:05AM -0700, Brendan Gregg wrote:
>
> > As for pmu tracepoints: if I were to instrument it (although I wasn't
> > planning to), I'd put a tracepoint in perf_event_overflow() called
> >
On Wed, Dec 31, 2014 at 08:38:49PM -0500, kan.li...@intel.com wrote:
>
> Changes since V1:
> - Using work queue to set Rx network flow classification rules and search
>available NET policy object asynchronously.
> - Using RCU lock to replace read-write lock
> - Redo performance test and
On Thu, Aug 04, 2016 at 09:13:16PM -0700, Brendan Gregg wrote:
> On Thu, Aug 4, 2016 at 6:43 PM, Alexei Starovoitov
> <alexei.starovoi...@gmail.com> wrote:
> > On Thu, Aug 04, 2016 at 04:28:53PM +0200, Peter Zijlstra wrote:
> >> On Wed, Aug 03, 2016 at 11:57:05AM
On Fri, Aug 05, 2016 at 12:52:09PM +0200, Peter Zijlstra wrote:
> > > > Currently overflow_handler is set at event alloc time. If we start
> > > > changing it on the fly with atomic xchg(), afaik things shouldn't
> > > > break, since each overflow_handler is run to completion and doesn't
> > > >
On Fri, Jul 22, 2016 at 05:05:27PM -0700, Sargun Dhillon wrote:
> It was tested with the tracex7 program on x86-64.
it's my fault for starting the tracexN tradition, which turned out to be
cumbersome, so let's not continue it. Instead could you rename it
to something meaningful? Like test_probe_write_user ?
On Wed, Jun 29, 2016 at 06:35:12PM +0800, Wangnan (F) wrote:
>
>
> On 2016/6/29 18:15, Hekuang wrote:
> >hi
> >
> >在 2016/6/28 22:57, Alexei Starovoitov 写道:
> >>
> >> return 0;
> >> }
> >>@@ -465,7 +465,7 @@ EXPORT_SYMBOL_
On Fri, Feb 03, 2017 at 01:07:39PM -0800, Andy Lutomirski wrote:
>
> Is there any plan to address this? If not, I'll try to write that
> patch this weekend.
yes. I'm working on 'disallow program override' flag.
It got stalled, because netns discussion got stalled.
Later today will send a patch