the more
generic function dma_request_slave_channel().
Signed-off-by: Robert Jarzmik <robert.jarz...@free.fr>
Reviewed-by: Daniel Mack <dan...@zonque.org>
---
sound/arm/pxa2xx-ac97.c | 14 ++
sound/arm/pxa2xx-pcm-lib.c | 6 +++---
sound/soc/pxa/pxa2xx
. is a 1-1 match to
ssp., and the channels are either "rx" or "tx".
- for device tree platforms, the dma node should be hooked into the
pxa2xx-ac97 or pxa-ssp-dai node.
Signed-off-by: Robert Jarzmik <robert.jarz...@free.fr>
Acked-by: Daniel Mack <dan...@zonque.org>
Hi Robert,
Please refer to the attached patch instead of the one I sent earlier; I
also forgot to remove the platform_get_resource(IORESOURCE_DMA) call.
Thanks,
Daniel
On Friday, May 18, 2018 11:31 PM, Daniel Mack wrote:
Hi Robert,
Thanks for this series.
On Monday, April 02, 2018 04:26
mtd/nand/raw/marvell_nand.c
recently, so this patch can be dropped. I attached a version for the new
driver which you can pick instead.
Thanks,
Daniel
From c63bc40bdfe2d596e42919235840109a2f1b2776 Mon Sep 17 00:00:00 2001
From: Daniel Mack <dan...@zonque.org>
Date: Sat, 12 May 2018 2
On 09/20/2017 08:51 PM, Craig Gallek wrote:
> On Wed, Sep 20, 2017 at 12:51 PM, Daniel Mack <dan...@zonque.org> wrote:
>> Hi Craig,
>>
>> Thanks, this looks much cleaner already :)
>>
>> On 09/20/2017 06:22 PM, Craig Gallek wrote:
>>> diff --git
Hi Craig,
Thanks, this looks much cleaner already :)
On 09/20/2017 06:22 PM, Craig Gallek wrote:
> diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
> index 9d58a576b2ae..b5a7d70ec8b5 100644
> --- a/kernel/bpf/lpm_trie.c
> +++ b/kernel/bpf/lpm_trie.c
> @@ -397,7 +397,7 @@ static int
On 09/19/2017 11:29 PM, David Miller wrote:
> From: Craig Gallek <kraigatg...@gmail.com>
> Date: Tue, 19 Sep 2017 17:16:13 -0400
>
>> On Tue, Sep 19, 2017 at 5:13 PM, Daniel Mack <dan...@zonque.org> wrote:
>>> On 09/19/2017 10:55 PM, David Miller wrot
On 09/19/2017 10:55 PM, David Miller wrote:
> From: Craig Gallek
> Date: Mon, 18 Sep 2017 15:30:54 -0400
>
>> This was previously left as a TODO. Add the implementation and
>> extend the test to cover it.
>
> Series applied, thanks.
>
Hmm, I think these patches need
Hi,
Thanks for working on this, Craig!
On 09/19/2017 06:12 PM, Daniel Borkmann wrote:
> On 09/19/2017 05:08 PM, Craig Gallek wrote:
>> On Mon, Sep 18, 2017 at 6:53 PM, Alexei Starovoitov wrote:
>>> On 9/18/17 12:30 PM, Craig Gallek wrote:
> [...]
+
+
add support for eBPF programs")
> Signed-off-by: Alexei Starovoitov <a...@kernel.org>
Looks good to me.
Acked-by: Daniel Mack <dan...@zonque.org>
Let's get this into 4.10!
Thanks,
Daniel
> ---
> v1->v2: disallowed overridable->non_override transition as sugges
On 01/23/2017 05:39 PM, Daniel Borkmann wrote:
> On 01/21/2017 05:26 PM, Daniel Mack wrote:
> [...]
>> +/* Called from syscall or from eBPF program */
>> +static int trie_update_elem(struct bpf_map *map,
>> +void *_key, void *value, u64 flags)
>&
lengths that are multiples of 8, in
the range from 8 to 2048. The key used for lookup and update operations
is a struct bpf_lpm_trie_key, and the value is a uint64_t.
The code carries more information about the internal implementation.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Re
.
Based on tlpm, this inserts randomized data into bpf-lpm-maps and
verifies the trie-based bpf-map implementation behaves the same way
as tlpm.
The second part uses 'real world' IPv4 and IPv6 addresses and tests
the trie with those.
Signed-off-by: David Herrmann <dh.herrm...@gmail.com>
Signed-of
* Removed node->flags and denote intermediate nodes through
node->value == NULL instead
rfc -> v1:
* Add __rcu pointer annotations to make sparse happy
* Fold _lpm_trie_find_target_node() into its only caller
* Fix some minor documentation issues
yscall with an empty
bpf program takes roughly 6.5us on my system. Lookups in empty tries
take ~1.8us on first try, ~0.9us on retries. Lookups in tries with 8192
entries take ~7.1us (on the first _and_ any subsequent try).
Signed-off-by: David Herrmann <dh.herrm...@gmail.com>
Reviewed-by: Dani
rse happy
* Fold _lpm_trie_find_target_node() into its only caller
* Fix some minor documentation issues
Daniel Mack (1):
bpf: add a longest prefix match trie map implementation
David Herrmann (1):
bpf: Add tests for the lpm trie map
include/uapi/linux/bpf.h |
On 01/13/2017 07:01 PM, Alexei Starovoitov wrote:
> On Thu, Jan 12, 2017 at 06:29:21PM +0100, Daniel Mack wrote:
>> This trie implements a longest prefix match algorithm that can be used
>> to match IP addresses to a stored set of ranges.
>>
>> Internally, data is s
LL instead
rfc -> v1:
* Add __rcu pointer annotations to make sparse happy
* Fold _lpm_trie_find_target_node() into its only caller
* Fix some minor documentation issues
Daniel Mack (1):
bpf: add a longest prefix match trie map implementation
David Herrmann (1)
Hi,
On 01/05/2017 09:01 PM, Daniel Borkmann wrote:
> On 01/05/2017 05:25 PM, Daniel Borkmann wrote:
>> On 12/29/2016 06:28 PM, Daniel Mack wrote:
> [...]
>>> +static struct bpf_map *trie_alloc(union bpf_attr *attr)
>>> +{
>>> +struct lpm_tri
Hi Daniel,
Thanks for your feedback! I agree on all points. Two questions below.
On 01/05/2017 05:25 PM, Daniel Borkmann wrote:
> On 12/29/2016 06:28 PM, Daniel Mack wrote:
>> diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
>> new file mode 100644
>> i
appreciated.
Thanks,
Daniel
Changelog:
rfc -> v1:
* Add __rcu pointer annotations to make sparse happy
* Fold _lpm_trie_find_target_node() into its only caller
* Fix some minor documentation issues
Daniel Mack (1):
bpf: add a longest prefix match trie map implementat
Hi,
On 12/20/2016 06:23 PM, Andy Lutomirski wrote:
> On Tue, Dec 20, 2016 at 2:21 AM, Daniel Mack <dan...@zonque.org> wrote:
> To clarify, since this thread has gotten excessively long and twisted,
> I think it's important that, for hooks attached to a cgroup, you be
> able to
The member 'effective' in 'struct cgroup_bpf' is protected by RCU.
Annotate it accordingly to squelch a sparse warning.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
include/linux/bpf-cgroup.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/bpf-cgro
.
Thanks,
Daniel
Daniel Mack (1):
bpf: add a longest prefix match trie map implementation
David Herrmann (1):
bpf: Add tests for the lpm trie map
include/uapi/linux/bpf.h | 7 +
kernel/bpf/Makefile| 2 +-
kernel/bpf/lpm_trie.c
pushed out to net-next yet, so:
>
> Acked-by: Daniel Borkmann <dan...@iogearbox.net>
>
FWIW:
Acked-by: Daniel Mack <dan...@zonque.org>
On 11/28/2016 02:03 PM, Daniel Borkmann wrote:
> On 11/28/2016 12:04 PM, Daniel Mack wrote:
>> There's a 'not' missing in one paragraph. Add it.
>>
>> Signed-off-by: Daniel Mack <dan...@zonque.org>
>> Reported-by: Rami Rosen <roszenr...@gmail.com>
>&
There's a 'not' missing in one paragraph. Add it.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Reported-by: Rami Rosen <roszenr...@gmail.com>
Fixes: 3007098494be ("cgroup: add support for eBPF programs")
---
kernel/bpf/cgroup.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
There's a 'not' missing in one paragraph. Add it.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Reported-by: Rami Rosen <roszenr...@gmail.com>
Fixes: 3007098494be ("cgroup: add support for eBPF programs")
---
kernel/bpf/cgroup.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
Hi Rami,
On 11/23/2016 11:46 PM, Rami Rosen wrote:
> A minor comment:
>
>> +/**
>> + * __cgroup_bpf_update() - Update the pinned program of a cgroup, and
>> + * propagate the change to descendants
>> + * @cgrp: The cgroup which descendants to traverse
>> + * @parent: The
the bpf(2)
syscall. For now, ingress and egress inet socket filtering are the
only supported use-cases.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Acked-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/bpf-cgroup.h | 79 +
include/linux/cgroup-defs.h |
(), and the payload starts at the
network headers (L3).
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Acked-by: Alexei Staro
This program type is similar to BPF_PROG_TYPE_SOCKET_FILTER, except that
it does not allow BPF_LD_[ABS|IND] instructions and hooks up the
bpf_skb_load_bytes() helper.
Programs of this type will be attached to cgroups for network filtering
and accounting.
Signed-off-by: Daniel Mack <
to the bpf cgroup controller implementation.
The API is guarded by CAP_NET_ADMIN.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Acked-by: Alexei Starovoitov <a...@kernel.org>
---
include/uapi/linux/bpf.h | 8 +
kernel/bpf/syscall.c | 81 +++
programs have access to
the skb through bpf_skb_load_bytes(), and the payload starts at the
network headers (L3).
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Dan
supported, this can be extended in
the future.
* The sample program learned to support both ingress and egress, and
can now optionally make the eBPF program drop packets by making it
return 0.
Daniel Mack (6):
bpf: add new prog type for cgroup socket filtering
cgroup: add support for eBPF
passed as 3rd argument,
which will make the generated eBPF program return 0 instead of 1, so
the kernel will drop the packet.
libbpf gained two new wrappers for the new syscall commands.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Acked-by: Alexei Starovoitov <a...@kernel.org>
---
sample
families should be supported, this can be extended in
the future.
* The sample program learned to support both ingress and egress, and
can now optionally make the eBPF program drop packets by making it
return 0.
Daniel Mack (6):
bpf: add new prog type for cgroup socket filtering
cg
Hi Pablo,
On 11/14/2016 10:12 AM, Pablo Neira Ayuso wrote:
> Add cgroup version 2 support to nf_tables.
>
> This extension allows us to fetch the cgroup i-node number from the
> cgroup socket data, place it in a register, then match it against any
> value specified by user. This approach scales
On 10/31/2016 06:05 PM, David Ahern wrote:
> On 10/31/16 11:00 AM, Daniel Mack wrote:
>> On 10/31/2016 05:58 PM, David Miller wrote:
>>> From: David Ahern <d...@cumulusnetworks.com> Date: Wed, 26 Oct
>>> 2016 17:58:38 -0700
>>>
>>>> diff --gi
On 10/31/2016 05:58 PM, David Miller wrote:
> From: David Ahern
> Date: Wed, 26 Oct 2016 17:58:38 -0700
>
>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>> index 6b62ee9a2f78..73da296c2125 100644
>> --- a/include/uapi/linux/bpf.h
>> +++
On 10/28/2016 01:53 PM, Pablo Neira Ayuso wrote:
> On Thu, Oct 27, 2016 at 10:40:14AM +0200, Daniel Mack wrote:
>> It's not anything new. These hooks live on the very same level as
>> SO_ATTACH_FILTER. The only differences are that the BPF programs are
>>
On 10/26/2016 09:59 PM, Pablo Neira Ayuso wrote:
> On Tue, Oct 25, 2016 at 12:14:08PM +0200, Daniel Mack wrote:
> [...]
>> Dumping programs once they are installed is problematic because of
>> the internal optimizations done to the eBPF program during its
>> lifeti
the bpf(2)
syscall. For now, ingress and egress inet socket filtering are the
only supported use-cases.
Signed-off-by: Daniel Mack <dan...@zonque.org>
Acked-by: Alexei Starovoitov <a...@kernel.org>
---
include/linux/bpf-cgroup.h | 71 +++
include/linux/cgroup-defs.h |
turn 0.
Daniel Mack (6):
bpf: add new prog type for cgroup socket filtering
cgroup: add support for eBPF programs
bpf: add BPF_PROG_ATTACH and BPF_PROG_DETACH commands
net: filter: run cgroup eBPF ingress programs
net: ipv4, ipv6: run cgroup eBPF egress programs
samples: bpf: add use
through bpf_skb_load_bytes(), and the payload starts at the
network headers (L3).
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Daniel Mack <dan...@zonque.org>
On 09/22/2016 05:12 PM, Daniel Borkmann wrote:
> On 09/22/2016 02:05 PM, Pablo Neira Ayuso wrote:
>> Benefits are, rewording previous email:
>>
>> * You get access to all of the existing netfilter hooks in one go
>>to run bpf programs. No need for specific redundant hooks. This
>>provides
Hi Pablo,
On 09/20/2016 04:29 PM, Pablo Neira Ayuso wrote:
> On Mon, Sep 19, 2016 at 10:56:14PM +0200, Daniel Mack wrote:
> [...]
>> Why would we artificially limit the use-cases of this implementation if
>> the way it stands, both filtering and introspection are possible?
On 09/19/2016 11:53 PM, Sargun Dhillon wrote:
> On Mon, Sep 19, 2016 at 06:34:28PM +0200, Daniel Mack wrote:
>> On 09/16/2016 09:57 PM, Sargun Dhillon wrote:
>>> Now, with this patch, we don't have that, but I think we can reasonably add
>>> some
>>> f
On 09/19/2016 10:35 PM, Pablo Neira Ayuso wrote:
> On Mon, Sep 19, 2016 at 09:30:02PM +0200, Daniel Mack wrote:
>> On 09/19/2016 09:19 PM, Pablo Neira Ayuso wrote:
>>> Actually, did you look at Google's approach to this problem? They
>>> want to control this at socket
On 09/19/2016 09:19 PM, Pablo Neira Ayuso wrote:
> On Mon, Sep 19, 2016 at 06:44:00PM +0200, Daniel Mack wrote:
>> diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
>> index 6001e78..5dc90aa 100644
>> --- a/net/ipv6/ip6_output.c
>> +++ b/net/ipv6/ip6
the bpf(2)
syscall. For now, ingress and egress inet socket filtering are the
only supported use-cases.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
include/linux/bpf-cgroup.h | 71 +++
include/linux/cgroup-defs.h | 4 ++
init/Kconfig| 12
kern
ket families should be supported, this can be extended in
the future.
* The sample program learned to support both ingress and egress, and
can now optionally make the eBPF program drop packets by making it
return 0.
As always, feedback is much appreciated.
Thanks,
Daniel
Daniel Mack (6):
to the bpf cgroup controller implementation.
The API is guarded by CAP_NET_ADMIN.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
include/uapi/linux/bpf.h | 8 +
kernel/bpf/syscall.c | 81
2 files changed, 89 insertions(+)
diff
(), and the payload starts at the
network headers (L3).
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
net/core/filter.c | 4 +
passed as 3rd argument,
which will make the generated eBPF program return 0 instead of 1, so
the kernel will drop the packet.
libbpf gained two new wrappers for the new syscall commands.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
samples/bpf/Makefile| 2 +
samples/bpf/libbp
through bpf_skb_load_bytes(), and the payload starts at the
network headers (L3).
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
ne
Hi,
On 09/16/2016 09:57 PM, Sargun Dhillon wrote:
> On Wed, Sep 14, 2016 at 01:13:16PM +0200, Daniel Mack wrote:
>> I have no idea what makes you think this is limited to systemd. As I
>> said, I provided an example for userspace that works from the command
>> line. The
On 09/15/2016 08:36 AM, Vincent Bernat wrote:
> ❦ 12 septembre 2016 18:12 CEST, Daniel Mack <dan...@zonque.org> :
>
>> * The sample program learned to support both ingress and egress, and
>> can now optionally make the eBPF program drop packets by making it
>>
Hi Pablo,
On 09/13/2016 07:24 PM, Pablo Neira Ayuso wrote:
> On Tue, Sep 13, 2016 at 03:31:20PM +0200, Daniel Mack wrote:
>> On 09/13/2016 01:56 PM, Pablo Neira Ayuso wrote:
>>> On Mon, Sep 12, 2016 at 06:12:09PM +0200, Daniel Mack wrote:
>>>> This is v5 of the pa
Hi,
On 09/13/2016 01:56 PM, Pablo Neira Ayuso wrote:
> On Mon, Sep 12, 2016 at 06:12:09PM +0200, Daniel Mack wrote:
>> This is v5 of the patch set to allow eBPF programs for network
>> filtering and accounting to be attached to cgroups, so that they apply
>> to all socket
rop packets by making it
return 0.
As always, feedback is much appreciated.
Thanks,
Daniel
Daniel Mack (6):
bpf: add new prog type for cgroup socket filtering
cgroup: add support for eBPF programs
bpf: add BPF_PROG_ATTACH and BPF_PROG_DETACH commands
net: filter: run cgroup eBPF ingr
For now, this program type is equivalent to BPF_PROG_TYPE_SOCKET_FILTER in
terms of checks during the verification process. It may access the skb as
well.
Programs of this type will be attached to cgroups for network filtering
and accounting.
Signed-off-by: Daniel Mack <dan...@zonque.
.
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
net/core/dev.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/ne
headers.
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
net/core/filter.c | 4
1 file changed, 4 insertions(+)
diff --git
On 09/06/2016 07:18 PM, Daniel Borkmann wrote:
> On 09/06/2016 03:46 PM, Daniel Mack wrote:
>> This patch adds two sets of eBPF program pointers to struct cgroup.
>> One for such that are directly pinned to a cgroup, and one for such
>> that are effective for it.
>>
.
Note that cgroup_bpf_run_filter() is stubbed out as static inline nop
for !CONFIG_CGROUP_BPF, and is otherwise guarded by a static key if
the feature is unused.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
net/core/dev.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
the bpf(2)
syscall. For now, ingress and egress inet socket filtering are the
only supported use-cases.
Signed-off-by: Daniel Mack <dan...@zonque.org>
---
include/linux/bpf-cgroup.h | 70 +++
include/linux/cgroup-defs.h | 4 ++
init/Kconfig| 12
kern
Thanks,
Daniel
Daniel Mack (6):
bpf: add new prog type for cgroup socket filtering
cgroup: add support for eBPF programs
bpf: add BPF_PROG_ATTACH and BPF_PROG_DETACH commands
net: filter: run cgroup eBPF ingress programs
net: core: run cgroup eBPF egress programs
samples: bpf: add userspace e
On 09/05/2016 08:32 PM, Alexei Starovoitov wrote:
> On 9/5/16 10:09 AM, Daniel Borkmann wrote:
>> On 09/05/2016 04:09 PM, Daniel Mack wrote:
>>> I really don't think it's worth sparing 8 bytes here and then do the
>>> binary compat dance after flags are added, f
On 09/05/2016 05:30 PM, David Laight wrote:
> From: Daniel Mack
>>>> +
>>>> + struct { /* anonymous struct used by BPF_PROG_ATTACH/DETACH commands */
>>>> + __u32 target_fd; /* container object to attach
>>>> to */
>
Hi,
On 08/30/2016 01:04 AM, Sargun Dhillon wrote:
> On Fri, Aug 26, 2016 at 09:58:48PM +0200, Daniel Mack wrote:
>> This patch adds two sets of eBPF program pointers to struct cgroup.
>> One for such that are directly pinned to a cgroup, and one for such
>&g
On 08/30/2016 12:03 AM, Daniel Borkmann wrote:
> On 08/26/2016 09:58 PM, Daniel Mack wrote:
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index a75df86..17484e6 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>> @@ -141,6 +141,7 @@
>>
On 09/05/2016 03:56 PM, Daniel Borkmann wrote:
> On 09/05/2016 02:54 PM, Daniel Mack wrote:
>> On 08/30/2016 01:00 AM, Daniel Borkmann wrote:
>>> On 08/26/2016 09:58 PM, Daniel Mack wrote:
>>
>>>>enum bpf_map_type {
>
On 08/27/2016 02:08 AM, Alexei Starovoitov wrote:
> On Fri, Aug 26, 2016 at 09:58:49PM +0200, Daniel Mack wrote:
>> +
>> +struct { /* anonymous struct used by BPF_PROG_ATTACH/DETACH commands */
>> +__u32 target_fd; /* conta