On Mon, 2016-11-21 at 23:07 +0100, Florian Westphal wrote:
> instead of allocating each xt_counter individually, allocate 4k chunks
> and then use these for counter allocation requests.
>
> This should speed up rule evaluation by increasing data locality,
> also speeds up ruleset loading because we reduce calls to the percpu
> allocator.
instead of allocating each xt_counter individually, allocate 4k chunks
and then use these for counter allocation requests.
This should speed up rule evaluation by increasing data locality,
also speeds up ruleset loading because we reduce calls to the percpu
allocator.
As Eric points out we can't [...]
Keeps some noise away from a followup patch.
Signed-off-by: Florian Westphal
Acked-by: Eric Dumazet
---
No changes since v1.
 include/linux/netfilter/x_tables.h | 27 +--
 net/ipv4/netfilter/arp_tables.c    |  5 +
 net/ipv4/netfilter/ip_tables.c     |  5 +
 net/ipv6/netfilter/ip6_tables.c
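The batching idea described above can be sketched in plain C. This is a userspace sketch, not the kernel code: XT_PCPU_BLOCK_SIZE follows the patch, but the struct layout, helper name, and use of calloc() in place of the percpu allocator are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define XT_PCPU_BLOCK_SIZE 4096

struct xt_counters { uint64_t pcnt, bcnt; };

/* Bump-allocator state: the current 4k block and how much is used. */
static char *cur_block;
static unsigned int cur_off = XT_PCPU_BLOCK_SIZE; /* forces first refill */

/* Hand out one counter slot from the current block; only fall back to
 * the underlying allocator when the block is exhausted, instead of one
 * allocation per counter. */
static struct xt_counters *counter_alloc(void)
{
	if (cur_off + sizeof(struct xt_counters) > XT_PCPU_BLOCK_SIZE) {
		cur_block = calloc(1, XT_PCPU_BLOCK_SIZE);
		if (!cur_block)
			return NULL;
		cur_off = 0;
	}
	cur_off += sizeof(struct xt_counters);
	return (struct xt_counters *)(cur_block + cur_off -
				      sizeof(struct xt_counters));
}
```

Consecutive requests land next to each other in the same block, which is where the data-locality win during rule evaluation comes from.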
... to speed up iptables(-restore) calls.
Especially a pattern like
for i in $(seq 1 1000); do iptables -A FORWARD; done
is expensive, because each rule addition doubles the percpu counters
(allocate the 2nd blob, then free the old one, including its percpu
counters).
This causes frequent expansion and [...]
On SMP we overload the packet counter (unsigned long) to contain
percpu offset. Hide this from callers and pass xt_counters address
instead.
Preparation patch to allocate the percpu counters in page-sized batch
chunks.
Signed-off-by: Florian Westphal
Acked-by: Eric Dumazet
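The overloading described above can be modelled in a userspace sketch. The fake per-cpu area, the two-"cpu" layout, and the helper signature are assumptions for illustration; only the trick itself (pcnt holding a percpu offset, hidden behind an accessor that takes the xt_counters address) follows the patch description.

```c
#include <assert.h>
#include <stdint.h>

struct xt_counters { uint64_t pcnt, bcnt; };

/* Fake per-cpu storage for two "cpus"; the kernel uses the percpu
 * allocator instead. */
static char pcpu_area[2][256];

/* On SMP the rule's pcnt field is overloaded: it holds an offset into
 * the per-cpu area, not a packet count.  This accessor hides that
 * detail, so callers only ever see an xt_counters address. */
static struct xt_counters *xt_get_this_cpu_counter(struct xt_counters *cnt,
						   unsigned int cpu)
{
	return (struct xt_counters *)(pcpu_area[cpu] + cnt->pcnt);
}
```

Each cpu then bumps its own copy without touching the others, which is the point of keeping the counters per-cpu in the first place.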
On Mon, 2016-11-21 at 14:57 +0100, Florian Westphal wrote:
...
> #define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1))
> +#define XT_PCPU_BLOCK_SIZE 4096
>
> struct compat_delta {
> unsigned int offset; /* offset in kernel */
> @@ -1618,6 +1619,7 @@
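For reference, the SMP_ALIGN macro quoted in the hunk above rounds its argument up to the next multiple of SMP_CACHE_BYTES, which must be a power of two for the mask trick to work. A minimal standalone demo (64 is an assumed cache-line size; the kernel defines SMP_CACHE_BYTES per architecture):

```c
#include <assert.h>

#define SMP_CACHE_BYTES 64
#define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1))
```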
On Mon, 2016-11-21 at 14:57 +0100, Florian Westphal wrote:
> Keeps some noise away from a followup patch.
>
> Signed-off-by: Florian Westphal
> ---
Acked-by: Eric Dumazet
--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
From: Liping Zhang
Otherwise, a kernel panic will happen if the user does not specify
the related attributes.
Fixes: 0f3cd9b36977 ("netfilter: nf_tables: add range expression")
Signed-off-by: Liping Zhang
---
 net/netfilter/nft_range.c | 6 ++
 1 file changed, 6 insertions(+)
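The class of bug being fixed can be modelled in a few lines of C. Netlink attribute pointers are NULL when userspace omits the attribute, so an init path must check them before dereferencing. The function shape and error handling below are illustrative, not the actual nft_range.c code:

```c
#include <assert.h>
#include <stddef.h>

#define EINVAL 22

struct nlattr { unsigned short len; };

/* Model of a validated init path: reject the request when any required
 * attribute is absent, instead of dereferencing a NULL pointer. */
static int range_init(const struct nlattr *sreg, const struct nlattr *op,
		      const struct nlattr *from, const struct nlattr *to)
{
	if (!sreg || !op || !from || !to)
		return -EINVAL; /* previously: NULL dereference -> panic */
	return 0;
}
```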
Bjørnar Ness wrote:
> After a restart of the machine, this problem disappeared for 10 days,
> but unfortunately showed up again yesterday. Seems like net_dropmonitor
> does not catch this one, the only output I get is:
>
> nf_hook_slow 171
Hello again.
After a restart of the machine, this problem disappeared for 10 days,
but unfortunately showed up again yesterday. Seems like net_dropmonitor
does not catch this one, the only output I get is:
nf_hook_slow 171 1
This is definitely [...]
Hi Anders,
2016-11-21 16:57 GMT+08:00 Anders K. Pedersen | Cohaesio:
[...]
>> nla[NFTA_SET_TIMEOUT] should be kept indented consistently with
>> be64_to_cpu. You can add some spaces after the tabs.
>
> The indentation is deliberate, because I don't want to give the
> impression that [...]
Hi Liping,
On man, 2016-11-21 at 09:48 +0800, Liping Zhang wrote:
> 2016-11-21 0:38 GMT+08:00 Anders K. Pedersen | Cohaesio:
> Acked-by: Liping Zhang
>
> But there's some small indent issues, see below.
> diff --git a/net/netfilter/nf_tables_api.c
>