On 18/10/2016 at 10:47, Florian Westphal wrote:
> Nicolas Dichtel <nicolas.dich...@6wind.com> wrote:
>> After commit b87a2f9199ea ("netfilter: conntrack: add gc worker to remove
>> timed-out entries"), netlink conntrack deletion events may be sent with a
>> huge delay (5 minutes).
>> There are two ways to evict a conntrack entry:
>> - during a conntrack lookup;
>> - during a conntrack dump.
>> Let's do a full scan of conntrack entries after a period of inactivity
>> (no conntrack lookup).
>> CC: Florian Westphal <f...@strlen.de>
>> Signed-off-by: Nicolas Dichtel <nicolas.dich...@6wind.com>
>> Here is another proposal to try to fix the problem.
>> Comments are welcome,
> Hmm, I don't think it's a good idea in practice.
> If the goal is to avoid starving an arbitrary 'dead' ct for too long,
> then a simple ping will defeat the logic here, because...
>> net/netfilter/nf_conntrack_core.c | 11 +++++++++--
>> 1 file changed, 9 insertions(+), 2 deletions(-)
>> diff --git a/net/netfilter/nf_conntrack_core.c
>> index ba6a1d421222..3dbb27bd9582 100644
>> --- a/net/netfilter/nf_conntrack_core.c
>> +++ b/net/netfilter/nf_conntrack_core.c
>> @@ -87,6 +87,7 @@ static __read_mostly bool nf_conntrack_locks_all;
>> #define GC_MAX_BUCKETS 8192u
>> #define GC_INTERVAL (5 * HZ)
>> #define GC_MAX_EVICTS 256u
>> +static bool gc_full_scan = true;
>> static struct conntrack_gc_work conntrack_gc_work;
>> @@ -511,6 +512,7 @@ ____nf_conntrack_find(struct net *net, const struct
>> nf_conntrack_zone *zone,
>> unsigned int bucket, hsize;
>> + gc_full_scan = false;
> ... we do periodic lookups (but always in the same slot), so no full scan is
> ever done.
Yes, I was wondering about that. My first idea was to have that bool per bucket
and force a scan of only that bucket instead of the whole table.
> If you think it's useful, consider sending a patch that reschedules the worker
> instantly in case the budget expired; otherwise I will do this later this
Ok, I will send it, but it does not address the "inactivity" problem.
> [ I am aware doing instant restart might be too late, but at least we
> would then reap more entries once we stumble upon large number of
> expired ones ].