Sending a high-throughput stream of UDP packets through NFQ causes a few 
packets to be dropped.

Let's say we have 10 packets with the same tuple going in. They all receive 
different conntrack objects (with the confirmed flag unset).
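
To make the setup concrete, a burst like this is easy to produce with a 
minimal sender (a sketch only; the address and port are arbitrary 
placeholders, and the burst has to be fast enough to outrun the first 
packet's round trip through the queue):

#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

/* reproducer sketch: 10 UDP packets with the same 5-tuple
 * (same socket => same implicitly-bound source port) */
int main(void)
{
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port = htons(9999) };

        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
        for (int i = 0; i < 10; i++)
                sendto(s, "x", 1, 0, (struct sockaddr *)&dst, sizeof(dst));
        close(s);
        return 0;
}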

They are then grabbed by user space through NFQ; suppose they all get an 
accept verdict.
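
(For completeness, the user-space side is just the usual libnetfilter_queue 
loop, roughly like the minimal sketch below; queue number 0, the NFQUEUE 
rule, and the lack of error handling are assumptions for illustration.)

#include <stdint.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/netfilter.h>            /* NF_ACCEPT */
#include <libnetfilter_queue/libnetfilter_queue.h>

static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
        struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
        uint32_t id = ph ? ntohl(ph->packet_id) : 0;

        /* accept everything, so all packets are reinjected */
        return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}

int main(void)
{
        char buf[4096];
        struct nfq_handle *h = nfq_open();
        struct nfq_q_handle *qh = nfq_create_queue(h, 0, &cb, NULL);
        int fd, rv;

        nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);
        fd = nfq_fd(h);
        while ((rv = recv(fd, buf, sizeof(buf), 0)) >= 0)
                nfq_handle_packet(h, buf, rv);

        nfq_destroy_queue(qh);
        nfq_close(h);
        return 0;
}

This would be paired with a rule along the lines of
"iptables -A OUTPUT -p udp --dport 9999 -j NFQUEUE --queue-num 0"
(port matching the sender above).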

The first one to be accepted goes through the standard flow: its conntrack 
gets confirmed and entered into the hash table.

The other 9 packets are reinjected, and when each of them reaches 
confirmation (ipv4_confirm) it gets dropped in __nf_conntrack_confirm. The 
condition for being dropped is that a conntrack entry with that tuple already 
exists in the hash table. The reason for dropping is given as a comment in 
the code:

/* See if there's one in the list already, including reverse:
NAT could have grabbed it without realizing, since we're
not in the hash.  If there is, we lost race. */
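
The effect is easy to model in a toy userspace program (emphatically not the 
kernel code, just the shape of the race): the first reinjected packet inserts 
the tuple, and every later packet, carrying its own unconfirmed ct for the 
same tuple, hits the clash check:

#include <stdio.h>
#include <stdbool.h>

static bool tuple_in_hash;              /* one-slot stand-in for the hash */

static const char *confirm(void)
{
        if (tuple_in_hash)              /* "we lost race" */
                return "dropped (insert_failed)";
        tuple_in_hash = true;           /* set confirmed, insert into hash */
        return "confirmed";
}

int main(void)
{
        for (int pkt = 0; pkt < 10; pkt++)
                printf("packet %d: %s\n", pkt, confirm());
        return 0;
}

One packet comes out "confirmed" and the other nine "dropped", which matches 
what we observe.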

Dropping packets in this case seems like a malfunction, IMO. True, UDP 
delivery is not guaranteed, but this may still degrade the functionality of a 
working application. Any opinions on that?

Also, any ideas on how to resolve this? There seems to be a strong assumption 
of serialization in conntrack, namely that a packet enters the IP layer only 
after the previous packet has left (i.e. been confirmed). So any solution 
that merely swaps in the more up-to-date conntrack mid-traversal (a toy 
sketch of the idea follows; not that I know how to do it) will only be 
partial, since the packet has already traversed hooks which may have relied 
on having a 'good' conntrack.
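
To illustrate what I mean, here is a hypothetical toy sketch (not a patch, 
and glossing over locking, refcounting and skb->nfct handling): on a clash, 
discard the loser's unconfirmed ct and attach the entry that won the race, 
instead of dropping the packet:

#include <stdio.h>

struct ct { int id; };

static struct ct *winner;               /* entry already in the hash */

static struct ct *confirm_or_adopt(struct ct *mine)
{
        if (winner) {
                /* lost the race: keep the packet, swap in the winner */
                printf("ct %d lost race, packet adopts ct %d\n",
                       mine->id, winner->id);
                return winner;
        }
        winner = mine;                  /* won: insert and confirm */
        return mine;
}

int main(void)
{
        struct ct a = { 0 }, b = { 1 };

        confirm_or_adopt(&a);           /* first reinjected packet wins */
        confirm_or_adopt(&b);           /* later packet adopts ct 0 */
        return 0;
}

Even then, as said above, the hooks before confirmation have already acted on 
the stale ct, so this would only address part of the problem.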

Any thoughts appreciated.


