Dear Filip!
Of all the possibilities you have described using iptables, none will work
with the policy routing code, for a simple reason:
If I did it the way you have described, with SNAT rules for the return
path, there would still be the same problem:
SNAT is only allowed in POSTROUTING -
On Sun, 23 Jun 2002, Jean-Michel Hemstedt wrote:
So I'm guessing that a large number of entries in the conntrack table is
evidence that packets are being lost.
Not only that: a crashed endpoint breaking the TCP sequence also causes
garbage entries in conntrack (a known issue).
Did I miss something?
On Sun, 23 Jun 2002, Jean-Michel Hemstedt wrote:
I'm doing some TCP benchmarks on a netfilter-enabled box and noticed a
huge and surprising performance decrease when loading the iptable_nat module.
Sounds as expected.
Loading a module doesn't mean using it (lsmod reports it as 'unused' in
my tests).
On Tue, 25 Jun 2002, Jean-Michel Hemstedt wrote:
Not only that: a crashed endpoint breaking the TCP sequence also causes
garbage entries in conntrack (a known issue).
Did I miss something? What do you mean by the known issue above?
I don't understand what you are referring to.
I am referring to:
I'm doing some TCP benchmarks on a netfilter-enabled box and noticed a
huge and surprising performance decrease when loading the iptable_nat module.
- ip_conntrack of course also loads the system, but with plenty of memory
and a large bucket size that problem can be solved. The big issue with ...
Loading a module doesn't mean using it (lsmod reports it as 'unused'
in my tests). So, does it really 'sound as expected' when you see ...
Where did you get the idea that the module usage counter reports how many
packets/connections are handled (currently? in total?) by the module?
There is no connection whatsoever!
In my opinion, a first step should be to reconsider the timeout values,
but also the timer mechanisms.
No. A first step MUST be pointing out that the current timeouts become
a problem in REAL LIFE. Right now you are speculating. On all the setups
I personally know of, the timeouts are NOT a problem.
Jean-Michel,
PS: could anybody redo similar tests so that we can compare the results
and stop killing the messenger, please? ;o)
Just so you don't get the wrong impression: I am not trying to shoot
the messenger, I'm trying to shoot incomplete messages. Please don't
become discouraged ...
On Tue, Jun 25, 2002 at 01:33:13PM +0200, Jozsef Kadlecsik wrote:
Where did you get the idea that the module usage counter reports how many
packets/connections are handled (currently? in total?) by the module?
There is no connection whatsoever!
One should also consider the performance impact this ...
Jean-Michel Hemstedt said:
In my opinion, a first step should be to reconsider the timeout values,
but also the timer mechanisms.
I've been following this thread with interest, as I recently also had
conntrack-related problems (failing to establish new connections due to
the table being full). My ...
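As a rough illustration of that failure mode, here is a minimal user-space model of the admission check conntrack performs; ip_conntrack_max and ip_conntrack_count are genuine 2.4 symbol names, but the code itself is a sketch, not kernel source:

#include <stdio.h>

/* Toy model: once the table is full, every new connection attempt is
 * refused (the real kernel logs "ip_conntrack: table full, dropping
 * packet" at this point). */
static unsigned int ip_conntrack_max = 16384 * 8; /* default: 8 * hash size */
static unsigned int ip_conntrack_count;

static int init_conntrack(void)
{
    if (ip_conntrack_count >= ip_conntrack_max)
        return -1;              /* new connection dropped */
    ip_conntrack_count++;       /* new entry admitted */
    return 0;
}

int main(void)
{
    unsigned int dropped = 0, i;

    /* Simulate more connection attempts than the table can hold. */
    for (i = 0; i < ip_conntrack_max + 100; i++)
        if (init_conntrack() != 0)
            dropped++;

    printf("tracked %u, dropped %u\n", ip_conntrack_count, dropped);
    return 0;
}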
On Tue, 25 Jun 2002, Harald Welte wrote:
According to your first mail, the machine has 256MB RAM and you issued:
insmod ip_conntrack 16384
That requires 16384 * 8 * ~600 bytes ~= 75MB of non-swappable RAM.
When you issued 'iptables -t nat -L', the system tried to reserve an
additional 2x75MB. That's ...
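As a back-of-the-envelope check of that arithmetic, here is a tiny program; the constants are Harald's estimates from this thread (hash size from the insmod line, a maximum of 8 entries per unit of hash size, ~600 bytes per tracked connection), not exact kernel struct sizes:

#include <stdio.h>

int main(void)
{
    const unsigned long htable_size = 16384; /* insmod ip_conntrack 16384 */
    const unsigned long max_factor  = 8;     /* max entries = 8 * hash size */
    const unsigned long entry_bytes = 600;   /* ~bytes per tracked connection */

    unsigned long bytes = htable_size * max_factor * entry_bytes;
    printf("worst case: %lu bytes (~%lu MB) of non-swappable RAM\n",
           bytes, bytes >> 20);
    return 0;
}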
On Tue, 25 Jun 2002, Jean-Michel Hemstedt wrote:
Where did you get the idea that the module usage counter reports how many
packets/connections are handled (currently? in total?) by the module?
There is no connection whatsoever!
The module usage counter increases when a TARGET needs it (i.e. ...
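To make that point concrete, here is a toy model of how a 2.4 module use count behaves: it moves when a rule takes or releases a reference on a target owned by the module (MOD_INC_USE_COUNT / MOD_DEC_USE_COUNT in the real kernel), never per packet. All names below are illustrative:

#include <stdio.h>

static int use_count; /* stand-in for the module's use count */

static void rule_add(void)    { use_count++; } /* a rule references a target */
static void rule_delete(void) { use_count--; } /* the rule is removed */
static void packet_seen(void) { /* traffic does not touch the use count */ }

int main(void)
{
    packet_seen();
    packet_seen();
    printf("after traffic only: %d (lsmod would say 'unused')\n", use_count);

    rule_add(); /* e.g. a rule with a NAT target is inserted */
    printf("after adding a rule: %d\n", use_count);

    rule_delete();
    return 0;
}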
On Tue, Jun 25, 2002 at 03:59:01PM +0200, Jean-Michel Hemstedt wrote:
??? Why should listing an IP table try to reserve twice the size of the
conntrack table?
This is in nat_init (or so): NAT takes the conntrack hash size to
allocate two additional NAT hashes, 'bysource' and 'byipsproto'.
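The following user-space sketch mirrors that sizing step, assuming only what the mail above states (the NAT code inherits the conntrack hash size and allocates two further tables, 'bysource' and 'byipsproto'); struct list_head is reduced to its two pointers, and the allocation details differ in the kernel:

#include <stdio.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

int main(void)
{
    unsigned int ip_conntrack_htable_size = 16384; /* insmod parameter */
    unsigned int ip_nat_htable_size = ip_conntrack_htable_size; /* inherited */

    /* Two extra hash tables on top of the conntrack one. */
    struct list_head *bysource =
        calloc(ip_nat_htable_size, sizeof(struct list_head));
    struct list_head *byipsproto =
        calloc(ip_nat_htable_size, sizeof(struct list_head));
    if (!bysource || !byipsproto)
        return 1;

    printf("bucket heads per NAT hash: %zu bytes\n",
           (size_t)ip_nat_htable_size * sizeof(struct list_head));

    free(bysource);
    free(byipsproto);
    return 0;
}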
On Tue, Jun 25, 2002 at 04:51:37PM +0200, Jean-Michel Hemstedt wrote:
Both of these are already true. Look at the module load-time parameters of
ip_conntrack.o and iptable_nat.o.
Right for conntrack, but I can't find anything similar for NAT:
Strange. I thought we already had that.
On Tue, Jun 25, 2002 at 05:13:02PM +0200, Balazs Scheidler wrote:
On Tue, Jun 25, 2002 at 04:17:54PM +0200, Jozsef Kadlecsik wrote:
On Tue, 25 Jun 2002, Jean-Michel Hemstedt wrote:
The book-keeping overhead is at least doubled compared to the
conntrack-only case - this explains pretty ...
From: Jean-Michel Hemstedt [EMAIL PROTECTED]
static inline u_int32_t
hash_conntrack(const struct ip_conntrack_tuple *tuple)
{
#if 0
	dump_tuple(tuple);
#endif
	/* ntohl because more differences in low bits. */
	/* To ensure that halves of the same connection don't hash
	   clash, we add the source per-proto again. */
	return (ntohl(tuple->src.ip + tuple->dst.ip
		      + tuple->src.u.all + tuple->dst.u.all
		      + tuple->dst.protonum)
		+ ntohs(tuple->src.u.all))
	       % ip_conntrack_htable_size;
}
Hi,
just FYI, the hash was already discussed in a previous thread:
'connection tracking scaling' [19 March 2002]
Sorry if you were already aware of it.
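For anyone who wants to experiment with the distribution, here is a user-space copy of the hash above; the tuple is reduced to the fields the hash actually reads, and the field names are stand-ins for the kernel ones:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

struct tuple {
    uint32_t src_ip, dst_ip;     /* network byte order, as in the kernel */
    uint16_t src_port, dst_port; /* the kernel tuple's src/dst 'u.all' */
    uint8_t  protonum;
};

static uint32_t htable_size = 16384; /* mirrors ip_conntrack_htable_size */

static uint32_t hash_conntrack(const struct tuple *t)
{
    /* Same arithmetic as the kernel function quoted above. */
    return (ntohl(t->src_ip + t->dst_ip
                  + t->src_port + t->dst_port + t->protonum)
            + ntohs(t->src_port)) % htable_size;
}

int main(void)
{
    uint16_t p;

    /* Hash a few connections from one client to one web server. */
    for (p = 1024; p < 1028; p++) {
        struct tuple t;
        t.src_ip   = htonl(0xc0a80001); /* 192.168.0.1 */
        t.dst_ip   = htonl(0x0a000001); /* 10.0.0.1 */
        t.src_port = htons(p);
        t.dst_port = htons(80);
        t.protonum = 6;                 /* TCP */
        printf("sport %u -> bucket %u\n", p, hash_conntrack(&t));
    }
    return 0;
}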
On Tue, 25 Jun 2002, Henrik Nordstrom wrote:
If conntrack doesn't let a FIN or RST packet through, it won't be
forwarded by the machine and thus never arrives at the receiver. The
sender will then retransmit and hope the packet makes it the next time.
The FIN will be retransmitted a couple of times, ...