On Thu, Jun 27, 2002 at 12:21:45PM +1000, Andrew Smith wrote:
This gives a good example of when being able to set the timeout dependent
upon specific factors (e.g. port/protocol) would be useful, rather than a
global timeout that suits specific cases and does not match many others
- and causes a [...]
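To make the idea concrete, here is a minimal sketch of a timeout chosen per
protocol/port with a global fallback. Purely hypothetical names and values,
not netfilter code:

    #include <netinet/in.h>   /* IPPROTO_TCP, IPPROTO_UDP */

    /* Hypothetical rule table: match on protocol and destination port,
     * fall back to the single global default when nothing matches. */
    struct timeout_rule {
        unsigned char  proto;      /* IPPROTO_TCP, IPPROTO_UDP, ... */
        unsigned short dport;      /* 0 = any port */
        unsigned long  timeout_s;  /* conntrack entry lifetime, seconds */
    };

    static const struct timeout_rule rules[] = {
        { IPPROTO_TCP,   80,   120 },   /* short-lived HTTP connections */
        { IPPROTO_TCP,   22, 86400 },   /* long-lived interactive ssh */
        { IPPROTO_UDP,   53,    30 },   /* DNS request/response pairs */
    };

    static const unsigned long global_default_s = 5 * 24 * 3600;

    unsigned long timeout_for(unsigned char proto, unsigned short dport)
    {
        unsigned int i;

        for (i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
            if (rules[i].proto == proto &&
                (rules[i].dport == 0 || rules[i].dport == dport))
                return rules[i].timeout_s;
        return global_default_s;   /* the one-size-fits-all fallback */
    }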
On Tue, Jun 25, 2002 at 11:47:12PM +0200, Jean-Michel Hemstedt wrote:
agreed.
(strange thing is that ethernet irq's reported by procinfo are
decreasing when the machine is overloaded. I suppose it means
either that irq's are not even caught by the kernel/driver,
which is quite worrying, or that the irq counters refer to
'processed' [...]
On Tue, 25 Jun 2002, Jean-Michel Hemstedt wrote:
connections. As good as possible. If the conntrack table becomes
full, there are two possibilities:
- conntrack table size is underestimated for the real traffic flowing
through. Get more RAM and increase the table size.
- [...]
On Sun, 23 Jun 2002, Jean-Michel Hemstedt wrote:
So I'm guessing that a large number of entries in the conntrack table
is evidence that packets are being lost.
not only: a crashed endpoint breaking the tcp sequence also causes
garbage entries in conntrack (known issue).
Did I miss something?
On Sun, 23 Jun 2002, Jean-Michel Hemstedt wrote:
I'm doing some tcp benches on a netfilter-enabled box and noticed a
huge and surprising perf decrease when loading the iptable_nat module.
Sounds as expected.
loading a module doesn't mean using it (lsmod reports it as 'unused'
in my tests).
On Tue, 25 Jun 2002, Jean-Michel Hemstedt wrote:
not only: a crashed endpoint breaking the tcp sequence also causes
garbage entries in conntrack (known issue).
Did I miss something? What do you mean by this known issue above?
I don't understand what you refer to.
I refer to [...]
I'm doing some tcp benches on a netfilter-enabled box and noticed a
huge and surprising perf decrease when loading the iptable_nat module.
- ip_conntrack is of course also loading the system, but with huge memory
and a large bucket size, the problem can be solved. The big issue with [...]
loading a module doesn't mean using it (lsmod reports it as 'unused'
in my tests). So does it really 'sound as expected', when you see [...]
What makes you think that the module usage counter reports how many
packets/connections are handled (currently? totally?) by the module?
There is no connection whatsoever!
In my opinion, a first step should be to reconsider timeout values but
also timer mechanisms.
No. A first step MUST be pointing out that the current timeouts become
a problem in REAL LIFE. Right now you are speculating. On all setups
I personally know, the timeouts are NOT a problem.
Jean-Michel,
PS: could anybody redo similar tests so that we can compare the results
and stop killing the messenger, please? ;o)
Just so you don't get the wrong impression: I am not trying to shoot
the messenger, I'm trying to shoot incomplete messages. Please don't
become discouraged in [...]
On Tue, Jun 25, 2002 at 01:33:13PM +0200, Jozsef Kadlecsik wrote:
What makes you think that the module usage counter reports how many
packets/connections are handled (currently? totally?) by the module?
There is no connection whatsoever!
one should also consider the performance impact this [...]
On Tue, Jun 25, 2002 at 03:21:56PM +0200, Jean-Michel Hemstedt wrote:
loading a module doesn't mean using it (lsmod reports it as 'unused'
in my tests). So does it really 'sound as expected', when you see [...]
What makes you think that the module usage counter reports how many [...]
Jean-Michel Hemstedt said:
In my opinion, a first step should be to reconsider timeout values but
also timer mechanisms.
I've been following this thread with interest as I recently also had
conntrack-related problems (failing to establish new connections due to
the table being full).
My [...]
On Tue, 25 Jun 2002, Harald Welte wrote:
According to your first mail, the machine has 256M RAM and you issued
insmod ip_conntrack 16384
That requires 16384 * 8 * ~600 bytes ~= 75MB of non-swappable RAM.
When you issued iptables -t nat -L, the system tried to reserve an
additional 2x75MB. That's [...]
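The arithmetic can be checked in isolation; a tiny sketch using the figures
above (hash size 16384, the default of 8 conntrack entries per bucket, and
the mail's assumed ~600 bytes per entry):

    #include <stdio.h>

    int main(void)
    {
        unsigned long hashsize   = 16384; /* insmod ip_conntrack 16384 */
        unsigned long per_bucket = 8;     /* default: max conntracks = 8 * hashsize */
        unsigned long entry_size = 600;   /* rough per-entry size, bytes */

        unsigned long bytes = hashsize * per_bucket * entry_size;
        printf("%lu bytes ~= %lu MB non-swappable\n", bytes, bytes >> 20);
        /* prints: 78643200 bytes ~= 75 MB non-swappable */
        return 0;
    }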
On Tue, 25 Jun 2002, Jean-Michel Hemstedt wrote:
What makes you think that the module usage counter reports how many
packets/connections are handled (currently? totally?) by the module?
There is no connection whatsoever!
the module usage counter increases when a TARGET needs it (i.e. [...]
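In other words, the 2.4 use count is reference counting, not accounting.
A rough sketch of the semantics (not the actual netfilter source):

    /* Sketch only: how a 2.4-style module use count behaves. The number
     * lsmod shows is how many references are currently held, e.g. by a
     * rule whose TARGET lives in this module - never a packet count. */

    static int use_count;                    /* stand-in for the real counter */

    #define MOD_INC_USE_COUNT (use_count++)  /* provided by 2.4 kernels */
    #define MOD_DEC_USE_COUNT (use_count--)

    /* called when a rule using this target is inserted */
    static int target_checkentry(void)
    {
        MOD_INC_USE_COUNT;   /* lsmod's usage column goes up here */
        return 1;
    }

    /* called when that rule is removed; packets flowing through the
     * target in between never touch the counter at all */
    static void target_destroy(void)
    {
        MOD_DEC_USE_COUNT;
    }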
On Tue, Jun 25, 2002 at 03:59:01PM +0200, Jean-Michel Hemstedt wrote:
??? Why should listing an IP table try to reserve twice the size of the
conntrack table?
this is in nat_init (or so): nat takes the conntrack hash size to
allocate 2 additional nat hashes, 'bysource' and 'byipsproto'.
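Schematically, that initialization looks like this (a simplified sketch;
the real code lives in ip_nat_core.c and differs in detail):

    #include <linux/init.h>
    #include <linux/list.h>
    #include <linux/vmalloc.h>
    #include <linux/errno.h>

    extern unsigned int ip_conntrack_htable_size; /* set when conntrack loads */

    static struct list_head *bysource;     /* NAT lookup by source tuple */
    static struct list_head *byipsproto;   /* NAT lookup by ip/protocol */

    static int __init nat_init_sketch(void)
    {
        unsigned int i;
        size_t sz = sizeof(struct list_head) * ip_conntrack_htable_size;

        /* two extra hash tables, both sized from the conntrack hash -
         * this is the additional reservation triggered at NAT load time */
        bysource   = vmalloc(sz);
        byipsproto = vmalloc(sz);
        if (!bysource || !byipsproto)
            return -ENOMEM;  /* real code must also free the survivor */

        for (i = 0; i < ip_conntrack_htable_size; i++) {
            INIT_LIST_HEAD(&bysource[i]);
            INIT_LIST_HEAD(&byipsproto[i]);
        }
        return 0;
    }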
On Tue, Jun 25, 2002 at 04:51:37PM +0200, Jean-Michel Hemstedt wrote:
both of these are already true. Look at the module load-time parameters
of ip_conntrack.o and iptable_nat.o
right for conntrack, but I can't find something similar for nat:
strange. I thought we already had that.
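For comparison, this is roughly how a 2.4 module exposes such a load-time
knob (hypothetical module name; not a claim about what iptable_nat actually
declares):

    #include <linux/module.h>
    #include <linux/init.h>

    /* 2.4-style load-time parameter: insmod mymod.o hashsize=16384
     * (later kernels use module_param() instead of MODULE_PARM) */
    static int hashsize;
    MODULE_PARM(hashsize, "i");
    MODULE_PARM_DESC(hashsize, "hash buckets (0 = use built-in default)");

    static int __init mymod_init(void)
    {
        if (!hashsize)
            hashsize = 8192;   /* fall back to a compiled-in default */
        /* ... size the tables from 'hashsize' here ... */
        return 0;
    }

    module_init(mymod_init);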
On Tue, Jun 25, 2002 at 05:13:02PM +0200, Balazs Scheidler wrote:
On Tue, Jun 25, 2002 at 04:17:54PM +0200, Jozsef Kadlecsik wrote:
On Tue, 25 Jun 2002, Jean-Michel Hemstedt wrote:
The book-keeping overhead is at least doubled compared to the
conntrack-only case - this explains pretty [...]
On Tue, 25 Jun 2002, Harald Welte wrote:
According to your first mail, the machine has 256M RAM and you issued
insmod ip_conntrack 16384
That requires 16384 * 8 * ~600 bytes ~= 75MB of non-swappable RAM.
When you issued iptables -t nat -L, the system tried to reserve an
additional 2x75MB. That's [...]
Don,
(hope you don't mind me replying on-list)
On Sun, Jun 23, 2002 at 11:30:13PM -0700, Don Cohen wrote:
Patrick Schaaf writes:
Nevertheless, it does point out a valid optimization opportunity. We
discussed that months ago, and it's still there.
What's that?
Looking at the quality of the [...]
Patrick Schaaf writes:
Don,
(hope you don't mind me replying on-list)
No, I just hope the rest of the list doesn't mind.
The relative slowness of conntrack vs. non-conntrack doesn't matter in
the real world. I can reproduce it in artificial tests, but in reality,
the arrival rates [...]
But I don't think that the hash function is the problem in that case.
In fact, there is no hash function that solves that problem, since the
attacker can always feed you data that ruins the hash function, unless
of course you want the function to differ from one machine to the next
and in [...]
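The usual way out, hinted at here, is exactly to make the hash differ per
machine by mixing in a secret per-boot value. A small user-space sketch of
the idea (not conntrack's actual hash function):

    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    static uint32_t hash_seed;   /* secret, chosen once at startup */

    void hash_init(void)
    {
        /* a kernel would use a proper entropy source; srand/rand only
         * keep this sketch self-contained */
        srand((unsigned)time(NULL));
        hash_seed = (uint32_t)rand();
    }

    /* mix the connection tuple with the secret seed so an attacker
     * cannot precompute tuples that all land in one bucket */
    uint32_t tuple_hash(uint32_t saddr, uint32_t daddr,
                        uint16_t sport, uint16_t dport, uint32_t buckets)
    {
        uint32_t h = hash_seed;

        h ^= saddr; h *= 0x9e3779b1u;   /* multiplicative mixing */
        h ^= daddr; h *= 0x9e3779b1u;
        h ^= ((uint32_t)sport << 16) | dport;
        h *= 0x9e3779b1u;
        return h % buckets;             /* bucket index */
    }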
Patrick Schaaf writes:
I suggest instead that the hash lookup be limited to a small number of
probes. If not found in, say, 10 probes, act like it's not there and
the table is full.
For each packet, there is exactly one hash chain to look up. So you end
up limiting the _length_ [...]
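Both halves of this exchange fit in a few lines; a sketch with hypothetical
types, assuming chained hashing as in conntrack - the bound can only ever
limit the length of the one chain the tuple hashes to:

    #include <stddef.h>

    struct entry {
        unsigned long key;    /* stand-in for the full conntrack tuple */
        struct entry *next;   /* chained hashing: one list per bucket */
    };

    #define MAX_PROBES 10     /* "if not found in, say, 10 probes" */

    /* returns the entry, or NULL meaning "act like it's not there and
     * the table is full"; only ONE chain - the one the key hashes to -
     * is ever walked, so MAX_PROBES bounds that chain's search length */
    struct entry *bounded_lookup(struct entry **table, unsigned long buckets,
                                 unsigned long key)
    {
        struct entry *e = table[key % buckets];
        unsigned int probes;

        for (probes = 0; e && probes < MAX_PROBES; probes++, e = e->next)
            if (e->key == key)
                return e;
        return NULL;
    }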
I know this debate is not new... I just didn't expect such a perf drop
(90%, see below) and unavailability risk. That's why I'm only reporting
it, hoping secretly that experienced hackers will consider it seriously.
;o)
Note: I don't want to play with words, but if you prefer, consider [...]
I'm doing some tcp benches on a netfilter-enabled box and noticed a
huge and surprising perf decrease when loading the iptable_nat module.
Rather similar to the results I posted about a week ago.
oops, sorry, it seems we performed our tests at the same time ;o)
- Another (old) question: [...]
On Sun, Jun 23, 2002 at 09:46:29PM -0700, Don Cohen wrote:
From: Jean-Michel Hemstedt [EMAIL PROTECTED]
Since in my test, each connection is ephemeral (10ms) ...
One question here is whether the traffic generator is acting like
a real set of users or like an attacker. A real user [...]
On Thu, Jun 20, 2002 at 09:48:27PM +0200, Jean-Michel Hemstedt wrote:
Dear netdevels,
I'm doing some tcp benches on a netfilter-enabled box and noticed a
huge and surprising perf decrease when loading the iptable_nat module.
Sounds as expected.
- ip_conntrack is of course also loading the [...]