Don,
My friend the crypto expert told me that crc is easy to attack,
and suggested ABCD(EF) because it has the right properties to
resist attack. It would take more work for me to demonstrate that
crc is bad, and to better understand ABCD and possibly adjust it,
but just on the basis of
big-endian machine:
#uname -a
SunOS licia.dtek.chalmers.se 5.8 Generic_108528-14 sun4u sparc SUNW,Sun-Fire-280
http://www.dtek.chalmers.se/~d97gozem/cttest/0.4/
All graphs here look about equally good. I don't know if there is a bug
somewhere in cttest that might do this.
I don't
Now there is http://bei.bof.de/cttest-0.5.tar.gz and an example
presentation at http://bei.bof.de/ex4/
As nobody requested the tarball yet, I've remade it with the latest
64 bit ABCDEF code from Don. I also remade the ex4/ pictures,
in the abcd series, Don's new hash is named abcd_long. Looks
The rt hash doesn't do well at all on your data. The winner on your data
is crc32 which is very robust in all tests on all data it seems. It's
just a shame that it's so cpu-consuming compared to the other hashes.
The abcd hash also fares very well, if we disregard the results for
the
Now also see:
http://bei.bof.de/ex3
http://bei.bof.de/ex3rt
http://bei.bof.de/ex3abcd
http://bei.bof.de/ex3crc
These are the same calculations as before, but with a different data set.
These are from a web server farm (a web portal), with the server farm
running on
if (hashsize%2 == 0)
hashsize--;
Just so you don't underestimate a requirement ... :-)
Underestimating is desirable here. User gives upper bound - observe it!
(assume we are checking positive numbers ;-)
We are; the variable in question is unsigned.
Actually, I now have this nice
Hi Harald,
On Fri, Jul 05, 2002 at 04:21:27PM +0200, Harald Welte wrote:
You could just add a boolean field 'terminating' to the iptables_target.
Then, make sure iptables aborts and complains if it sees more than one
terminating target being requested in a single rule.
no, it's not
For the record:
match module:
pro: no naming issue, current well known concepts can be kept
pro: couple of modules can be unified
con: ordering issue
I strongly prefer this solution, with the added requirement that order
issues should be defined clearly, and have a clear
Hi all,
I have put a tarball at http://bei.bof.de/cttest-0.1.tar.gz
Unpack, look at README, and reproduce the gnuplot pictures I have
mentioned earlier today (at http://bei.bof.de/ex1/)
I would love to see results from other kinds of workloads.
thanks in advance
Patrick
Hi all,
here is the next version of my cttest.c conntrack test program.
Thanks go to Don Cohen for noticing that my parser was fucked up.
I did not notice that we sometimes have the [XXX] state marker
between the two tuples, and sometimes we have it after both tuples.
How ugly... This is now
Is this normal? Do you have some idea about all these
ports 3128 and 3228 ?
The machine that ip_conntrack came from is running two squid
processes, one on each processor, and clients are distributed evenly
over the two processes. 3128 and 3228 are the two listening ports
of those
Don,
one more thing to observe for your tests: each line from ip_conntrack
is hashed twice, with mirrored source / destination addresses. Ideally,
for a sufficiently oversized bucket count, that would be immaterial.
For the too-small-bucket case, hash list lengths should double.
With a too-bad
After not receiving a response for two weeks, here is a second try:
Sorry. Here we go:
The attached patch adds a new option --terminate to the MARK target
which lets the user choose if MARK should return IPT_CONTINUE
(normal behaviour) or NF_ACCEPT (to terminate further rule processing).
[...]
A
Hi all,
find appended, for use and review, my C version of a /proc/net/ip_conntrack
based hash function and bucket occupation test program. Here is a short
usage summary:
- extract crc32.c and cttest.c from the attachments
- compile with gcc -o cttest cttest.c
- by default,
This version adds a '-b X' option. If you specify that, all TCP conntracks
with timeouts less than X get ignored. For example, use -b 431940 to only
count the conntracks active within the last 60 seconds.
I guess I just don't understand conntrack well enough.
How does -b 431940 map
BTW it turns out this leads to a good demonstration of my point about
making the modulus relatively prime to the size of the data.
Missed that point on first reading. I totally agree with you on this
point. We already pay for the modulus in the current code, so we may
as well take advantage of
Hi,
/* save information important for NAT in
ct-help.ct_foo_info; */
here!!
but in ip_conntrack.help, there is no member ct_foo_info in the union help.
The foo was an _example_, of course
On Thu, Jun 27, 2002 at 12:01:05PM -0500, Glover George wrote:
Yes, SIP can get very hairy, because it's primarily XML-ish based.
SIP is very similar to HTTP, and thus any special protocol action would
best be handled by the traditional application level gateway. The REDIRECT
target can be
Hello Don,
I had mentioned that I was hoping to remember/find the appropriate
formula for the probability of a hash bucket reaching its limit.
[SNIP calculation - thanks for sharing your research!]
I assume that the hash function maps a connection to each bucket
with equal probability, P =
In my opinion, a first step should be to reconsider timeout values but
also timer mechanisms.
No. A first step MUST be pointing out that the current timeouts become
a problem in REAL LIFE. Right now you are speculating. On all setups
I personally know, the timeouts are NOT a problem.
Jean-Michel,
PS: could anybody redo similar tests so that we can compare the results
and stop killing the messenger, please? ;o)
Just so you don't get the wrong impression: I am not trying to shoot
the messenger, I'm trying to shoot incomplete messages. Please, don't
become discouraged in
Don,
(hope you don't mind me replying on-list)
On Sun, Jun 23, 2002 at 11:30:13PM -0700, Don Cohen wrote:
Patrick Schaaf writes:
Nevertheless, it does point out a valid optimization chance. We discussed
that months ago, and it's still there.
What's that?
Looking at the quality
But I don't think that the hash function is the problem in that case.
In fact, there is no hash function that solves that problem, since the
attacker can always feed you data that ruins the hash function, unless
of course you want the function to differ from one machine to the next
and in
On Sun, Jun 23, 2002 at 09:46:29PM -0700, Don Cohen wrote:
From: Jean-Michel Hemstedt [EMAIL PROTECTED]
Since in my test, each connection is ephemeral (10ms) ...
One question here is whether the traffic generator is acting like
a real set of users or like an attacker. A real user
The built-in chain and target names are all fully capitalized.
What about the simple restriction that user-defined chain names cannot be
strings consisting of capital letters only?
This is again breaking backwards compatibility. For example, most of my
rulesets contain two chains, named
What about simply returning by an error code if there is an attempt to
create a chain which clashes with a target name?
Wasn't there a recent discussion about "how do I find all available
target names"? But I agree in principle, that would be the least
intrusive short-term rationalization of the
replace matches with transformations or plugin function calls,
respectively
Hey, let's put Parrot in the kernel and use that.
:)
Patrick
we just bought a server with two CPUs. Sure Linux will support both of them, but
someone reminded me that netfilter cannot support SMP. Is that true? And is there
some way to change that?
It is NOT true.
regards
Patrick
Alternatively, if no answer comes back at all, the conntrack is in the
(extra) state UNREPLIED. When the connection table becomes full, UNREPLIED
connections are recycled preferentially.
Hey, this is not fair!
The behaviour is as fair as it can be, IMO.
This behaviour is not
My point is not to go on and on and on, but deeper and deeper
and deeper. Once I reach the right amount of knowledge about it,
I will fix it (or I will make it so clear that anybody who
has followed the thread will be able to fix it).
The latter approach won't be effective. You want a
I hope this is the right list and that you're not annoyed.
As a usage question, the user mailinglist, [EMAIL PROTECTED],
would have been the right place to ask.
What I am looking for:
- isn't there a particular piece of code where I can manually change kern to
local4 or whatsoever?
Use the
The funny thing is that if you have a bad ruleset, you can easily be
DOSed by some external people which are just sending random ACK packets.
Those ACKs will create entries in your connection table as ESTABLISHED
connections with a time-out of 5 days ! 8-)
Well no, since the
Hello,
iptables -A FORWARD -p tcp -m state --state NEW -j ACCEPT
The Problem:
When sending an ACK packet, the packet is allowed through the filter,
due to the second rule. This means that ACK packets are accepted as
being in the state NEW and do not create an entry in the
The behaviour is intentional. The reason is connection pickup. Imagine
a situation where the firewall reboots. All active conntracking information
will be lost. With the observed behaviour, connections are relearned
on the fly as they create new traffic.
I simply disagree with you on
Hi all,
regarding our recurring how do I filter based on URL thread here,
and just for everybody's information, have a look here:
http://www.linuxvirtualserver.org/software/ktcpvs/ktcpvs.html
(I'm not involved, just saw it on freshmeat)
best regards
Patrick
SOAP (and Web services generally) defeat this technique by overloading
port 80 to expose a variety of services. Because SOAP has no real
security model, poorly written handlers for SOAP requests represent a
real security risk. Consequently it isn't sufficient to filter packets
based on
I think I want to do it at user-space-handler-over-netfilter level.
Reason? Suppose we have a network topology like this:
A B C D
| | | |
+---+---+---+---+
|
[firewall]
It seems I hit the same problem when trying to setup an IPSec tunnel between
two routers (running Linux 2.4.18+newnat). FTP data transfer is broken. Control
connection is ok.
After some investigations it seems NAT doesn't recognize IPSec packets being
part of the FTP connection and so they
Felix,
we have a ftp connection which passes through two routers which have a
IPSEC tunnel in between. Both routers have nat and conntrack modules
compiled into the kernel but there are no rules at all.
You mean there are also no filter rules? Good. That excludes much.
[a simple ftp
Hi Joakim all,
We (Martin and I) have discussed a table, border, that is the absolutely
first thing being traversed after leaving the netcard driver.
I like the idea (a lot!), as well as the placement, but I'm not really
fond of the name. May I suggest one of two things?
A)
In addition to what others have already pointed out, even if you know
what modules are available in user and kernel space, you also need to
know which version of each of these is present, and whatever is talking to
all of this has to have detailed knowledge of the relative capabilities
of each
I'm an IT TAFE student currently completing a diploma in information
technology system administration. My problem is that our final project is to build a
Linux gateway that masqs IPs based on Novell context from Windows 2k servers. So far
we have managed to basically copy the client's IP
On Sat, Mar 30, 2002 at 10:51:19AM +0100, Henrik Nordstrom wrote:
Actually the tool is there already.. called iptables-restore + cron.
Simply set up your ruleset in a file using iptables-save syntax, but
using hostnames instead of IP addresses for the dynamic-DNS addresses.
Then
Two questions regarding this strange effect:
a) Is there a performance penalty for this huge number of connections in
conntrack?
Yes. This has been discussed, with possible remedies (hashsize parameter
to ip_conntrack) mentioned, about a week ago. See the thread at
On Fri, Mar 29, 2002 at 09:59:58AM +0100, Harald Welte wrote:
b) Why does it occur primarily with the Cisco Content Switch? The
numbers were much lower before utilising the content switch! So the
CSS is ACK flooding! Is there a strange interaction between the CSS and
I'm trying to patch 1.2.6a into a 2.4.16 kernel with the inclusion of
the String Matching filter, but I got this error when compiling the
kernel at the make modules stage. The error as follow :
ipt_string.c:80:72: macro max passed 3 arguments, but takes just 2
This is a well-known API
I'd rather like to have this information to be gathered at runtime within
the kernel, where one could read out the current hash occupation via /proc
or some ioctl.
OK, that's what I wanted to hear :-)
Actually, the interesting statistics for a hash are not that large, and all
aggregate:
Hello Jean-Michel,
thanks for your input. I appreciate it.
On Tue, Mar 19, 2002 at 03:56:32PM +0100, Jean-Michel Hemstedt wrote:
I'm not a conntrack specialist, neither a kernel hacker, but I've
some experience with ip hash caches in access servers (BRAS)
that may be useful(?):
some
I agree that, since we already use a full division when calculating
the hash function, we may as well use a power-of-two hashsize. This will
waste some room in the last OS page of the array, but that's irrelevant
given the overall size of the array.
Damn. The second line is of course
I'm working on a proxy type program, using REDIRECT to catch (tcp) traffic,
and I'm seeing severe network degradation above ~2000 connections.
(computer: 1Gb p3, 2Gb memory, kernel 2.4.18 + aa1 patch)
Two questions first: how many packets/second are these 2000 connections?
And, do you have a
Hello Martin,
thanks for your numbers.
with 16384 hashbuckets and a maximum of 131072 tracked connections it took
7.5 seconds to perform 1 million lookups in the hashtable (using
__ip_conntrack_find from userspace).
Was this 1 million lookups to a random hash bucket each, with guaranteed
no match? Assuming this in the
grep conntrack /proc/slabinfo (about 4200 connections at the time)
ip_conntrack 4527 4532 352 412 412 1
8192 buckets, 65536 max
I guess the hash function fails for my setup (note: this is a test setup,
connection originating from a single ip to a single ip)
... and probably
On Sun, Mar 17, 2002 at 10:12:14AM +0100, Harald Welte wrote:
On Sat, Mar 16, 2002 at 03:35:29PM -0800, Americo Melara wrote:
Where do I take the timestamps to track the packet throughout the stack?
here:
1. netif_rx (net/core/dev.c)
2. ip_local_deliver (net/ipv4/ip_input.c) before
Hello Americo,
Where do I take the timestamps to track the packet throughout the stack?
here:
1. netif_rx (net/core/dev.c)
2. ip_local_deliver (net/ipv4/ip_input.c) before it calls NF_HOOK
[...]
-- Personal question, would the community be interested in seeing these
results?
I would
On Thu, Mar 14, 2002 at 02:32:51PM -0800, Americo Melara wrote:
Hi, I'm working on my thesis and need some help. I am doing performance
measurements to understand how much overhead iptables creates in the stack when
processing a single packet by varying the number and type of rules, and
Hello Sumit,
not that I want to slow down your enthusiasm for the iptables module
framework too much, but did you try if you can configure your syslogd
for simultaneous logging to several remote destinations? It can already
log the same message to both files and remote systems, maybe two remote