Problem
I have a filtering bridge which connects in/out to another firewall
(yea, yea, paranoid, I know) and to the local LAN.
I run snort on the various bits of network cable, watching the outside
and inside bridges, and cross-correlating.
My problem appears to be that there is only one state table in the
kernel for all PF connections, and so I need to ensure that only one
interface creates state table entries.
Hmmm. I'll explain by flow:
1) I send a tcp SYN from my PC to my PF-bridge on fxp0 (bridge0).
2) The bridge sends this to my firewall on fxp1  (bridge0).
3) The firewall sends this on to the PF-bridge on fxp2  (bridge1).
4) The bridge sends this to my gateway out of fxp3 (bridge1).

Now I can add pass rules without keep state anywhere, but if I
put keep state on an interface on bridge0, it naturally sees the
same packet again on bridge1 and drops it, because it's expecting
a reply, not a duplicate.

Does this sound "right", and does anyone know how I can get around
it? I'd like to keep state on all rules, if possible.
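To make the layout concrete, the rules look roughly like this (a sketch from memory, with the interface names from the flow above; the stateless version works, the commented-out stateful line is what breaks things):

```pf
# stateless passes work fine on every interface:
pass in  on fxp0 all
pass out on fxp1 all
pass in  on fxp2 all
pass out on fxp3 all

# but stateful filtering on bridge0 breaks bridge1, e.g.:
# pass in on fxp0 all keep state
# -> the state entry created on fxp0 now expects a *reply*;
#    the same packet reappearing on fxp2 looks like a bogus
#    duplicate and gets dropped
```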



Dom
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Dom De Vitto                                       Tel. 07855 805 271
http://www.devitto.com                         mailto:[EMAIL PROTECTED]
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of Daniel Hartmeier
Sent: Thursday, July 03, 2003 12:39 AM
To: Adam Getchell
Cc: [EMAIL PROTECTED]
Subject: Re: Adaptive timeouts


I'm not sure I understand what you meant exactly. The age of a state
(the time since it was created) doesn't matter at all. Only the number
of states does.

The number of states (that exist at the given time) decides the scaling
factor. As long as the number is lower than adaptive.start, the factor
is 100%. If it reaches adaptive.end, the factor is 0%. In between, the
factor decreases linearly as the number of states increases. So, in the
example

> set timeout { adaptive.start 6000, adaptive.end 12000 }

states factor
1      100%
1000   100%
5000   100%
6000   100%
7500    75%
9000    50%
10500   25%
12000    0%
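The linear interpolation behind that table can be sketched in a few lines of Python (my own illustration, not pf's actual code):

```python
def scale_factor(states, start=6000, end=12000):
    """Adaptive timeout scaling: 100% at or below adaptive.start,
    0% at or above adaptive.end, linear in between."""
    if states <= start:
        return 1.0
    if states >= end:
        return 0.0
    return (end - states) / (end - start)

# reproduces the table above
for n in (1, 1000, 5000, 6000, 7500, 9000, 10500, 12000):
    print(n, f"{scale_factor(n):.0%}")
```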

Each state entry uses a specific base timeout value, depending on
protocol and condition of the state (like, an established TCP connection
uses a base value of tcp.established, 86400 seconds by default). For
each state, its particular base value is multiplied by the factor
calculated above (same factor for all states).

If you specify adaptive.start/.end in a rule (as opposed to the global
settings), the scaling for states created by that rule is based on the
number of existing states created by that rule and on the rule's own
start and end values. The total number of states and the global
start/end values are then irrelevant for states created by that rule:
only one scaling factor gets applied, not both the global and the
per-rule one.

Maybe it's easier to understand if I describe the algorithm that
calculates when a given state entry expires (this is recalculated
whenever an input parameter changes):

1) does the rule that created the state have its own adaptive.start
   and adaptive.end values?

   a) if so, use those two values and the number of states that
      currently exist that were created from the same rule

   b) if not, use the global values, and the total number of
      states that currently exist (created by any rule)

2) calculate the scale factor, based on the number of states, start
   and end, either from a) or b)

3) check the protocol and condition of the state entry to get the
   base timeout value (tcp.first, tcp.opening, tcp.established,
   etc.); note that a rule can also override the global timeout
   values with its own, applying to all states created from the
   rule

4) multiply the base value by the factor; this is the number
   of seconds that the state entry is kept alive without matching
   a packet

5) add this number to the last time a packet matched the state;
   this gives the time when the state should expire

6) if that time lies in the past, remove the state
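Steps 1-6 can be sketched as a toy model (my paraphrase in Python, not the kernel code; tcp.first/tcp.opening defaults quoted from pf.conf as I remember them):

```python
def scale_factor(states, start, end):
    # 2) linear scale: 100% at/below start, 0% at/above end
    if states <= start:
        return 1.0
    if states >= end:
        return 0.0
    return (end - states) / (end - start)

# 3) base timeouts by state condition (pf defaults, in seconds)
BASE_TIMEOUTS = {"tcp.first": 120, "tcp.opening": 30, "tcp.established": 86400}

def is_expired(state, now, global_start, global_end, total_states):
    rule = state["rule"]
    if rule.get("adaptive_start") is not None:
        # 1a) rule has its own adaptive values: use them plus the
        #     number of states created by this rule
        start, end = rule["adaptive_start"], rule["adaptive_end"]
        count = rule["state_count"]
    else:
        # 1b) otherwise: global values plus the total state count
        start, end, count = global_start, global_end, total_states
    # 4) effective timeout = base value * factor
    timeout = BASE_TIMEOUTS[state["condition"]] * scale_factor(count, start, end)
    # 5) expiry time = last matching packet + effective timeout
    # 6) the state is removed once that time lies in the past
    return state["last_packet"] + timeout <= now
```

So with 9000 total states against the global start/end of 6000/12000, an idle established connection expires after 86400 * 50% = 43200 seconds instead of 86400.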

So, the age of a state entry is irrelevant, what matters is the last
time a packet matched the state. And how many other states exist (scale
factor). And the base timeout value.

A flood of new connections can't kill existing established connections,
since they'll get the same scale factor, and tcp.first/tcp.opening is
much smaller than tcp.established. But additional new connections can
cause an established _idle_ connection to get removed earlier than it
would be without additional connections. But that's the point :)

HTH,
Daniel


