Bill

Thanks for that info - it looks like all our states are going to set most of
these data chunks, so they are likely to be bumping the 1K mark.

On a related point: I have bumped my maximum state table size to 100K states.
My master is currently running with around 33K states in use, but the slave
is showing 75K!  I am a bit puzzled by the discrepancy.

Any idea what is going on?

/Peter 

On Monday 15 May 2006 22:09, Bill Marquette wrote:
> This...
>
> struct pf_state {
>         u_int64_t        id;
>         u_int32_t        creatorid;
>         struct pf_state_host lan;
>         struct pf_state_host gwy;
>         struct pf_state_host ext;
>         sa_family_t      af;
>         u_int8_t         proto;
>         u_int8_t         direction;
>         u_int8_t         pad;
>         u_int8_t         log;
>         u_int8_t         allow_opts;
>         u_int8_t         timeout;
>         u_int8_t         sync_flags;
> #define PFSTATE_NOSYNC   0x01
> #define PFSTATE_FROMSYNC 0x02
> #define PFSTATE_STALE    0x04
>         union {
>                 struct {
>                         RB_ENTRY(pf_state)       entry_lan_ext;
>                         RB_ENTRY(pf_state)       entry_ext_gwy;
>                         RB_ENTRY(pf_state)       entry_id;
>                         TAILQ_ENTRY(pf_state)    entry_list;
>                         struct pfi_kif          *kif;
>                 } s;
>                 char     ifname[IFNAMSIZ];
>         } u;
>         struct pf_state_peer src;
>         struct pf_state_peer dst;
>         union pf_rule_ptr rule;
>         union pf_rule_ptr anchor;
>         union pf_rule_ptr nat_rule;
>         struct pf_addr   rt_addr;
>         struct pfi_kif  *rt_kif;
>         struct pf_src_node      *src_node;
>         struct pf_src_node      *nat_src_node;
>         u_int64_t        packets[2];
>         u_int64_t        bytes[2];
>         u_int32_t        creation;
>         u_int32_t        expire;
>         u_int32_t        pfsync_time;
>         u_int16_t        tag;
> };
>
> Note the number of embedded structs.  For a full breakdown see:
> http://www.openbsd.org/cgi-bin/cvsweb/src/sys/net/pfvar.h?rev=1.234&content-type=text/x-cvsweb-markup
>
> --Bill
>
> On 5/15/06, Peter Curran <[EMAIL PROTECTED]> wrote:
> > Thanks Holger
> >
> > I thought I remembered seeing something about this in the past, but
> > Google could not find it.
> >
> > Interesting that it is a maximum of 1K per state.  I wonder what
> > factors influence the size.
> >
> > /peter
> >
> > On Monday 15 May 2006 20:15, Holger Bauer wrote:
> > > Bill already answered this here:
> > > http://forum.pfsense.org/index.php?topic=1000.msg5953#msg5953
> > >
> > > Holger
> > >
> > > > -----Original Message-----
> > > > From: Peter Curran [mailto:[EMAIL PROTECTED]
> > > > Sent: Monday, May 15, 2006 8:54 PM
> > > > To: [email protected]
> > > > Subject: [pfSense Support] Maximum state table size
> > > >
> > > >
> > > > Can I ask Scott/Bill/Chris how big a state table I can reserve?
> > > >
> > > > We have just gone live with a pfSense pair on a pretty big website.
> > > > We are pulling a pretty consistent load of 6-8 Mbps outbound and
> > > > running with a steady 30-40K states, peaking to 50K+.
> > > >
> > > > The boxes have 256MB of memory, so plenty of head-room.
> > > >
> > > > What is the maximum size that I can wind the 'Firewall Maximum
> > > > States' variable up to?  I am currently running with 70K states
> > > > defined, but would like to go to at least 100K.
> > > >
> > > > Incidentally, the site admin has reported a strange problem with
> > > > state management on the slave firewall: if you run out of states on
> > > > the slave, you can only get it to pick up an increased limit by
> > > > rebooting.  This does not seem to be the case on the master.  This
> > > > is probably a FreeBSD/pf issue rather than pfSense, but I thought
> > > > you might like to know.
> > > >
> > > > This setup seems to be very smooth - I am seeing a consistent 3.3
> > > > Mbps out of the master's pfsync port and < 30% CPU load (2GHz
> > > > Celeron with 4 Gigabit Ethernet ports).  I have not tried polling
> > > > yet, but will do so if the CPU load goes much higher.
> > > >
> > > > I'd be glad of any advice.
> > > >
> > > > Cheers
> > > >
> > > > /Peter
> > > >
> > > > --
> > > > This message has been scanned for viruses and
> > > > dangerous content by MailScanner, and is
> > > > believed to be clean.
> > > >
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > > > For additional commands, e-mail: [EMAIL PROTECTED]
> > >
> > > ____________
> > > Virus checked by G DATA AntiVirusKit
> >
