[This may be a duplicate but I haven't seen my first try show up]

On 2007-Jun-26 08:30:48 -0700, Darren Reed <[EMAIL PROTECTED]> wrote:
>Peter Jeremy wrote:
>> ...
>> "ipnat -s" showed no "no memory" failures (though there was a
>> significant rate of "bad nat" failures).
>
>In this instance, what it probably amounts to is running out of unique
>address/port pairs on the outgoing side.

I don't think this is likely because each internal host has its own
external address.  I checked through the NAT logs and there are no
cases where the port numbers are being re-mapped.

>It may also be a reference to the NAT hash bucket being too long....
>to test this one, try "ipf -T fr_nat_maxbucket=100" and see if the problem
>you're seeing goes away.  I'm hoping this is *not* the case though...

That does seem to be the problem.  With fr_nat_maxbucket=100 (and
everything else at the default), the problems seem to have gone away,
though I am still seeing "bad nat" failures.  The "inuse" count has
stabilised at around 15,000, which implies an average NAT chain length
of about 7.
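For what it's worth, the arithmetic behind that estimate can be sketched
as follows (the table size of 2047 is my assumption about the default
ipf_nattable_sz here, not something confirmed above):

```python
# Rough average NAT hash chain length.  The table size is an ASSUMED
# default (2047); verify the real value with "ipf -T ipf_nattable_sz".
inuse = 15000        # stabilised "inuse" count from "ipnat -s"
table_sz = 2047      # assumed default hash table size
avg_chain = inuse / table_sz
print(round(avg_chain, 1))   # roughly 7.3
```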

I suspect I should increase ipf_nattable_sz.

I might try to analyse the behaviour of NAT_HASH_FN() with my traffic
mix and see how skewed the output is.
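A minimal sketch of the kind of skew analysis I have in mind, in Python.
The hash below is a toy stand-in, NOT the actual NAT_HASH_FN() macro
from ip_nat.c, and the table size is again an assumed default; the idea
is just to tally bucket occupancy for a sample of address/port pairs and
compare the longest chain against the mean:

```python
# Hash-skew sketch.  toy_hash() is a placeholder mixing step standing
# in for ipfilter's NAT_HASH_FN(); substitute the real macro's
# arithmetic and feed in the actual traffic mix to get useful numbers.
from collections import Counter
import random

TABLE_SZ = 2047   # assumed table size; set to your ipf_nattable_sz

def toy_hash(addr, port):
    # placeholder: fold address and port, mix high bits down, reduce
    k = addr + port
    return (k + (k >> 12)) % TABLE_SZ

random.seed(1)
buckets = Counter()
for _ in range(15000):             # one entry per active NAT session
    addr = random.getrandbits(32)  # random source address
    port = random.randrange(1024, 65536)
    buckets[toy_hash(addr, port)] += 1

print("mean chain length: %.1f" % (15000 / TABLE_SZ))
print("max chain length:  %d" % max(buckets.values()))
print("buckets used:      %d of %d" % (len(buckets), TABLE_SZ))
```

With uniformly random input the max chain should sit only modestly above
the mean; a real traffic mix that drives the max far above the mean
would point at skew in the hash.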

>The stats/reporting for NAT are quite bad at present - this will improve
>a lot for 5.1.

More statistics would be most welcome - even if they are only exposed
via sysctl or kstat, they would at least let me narrow down which piece
of code is triggering the problem.

-- 
Peter Jeremy
