Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-19 Thread Marko Zec
On Thu, 19 Mar 2020 14:33:34 +0300
Lev Serebryakov  wrote:

> On 19.03.2020 7:14, Neel Chauhan wrote:
> 
> > However, if you know, where in the code does libalias use only 4096
> > buckets? I want to know in case I want/have to switch back to IPFW.
>  4096 was my mistake; it is 4001, and it must be prime. It is defined here:
> 
> sys/netinet/libalias/alias_local.h:69-70:
> 
> #define LINK_TABLE_OUT_SIZE     4001
> #define LINK_TABLE_IN_SIZE      4001

Out of curiosity, why exactly _must_ the hash size be a prime here?
Doing a quick

fgrep -R powerof2 /sys/netinet | fgrep hash

reveals that a completely different line of thought prevails there, and
probably elsewhere as well?  What gives?
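
For anyone comparing the two conventions, a minimal sketch (not libalias or
sys/netinet code; the names are made up for illustration) of how each approach
maps a flow key to a bucket index:

#include <stdint.h>

#define PRIME_BUCKETS   4001    /* libalias style: prime table size */
#define POW2_BUCKETS    4096    /* powerof2 style: mask off the low bits */

/* Prime-sized table: the modulo helps mix keys even if the hash is weak. */
static inline uint32_t
bucket_prime(uint32_t key)
{
	return (key % PRIME_BUCKETS);
}

/* Power-of-two table: no division, but relies on the hash function itself
 * spreading the low-order bits well. */
static inline uint32_t
bucket_pow2(uint32_t key)
{
	return (key & (POW2_BUCKETS - 1));
}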

Marko


Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-19 Thread Lev Serebryakov
On 19.03.2020 7:14, Neel Chauhan wrote:

> However, if you know, where in the code does libalias use only 4096
> buckets? I want to know in case I want/have to switch back to IPFW.
 4096 was my mistake; it is 4001, and it must be prime. It is defined here:

sys/netinet/libalias/alias_local.h:69-70:

#define LINK_TABLE_OUT_SIZE     4001
#define LINK_TABLE_IN_SIZE      4001
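
The "easy fix" discussed elsewhere in this thread amounts to bumping these two
constants to a larger prime and rebuilding the kernel/module, e.g. using the
131101 value mentioned in the thread (any sufficiently large prime should work):

#define LINK_TABLE_OUT_SIZE     131101
#define LINK_TABLE_IN_SIZE      131101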


-- 
// Lev Serebryakov



signature.asc
Description: OpenPGP digital signature


Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-19 Thread Eugene Grosbein
19.03.2020 18:19, Lev Serebryakov wrote:

>> Don't you think that now that ipfw nat builds libalias in the kernel
>> context, it could scale with maxusers (sys/systm.h)?
>>
>> Something like (4001 + (maxusers-32)*8), so it grows with the amount of
>> physical memory and is kept small for low-memory systems.
>  IMHO, "maxusers" is useless now. It should be a sysctl, like the size of
> the dynamic state table of IPFW itself. I have a low-memory system where
> the WHOLE memory is dedicated to firewall/NAT, for example. I need really
> huge tables (131101) to make it work "bad" rather than "terrible".

Sure, a dedicated sysctl. I mean that its default value should be auto-tuned
based on maxusers, which grows with installed RAM by default.
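
A minimal sketch of that idea (not committed code; the helper name is made up),
taking maxusers (the sys/systm.h global that already scales with installed RAM)
as its input and leaving the sysctl override itself out:

/* Default libalias bucket count derived from maxusers. */
static int
alias_default_table_size(int maxusers)
{
	int size;

	size = 4001 + (maxusers - 32) * 8;	/* Eugene's proposed formula */
	if (size < 4001)
		size = 4001;			/* keep low-memory systems at today's default */
	return (size);
}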





Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-19 Thread Lev Serebryakov
On 19.03.2020 9:42, Eugene Grosbein wrote:

>>> I’d expect both ipfw and pf to happily saturate gigabit links with NAT, 
>>> even on quite modest hardware.
>>> Are you sure the NAT code is the bottleneck?
>>  ipfw nat is very slow, really. There are many reasons, and one of them
>> (easily fixable, but you need to patch the sources and rebuild the
>> kernel/module) is that `libalias` uses only 4096 buckets in its state
>> hash table by default. So it could saturate a 1 Gbps link if you have 10
>> TCP connections, but it could not saturate 100 Mbit if you have, say,
>> 100K UDP streams.
> 
> It's really 4001, which is (and should be) a prime number.
 Oh, yes, I'd forgotten that detail.

> Don't you think that now that ipfw nat builds libalias in the kernel
> context, it could scale with maxusers (sys/systm.h)?
> 
> Something like (4001 + (maxusers-32)*8), so it grows with the amount of
> physical memory and is kept small for low-memory systems.
 IMHO, "maxusers" is useless now. It should be a sysctl, like the size of
the dynamic state table of IPFW itself. I have a low-memory system where
the WHOLE memory is dedicated to firewall/NAT, for example. I need really
huge tables (131101) to make it work "bad" rather than "terrible".

-- 
// Lev Serebryakov





Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-19 Thread Eugene Grosbein
19.03.2020 13:42, Eugene Grosbein wrote:

> It's really 4001, which is (and should be) a prime number.

If we decide to auto-tune this, here is a small table of prime numbers to
stick with:

4001
8011
12011
16001
24001
32003
48017
64007
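
If that formula is adopted, a sketch (hypothetical helper, not committed code)
of rounding the computed target up to the nearest prime in this table could
look like:

static const int alias_primes[] = {
	4001, 8011, 12011, 16001, 24001, 32003, 48017, 64007
};

/* Round a maxusers-derived target up to the nearest listed prime,
 * capping at the largest entry. */
static int
alias_pick_table_size(int target)
{
	unsigned i;

	for (i = 0; i < sizeof(alias_primes) / sizeof(alias_primes[0]); i++)
		if (alias_primes[i] >= target)
			return (alias_primes[i]);
	return (alias_primes[i - 1]);
}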



Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-19 Thread Eugene Grosbein
18.03.2020 21:25, Lev Serebryakov wrote:

> On 18.03.2020 9:17, Kristof Provost wrote:
> 
>>> Which firewall gives better performance, IPFW's In-Kernel NAT or PF NAT?
>>> I am dealing with thousands of concurrent connections, but only
>>> browsing-level bandwidth at any one time, with Tor.
>>>
>> I’d expect both ipfw and pf to happily saturate gigabit links with NAT, even 
>> on quite modest hardware.
>> Are you sure the NAT code is the bottleneck?
>  ipfw nat is very slow, really. There are many reasons, and one of them
> (easily fixable, but you need to patch the sources and rebuild the
> kernel/module) is that `libalias` uses only 4096 buckets in its state
> hash table by default. So it could saturate a 1 Gbps link if you have 10
> TCP connections, but it could not saturate 100 Mbit if you have, say,
> 100K UDP streams.

It's really 4001, which is (and should be) a prime number.

Don't you think that now that ipfw nat builds libalias in the kernel
context, it could scale with maxusers (sys/systm.h)?

Something like (4001 + (maxusers-32)*8), so it grows with the amount of
physical memory and is kept small for low-memory systems.



Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-18 Thread Neel Chauhan

Thanks for telling me this.

I switched to PF and it performs better.

However, if you know, where in the code does libalias use only 4096
buckets? I want to know in case I want/have to switch back to IPFW.


-Neel

On 2020-03-18 07:25, Lev Serebryakov wrote:

> On 18.03.2020 9:17, Kristof Provost wrote:
> 
>>> Which firewall gives better performance, IPFW's In-Kernel NAT or PF NAT?
>>> I am dealing with thousands of concurrent connections, but only
>>> browsing-level bandwidth at any one time, with Tor.
>>
>> I’d expect both ipfw and pf to happily saturate gigabit links with NAT,
>> even on quite modest hardware.
>>
>> Are you sure the NAT code is the bottleneck?
> 
>  ipfw nat is very slow, really. There are many reasons, and one of them
> (easily fixable, but you need to patch the sources and rebuild the
> kernel/module) is that `libalias` uses only 4096 buckets in its state
> hash table by default. So it could saturate a 1 Gbps link if you have 10
> TCP connections, but it could not saturate 100 Mbit if you have, say,
> 100K UDP streams.
> 
>  I don't know about pf nat.



Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-18 Thread Lev Serebryakov
On 18.03.2020 9:17, Kristof Provost wrote:

>> Which firewall gives better performance, IPFW's In-Kernel NAT or PF NAT?
>> I am dealing with thousands of concurrent connections, but only
>> browsing-level bandwidth at any one time, with Tor.
>>
> I’d expect both ipfw and pf to happily saturate gigabit links with NAT, even 
> on quite modest hardware.
> Are you sure the NAT code is the bottleneck?
 ipfw nat is very slow, really. There are many reasons, and one of them
(easily fixable, but you need to patch the sources and rebuild the
kernel/module) is that `libalias` uses only 4096 buckets in its state
hash table by default. So it could saturate a 1 Gbps link if you have 10
TCP connections, but it could not saturate 100 Mbit if you have, say,
100K UDP streams.

 I don't know about pf nat.
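
To put rough numbers on the 100K-streams case (a back-of-envelope sketch with
made-up flow counts, not a measurement), the average hash chain length is
simply flows divided by buckets:

#include <stdio.h>

int
main(void)
{
	unsigned flows = 100000;	/* e.g. 100K concurrent UDP streams */

	printf("4001 buckets:   ~%u entries per bucket\n", flows / 4001);   /* ~24 */
	printf("131101 buckets: ~%u entries per bucket\n", flows / 131101); /* < 1 */
	return (0);
}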

-- 
// Lev Serebryakov





Re: IPFW In-Kernel NAT vs PF NAT Performance

2020-03-18 Thread Kristof Provost


> On 18 Mar 2020, at 13:31, Neel Chauhan  wrote:
> 
> Hi freebsd-net@ mailing list,
> 
> Right now, my firewall is a HP T730 thin client (with a Dell Broadcom 5720 
> PCIe NIC) running FreeBSD 12.1 and IPFW's In-Kernel NAT. My ISP is "Wave G" 
> in the Seattle area, and I have the Gigabit plan.
> 
> Speedtests usually give me 700 Mbps down/900 Mbps up, and 250-400 Mbps 
> down/800 Mbps up during the Coronavirus crisis. However, I'm having problems 
> with an application (Tor relays) where I am not able to use a lot of 
> bandwidth for Tor, Coronavirus-related telecommuting or not. My Tor server is 
> separate from my firewall.
> 
> Which firewall gives better performance, IPFW's In-Kernel NAT or PF NAT?
> I am dealing with thousands of concurrent connections, but only
> browsing-level bandwidth at any one time, with Tor.
> 
I’d expect both ipfw and pf to happily saturate gigabit links with NAT, even on 
quite modest hardware.
Are you sure the NAT code is the bottleneck?

Regards,
Kristof


IPFW In-Kernel NAT vs PF NAT Performance

2020-03-17 Thread Neel Chauhan

Hi freebsd-net@ mailing list,

Right now, my firewall is a HP T730 thin client (with a Dell Broadcom 
5720 PCIe NIC) running FreeBSD 12.1 and IPFW's In-Kernel NAT. My ISP is 
"Wave G" in the Seattle area, and I have the Gigabit plan.


Speedtests usually give me 700 Mbps down/900 Mbps up, and 250-400 Mbps
down/800 Mbps up during the Coronavirus crisis. However, I'm having problems
with one application (Tor relays): I am not able to push much bandwidth
through Tor, Coronavirus-related telecommuting or not. My Tor server is
separate from my firewall.


Which firewall gives better performance, IPFW's In-Kernel NAT or PF NAT?
I am dealing with thousands of concurrent connections, but only
browsing-level bandwidth at any one time, with Tor.


Also, I hope you all stay safe and healthy during the Coronavirus 
crisis.


-Neel

===

https://www.neelc.org/