Not a TCP expert, but the MTU is nearly always 1500 (or just under), hence
your limit.  Sending packets larger than the MTU leads to fragmentation.
Fragmentation leads to drops and re-transmissions (depending on the
don't-fragment bit) and performance problems.  Performance problems lead to
frustration and anger.  Anger leads to the dark side of the Force.
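For context (a back-of-the-envelope sketch, not gospel): a standard
1500-byte Ethernet MTU leaves 1460 bytes of TCP payload (the MSS) once the
20-byte IPv4 and 20-byte TCP headers come off, and TCP options such as
timestamps shave off a bit more, which is roughly the neighborhood where
numbers like ~1444 live:

```shell
mtu=1500      # standard Ethernet MTU
ip_hdr=20     # IPv4 header, no options
tcp_hdr=20    # TCP header, no options

echo "MSS: $((mtu - ip_hdr - tcp_hdr))"                        # MSS: 1460

# With a 12-byte TCP timestamp option the usable payload drops further:
echo "MSS with timestamps: $((mtu - ip_hdr - tcp_hdr - 12))"   # MSS with timestamps: 1448
```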

You can raise the MTU to around 9000 by enabling jumbo frames, but you'd
need to support it across the board (pfSense, routers, switches, servers,
etc.).  It's a hassle that's probably not worth the effort in terms of
gains.  Some people do it to boost iSCSI throughput, but others say the
gain is dubious at best.  I'd make sure some doofus didn't enable jumbo
frames on your NFS server; if so, turn it off, and check the MTU setting
in the network stack on the NFS server as well.
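Since the server runs Linux, a quick way to eyeball the MTU on every
interface (a sketch; the `eth0` name in the comment is just an example,
substitute whatever your NFS-facing interface is called):

```shell
# Print the configured MTU for each network interface via sysfs.
for dev in /sys/class/net/*; do
  printf '%s: MTU %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done

# If jumbo frames were left on, drop the interface back to the standard
# 1500 (example interface name, needs root):
#   ip link set dev eth0 mtu 1500
```

Loopback will show a huge MTU (65536 on most Linux boxes); that's normal
and not what you're hunting for.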

I may not know what the hell I'm talking about, though, so someone else
should feel free to jump in and tell me what an idiot I am.



On Wed, Nov 5, 2014 at 6:47 PM, Adam Thompson <[email protected]> wrote:

> Problem: really, really bad performance (<10Mbps) on both NFS (both tcp
> and udp) and CIFS through pfSense.
>
> Proximate cause: running a packet capture on the Client shows one smoking
> gun - the TCP window size on packets sent from the client is always ~1444
> bytes.  Packets arriving from the server show a TCP window size of ~32k.
>
>
> The Network:
>                     +------+
>                     |Router|
>                     +--+---+
>                        |
>                 --+----+----+--
>                   |         |
>                +--+---+  +-------+
>                |Client|  |pfSense|
>                +------+  +--+----+
>                             |
>                           --+---+--
>                                 |
>                              +--+---+
>                              |Server|
>                              +------+
>
>     - Client and pfSense both have Router as default gateway.
>     - pfSense has custom outbound NAT rules preventing NAT between Server
> subnet and Client subnet, but NAT'ing all other outbound connections.
>     - Router has static route pointing to Server subnet via pfSense.
>
> Hardware:
>     Router is an OpenBSD system (a CARP cluster, actually) running on
> silly-overpowered hardware.
>     Client is actually multiple systems, ranging from laptops to high-end
> servers.
>     Server is a Xeon E3-1230v3 running Linux, exporting a filesystem via
> both NFS (v2, v3 & v4) and CIFS (samba).
>     pfSense is v2.1.5 (i386) on a dual P-III 1.1GHz, CPU usage typically
> peaks at around 5%.
>
>
> Performance on local Server subnet (i.e. from a same-subnet client) is
> very good on all protocols, nearly saturating the gigabit link.
> Traffic outbound from the server subnet to the internet (via Router) moves
> at a decent pace, this firewall can typically handle ~400Mbps without any
> trouble, IIRC synthetic benchmarks previously showed it can peak at over
> 800Mbps.
>
> Based on the FUBAR TCP window sizes I've observed, I assume pfSense is
> doing something to my TCP connections... but why are only the non-NAT'd
> connections affected?  I know there's an option to disable pf scrub, but
> that's only supposed to affect NFSv3 (AFAIK), and this also affects
> NFSv4-over-TCP and CIFS.
>
> --
> -Adam Thompson
>  [email protected]
>
> _______________________________________________
> List mailing list
> [email protected]
> https://lists.pfsense.org/mailman/listinfo/list
>