No confusion. The read and write buffer sizes would be above layer 3. VMware
offers little ability to modify read and write sizes. It did inspire me to
find this: https://kb.vmware.com/s/article/1007909
NFS.ReceiveBufferSize
This is the size of the receive buffer for NFS sockets. This value is chosen
based on internal performance testing. VMware does not recommend adjusting this
value.
NFS.SendBufferSize
The size of the send buffer for NFS sockets. This value is chosen based on
internal performance testing. VMware does not recommend adjusting this value.
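On an ESXi host these show up as advanced settings, so one way to check the values in effect is with esxcli. A sketch, assuming the option paths match the NFS.* names from the KB article (run on the host itself):

```shell
# Inspect the current NFS socket buffer settings (read-only check).
# Paths assume the advanced-settings names map to /NFS/<option>.
esxcli system settings advanced list -o /NFS/SendBufferSize
esxcli system settings advanced list -o /NFS/ReceiveBufferSize
```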
...
ESXi 6.0, 6.5, 6.7:
Default Net.TcpipHeapMax is 512 MB. The default send/receive socket buffer
size for NFS is 256 KB each, so each socket consumes ~512 KB. For 256 shares,
that is ~128 MB. The default Net.TcpipHeapMax is sufficient even for 256
mounts; it's not required to increase it.
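The arithmetic above can be sketched quickly (a rough estimate using the default values quoted, ignoring per-socket overhead beyond the two buffers):

```python
# Rough heap-usage estimate for NFS socket buffers on ESXi 6.x,
# using the defaults quoted above.
send_buf_kb = 256   # default NFS.SendBufferSize, in KB
recv_buf_kb = 256   # default NFS.ReceiveBufferSize, in KB
mounts = 256        # maximum number of NFS mounts considered

per_socket_kb = send_buf_kb + recv_buf_kb   # ~512 KB per socket
total_mb = per_socket_kb * mounts / 1024    # ~128 MB for 256 mounts
heap_max_mb = 512                           # default Net.TcpipHeapMax

print(per_socket_kb, total_mb, total_mb < heap_max_mb)
```

So even at the 256-mount maximum, the buffers fit comfortably inside the default heap.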
Also, the man page for mount_nfs implies -w is useful for UDP mounts. I have
verified that this mount is using TCP.
-w writesize
Set the write data size to the specified value. Ditto the
comments w.r.t. the -r option, but using the "fragments dropped
after timeout" value on the server instead of the client. Note
that both the -r and -w options should only be used as a last
ditch effort at improving performance when mounting servers that
do not support TCP mounts.
-Steve S.
-----Original Message-----
From: [email protected] <[email protected]> On Behalf Of Carsten Reith
Sent: Wednesday, December 6, 2023 11:41 AM
To: [email protected]
Subject: Re: NFS Server performance
Steven Surdock <[email protected]> writes:
> The client is VMWare ESXi, so my options are limited. I tried
> enabling jumbo frames (used 9000) and this made very little
> difference.
>
Is it possible that you are confusing the network layers here? Jumbo frames are
layer 2; the read and write sizes referred to apply at layer 3. You can try to
set them as suggested, independently of the frame size.