Hi,

I'd like to request a code review for the changes addressing CR 4953763 and 6216670:

http://cr.opensolaris.org/~dain/4953763/ 

For 4953763, this change allows users to use /etc/default/nfs and /etc/default
to configure the TCP send and receive buffer sizes for NFS server and client
connections, respectively. Users also have the option to force both the NFS
server and client to use the system-wide default buffer size (as in the
current release).
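As a rough illustration of the kind of entries involved (the keyword names
below are hypothetical placeholders, not necessarily the names used in the
actual change; see the webrev for the real ones):

```shell
# /etc/default/nfs -- illustrative sketch only. The keyword names
# NFS_SERVER_TCP_SNDBUF/NFS_SERVER_TCP_RCVBUF are assumptions made for
# this example; consult the webrev for the actual tunable names.
#
# TCP send/receive buffer sizes, in bytes, for NFS server connections.
# Leaving these commented out would fall back to the system-wide
# default buffer size, as in the current release.
NFS_SERVER_TCP_SNDBUF=1048576
NFS_SERVER_TCP_RCVBUF=1048576
```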

For 6216670, this change automatically sets the TCP send and receive buffer
size for both NFS server and client connections to 1MB. This value can be
tuned via the /etc/default/nfs and /etc/default files. The value of 1MB was
chosen for these reasons:
. Performance testing using vdbench on tmpfs shows write and read tests
  running at near the line speed of 1GigE. There was also a significant
  performance gain in IPoIB testing with vdbench.
. This value is also used in the Sun storage appliance product, after
  extensive evaluation.
. It is the maximum value allowed by the current TCP stack (see below).

Currently TCP enforces a system-wide maximum buffer size of 1MB for both
send and receive via the configuration variable 'tcp_max_buf'. So if a user
needs to configure a TCP buffer size larger than 1MB, 'tcp_max_buf' has to
be increased first using the 'ndd' command. The 'nfs.dfl' file contains
instructions on how to set the TCP send and receive buffer sizes beyond 1MB.
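For reference, raising the cap with standard 'ndd' usage would look like
the following (the 2MB value is just an example; the nfs.dfl file has the
authoritative instructions):

```shell
# Check the current system-wide maximum TCP buffer size (1MB default).
ndd -get /dev/tcp tcp_max_buf

# Raise the cap to 2MB (example value) so NFS buffer sizes above 1MB
# can take effect. Must be run as root, and the setting does not
# persist across reboots.
ndd -set /dev/tcp tcp_max_buf 2097152
```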

NOTE: the changes in nfs4cbd.c are just code-indentation fixes, required
to pass 'hg pbchk'. Somehow my webrev command does not include these
changes, so I have attached the diff here for your review.

Thanks,
-Dai
[Attachment: nfs4cbd.diff]
<http://mail.opensolaris.org/pipermail/nfs-discuss/attachments/20090617/91e1bafc/attachment.ksh>
