NFS Performance: Weirder And Weirder

2013-03-16 Thread Tim Daneliuk

This is really weird.  A FreeBSD 9.1 system mounts the following:

/dev/ad4s1a    989M   625M   285M    69%    /
devfs          1.0k   1.0k     0B   100%    /dev
/dev/ad4s1d    7.8G     1G   6.1G    14%    /var
/dev/ad4s1e     48G   9.4G    35G    21%    /usr
/dev/ad4s1f    390G   127G   231G    35%    /usr1
/dev/ad6s1d    902G   710G   120G    86%    /usr1/BKU

/usr1/something (under ad4s1f) and /usr1/BKU (all of ad6s1d) are
exported for NFS mounting on the LAN.  I have tested the
speeds of these two drives locally doing a 'dd if=/dev/zero '.
Their speeds are quite comparable - around 55-60 MB/s so the
problem below is not an artifact of a slow drive.
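That local test can be written down as a small sketch (the helper name and the /tmp demonstration target are mine; point it at /usr1 and /usr1/BKU to compare the two drives):

```shell
# Sequential write test for one filesystem.  dd prints its transfer
# summary on stderr, so 2>&1 routes it into the pipe and tail keeps
# just the rate line.
write_test() {
    f="$1/ddtest.$$"
    dd if=/dev/zero of="$f" bs=1048576 count=64 2>&1 | tail -n 1
    rm -f "$f"
}

# Compare the two drives behind the exports, e.g.:
#   write_test /usr1
#   write_test /usr1/BKU
write_test /tmp
```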

The two mounts are imported like this on a Linux Mint 12 machine:


  machine:/usr1/BKU /BKU nfs   rw,soft,intr  0  0
  machine:/usr1/shared  /shared  nfs   rw,soft,intr  0  0

Problem:

When I write files from the LM12 machines to /BKU, the writes are
1/10 the speed of writes to /shared.  Reads are fine in both cases, at
near-native disk speeds.

Someone here suggested I get rid of any symlinks in the mount and I did
that to no avail.


Incidentally, the only reason I just noticed this is that I upgraded the
NIC on the FreeBSD machine, and the switch into which it connects, to
1000Base because the LM12 machine had a built-in 1000Base NIC.  I also
changed the cables on both machines to ensure they were not the problem.
Prior to this, I was bandwidth-constrained by the 100Base link, so I never
saw NFS performance as an issue.  When I upgraded, I expected faster
transfers, and when I didn't get them, I started this whole investigation.

So ... I'm stumped:

- It's not the drive or SATA ports because both drives show comparable 
performance.
- It's not the cables because I can get great throughput on one of the NFS 
mountpoints.
- It's neither NIC for the same reason.

Does anyone:

A) Have a clue what might be doing this?
B) Have a suggestion for how to track down the problem?

Thanks,

--

Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread Mehmet Erol Sanliturk
On Sat, Mar 16, 2013 at 11:49 AM, Tim Daneliuk tun...@tundraware.com wrote:


With respect to your mount points: /usr1 spans TWO different partitions:

/dev/ad4s1f    390G   127G   231G    35%    /usr1
/dev/ad6s1d    902G   710G   120G    86%    /usr1/BKU

because /usr1/BKU is a sub-directory of /usr1.

If you create a new directory, for example /usr2 with /usr2/BKU, and use
this new, separate directory for sharing, such as:

/dev/ad6s1d    902G   710G   120G    86%    /usr2/BKU

and

  machine:/usr2/BKU /BKU nfs   rw,soft,intr  0  0

will it make a difference?


Mehmet Erol Sanliturk


Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread Tim Daneliuk

On 03/16/2013 04:20 PM, Mehmet Erol Sanliturk wrote:

I just tried this and it made no difference.  The same file takes about
20x as long to copy onto the NFS mount at /usr[1|2]/BKU as it does onto
/usr1/shared.



--

Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/



Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread Mehmet Erol Sanliturk
On Sat, Mar 16, 2013 at 3:07 PM, Tim Daneliuk tun...@tundraware.com wrote:


Michael W. Lucas, in Absolute FreeBSD, 2nd Edition (ISBN
978-1-59327-151-0), suggests the following (p. 248):

On the client (mount, or fstab), use the options -o tcp, intr, soft,
-w=32768, -r=32768.

The tcp option requests a TCP mount instead of a UDP mount, because FreeBSD
NFS defaults to running over UDP.

This may be another point to check.
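For reference, on the Linux Mint client that suggestion corresponds to fstab entries roughly like these (a sketch only; the server name and paths are taken from the original post, and Linux spells the transfer sizes rsize/wsize):

```shell
# /etc/fstab on the client: force TCP and 32 KB transfer sizes
machine:/usr1/BKU     /BKU     nfs  rw,soft,intr,tcp,rsize=32768,wsize=32768  0  0
machine:/usr1/shared  /shared  nfs  rw,soft,intr,tcp,rsize=32768,wsize=32768  0  0
```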


Mehmet Erol Sanliturk


Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread iamatt
Just slap a NetApp 8.x with an Avere flash box in front if you want
NFS performance...  or Isilon.



Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread Tim Daneliuk

On 03/16/2013 05:43 PM, Mehmet Erol Sanliturk wrote:
Another very good suggestion but ... to no avail.  Thanks for pointing
this out.

--

Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/



Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread Mehmet Erol Sanliturk
On Sat, Mar 16, 2013 at 6:46 PM, Tim Daneliuk tun...@tundraware.com wrote:


I have read the messages once more.

There is a phrase: Linux Mint 12 machineS (plural).

Your description gives no information about the network setup:

Single client,
multiple clients, etc.

Then, with some assumptions:

If there is ONLY ONE client, and all of the tests are performed on this
ONLY client, the problem may be attributed to the FreeBSD server or to the
kinds of files in the different directories: one of them is encrypted
(requires decryption), another is a plain file, etc.

If there is MORE than ONE client, the problem may be attributed to any one
of the components of the network (server, clients, switch, cables, NICs,
interfering software, etc.).

Assume there are MULTIPLE clients:

Take two of those clients:

(A) Client 1: mount the two directories.
(B) Client 2: mount the two directories.

Test transmission performance:

If they are similar, inspect the server settings, directory privileges,
etc., and the file systems (one is ZFS, the other UFS2, etc.).  All of the
hardware may work properly, but if file reading cannot feed the NIC fast
enough, it will show up as degraded performance.  Increasing the NIC buffer
size (by default it is around 1000 bytes) to the maximum available may
offset the latency of supplying data to the NIC.

If they are different, check the clients' particulars:

A cable may be CAT5 (maximum 100 Mbit/s; network cards are adaptive - they
try 1 Gbit/s, and if that is not achievable they reduce the speed to
100 Mbit/s, or even 10 Mbit/s).
In that case use a CAT6 cable, or a CAT5x cable (for 1 Gbit/s
transmission; I do not remember x right now).
The cable kind should be printed on the cable; if it is not, select a
properly labelled cable.

Swap the cable ends between the clients: if the performance swaps with
them, the cable or SWITCH port is faulty.

Check the switch port: it may be 100 Mbit/s; be sure that it is 1 Gbit/s
and working properly.
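Rather than inferring the speed from the cable markings, the negotiated link can be read directly on each end (a sketch; the interface names em0 and eth0 are assumptions):

```shell
# FreeBSD server: the "media" line shows the negotiated speed/duplex,
# e.g. "1000baseT <full-duplex>" for a healthy gigabit link
ifconfig em0 | grep media

# Linux Mint client: ethtool reports the same from the other side
sudo ethtool eth0 | grep -E 'Speed|Duplex'
```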


Mehmet Erol Sanliturk


Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread Tim Daneliuk

On 03/16/2013 10:15 PM, Mehmet Erol Sanliturk wrote:


There is one server - FreeBSD, and one client - LM12.

Both have had their cables replaced with new CAT6 wiring.

Copying the exact same file to each of the NFS mounts exhibits the problem.

Reading from the two NFS mounts is fast and as expected, so I do not suspect
network issues.

The two drives used on the server show similar disk performance locally.

The server side exports are identical for both mounts as are the client side
mounts.

The ONLY difference is that the fast NFS mount has server side permissions of
777 whereas the slow NFS mount has server side permissions of 775.  Both
are owned by root:wheel.  The contents of each filesystem are owned by a
user in the wheel group.  The one other difference is that all the contents
of the slow mount are in a particular user group, and all the ones in the
fast mount are in the wheel group.   Changing the group ownership of all the
stuff in the slow mount to wheel makes no difference.

The problem appears to be size-related on the slow mount.  When I copy,
say, a 100MB file to it, performance is just fine.  When I copy a 1GB file,
it's 1/20 the throughput (45 MB/sec vs. 2 MB/sec).

This feels like some kind of buffer starvation but the fact that I can
run at full speed against another mount point leaves me scratching my
head as to just where.  It's almost like there's some kind of halting
going on during the transfer.
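One way to make that size dependence repeatable is to step through increasing file sizes on a mount and let dd report the rate for each pass (a sketch; the helper name is mine, and /tmp stands in for /shared or /BKU):

```shell
# Write progressively larger files into one directory; if the reported
# rate collapses past some size, buffering is suspect rather than the link.
size_test() {
    for count in 16 64 256; do           # size of each pass in MB
        f="$1/sizetest.$$"
        printf '%4d MB: ' "$count"
        dd if=/dev/zero of="$f" bs=1048576 count="$count" 2>&1 | tail -n 1
        rm -f "$f"
    done
}

size_test /tmp
```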

--

Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/



Re: NFS Performance: Weirder And Weirder

2013-03-16 Thread Mehmet Erol Sanliturk
There is one more point to check:

From your mount information, the directories on the server are on DIFFERENT
drives.

Assume one of the drives is very INTELLIGENT about saving power.

During local reading, due to the reading speed, it may not go to SLEEP,
but during network access it may go to sleep because its waiting time is
exceeded.

If this is the case, please stay away from INTELLIGENT drives in a server:
these are designed and produced by very IGNORANT entities.
For simple, personal applications their latency may not be noticed much,
but in a server they cannot be used.

Another point may be file sizes.

To check the effect of file size, copy a large file (for example 5 GB, or a
4.n GB .iso file) into the two different directories and transmit these
same files from their directories to a single client.

If the directory structure makes a difference, and assuming the hardware
parts and the client do not behave differently for these files, the
performance difference may be attributed to the server side.


Mehmet Erol Sanliturk


Weird NFS Performance Problem

2013-03-15 Thread Tim Daneliuk

I have a FreeBSD 9.1-STABLE machine exhibiting weird NFS performance issues
and I'd appreciate any suggestions.

I have several different directories exported from the same filesystem.
The machine that mounts them (a Linux Mint 12 desktop) writes
nice and fast to one of them, but writes to the other one
are dreadfully slow.  Both are mounted on the LM machine using
'rw,soft,intr' in that machine's fstab file.

Any ideas on what might be the culprit here?


--

Tim Daneliuk tun...@tundraware.com
PGP Key: http://www.tundraware.com/PGP/



Re: Weird NFS Performance Problem

2013-03-15 Thread Mehmet Erol Sanliturk
On Fri, Mar 15, 2013 at 5:09 PM, Tim Daneliuk tun...@tundraware.com wrote:


Is the slow directory a LINK?
If it is a LINK, then try with the real directory name.


Thank you very much .

Mehmet Erol Sanliturk


Re: Asymmetric NFS Performance

2012-02-08 Thread Dan Nelson
In the last episode (Feb 02), Tim Daneliuk said:
 Server: FBSD 8.2-STABLE / MTU set to 15000
 Client: Linux Mint 12   / MTU set to 8192
 NFS Mount Options: rw,soft,intr
 Problem:
 
 Throughput copying from Server to Client is about 2x that when copying a
 file from client to server.  The client does have a SSD whereas the server
 has conventional SATA drives but ...  This problem is evident with either
 100- or 1000- speed ethernet so I don't think it is a drive thing since
 you'd expect to saturate 100-BASE with either type of drive.
 
 Things I've Tried So Far:
 
 - Increasing the MTUs - This helped speed things up, but the up/down
ratio stayed about the same.
 
 - Fiddling with rsize and wsize on the client - No real difference

If iostat -zx 1 on the server shows the disks at 100% busy, you're
probably getting hit by the fact that NFS has to commit writes to stable
storage before acking the client, so writes over NFS can be many times
slower than local write speed.  Setting the vfs.nfsrv.async sysctl to 1 will
speed things up, but if the server reboots while a client is writing, you
will probably end up with missing data even though the client thought
everything was written.  If you are serving ZFS filesystems, stick an SSD in
the server and point the ZFS intent log at it: zpool add mypool log da3. 
8GB of ZIL is more than enough, but it needs to be fast, so no sticking a
$10 thumb drive in and expecting any improvement :)
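As a sketch of those two server-side changes (the pool and device names are hypothetical, and the data-loss caveat above applies to async):

```shell
# Let the NFS server ack writes before they reach stable storage
# (faster, but risks lost data if the server reboots mid-write)
sysctl vfs.nfsrv.async=1

# ZFS alternative: dedicate a fast SSD to the intent log so sync
# writes return quickly without lying to the client
zpool add mypool log da3
```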


-- 
Dan Nelson
dnel...@allantgroup.com


Asymmetric NFS Performance

2012-02-02 Thread Tim Daneliuk

Server: FBSD 8.2-STABLE / MTU set to 15000
Client: Linux Mint 12   / MTU set to 8192
NFS Mount Options: rw,soft,intr
Problem:

Throughput copying from Server to Client is about 2x that when
copying a file from client to server.  The client does have
an SSD whereas the server has conventional SATA drives, but ...
This problem is evident with either 100- or 1000-speed ethernet,
so I don't think it is a drive thing, since you'd expect to saturate
100-BASE with either type of drive.

Things I've Tried So Far:

- Increasing the MTUs - This helped speed things up, but the up/down
  ratio stayed about the same.

- Fiddling with rsize and wsize on the client - No real difference


Ideas anyone?







---
Tim Daneliuk


NFS performance-tuning FreeBSD - NetApp

2009-08-03 Thread Ewald Jenisch
Hi,

I've got a FreeBSD 7.2 box (HP C-class Blade - AMD dual core Opteron
(x64), 4GB RAM, Broadcom NetXtreme II BCM5706) that should be
connected to a NetApp 3170 filer via NFS.

Out of the box, with nothing tuned (no special parameters for
mount_nfs, no kernel tuning), performance is very sluggish: I've got
~250Mbit/sec performance with peaks around 400Mbit/sec.

Sure enough, neither CPU (server and NetApp) nor network performance
is the problem here - it must be something NFS-related.

Any ideas on how to increase my NFS performance?  (Special mount
parameters, kernel tuning, ...)

Thanks in advance for any clue,
-ewald




Re: NFS performance-tuning FreeBSD - NetApp

2009-08-03 Thread Omer Faruk SEN
Hello Ewald,

You can read the http://communities.netapp.com/thread/39 thread.  There are
special mount options for Linux and also for FreeBSD; give them a try.

Regards.

Monday, August 3, 2009, 11:55:16 AM, you wrote:



-- 
Best regards,
 Omer  of...@enderunix.org


Re: NFS performance-tuning FreeBSD - NetApp

2009-08-03 Thread Steven Kreuzer


On Aug 3, 2009, at 4:55 AM, Ewald Jenisch wrote:




I would suggest bumping the read and write sizes to 32K and using TCP
instead of UDP.  If you have very large directories, you can also see an
increase in responsiveness by enabling readdirplus as well, but that won't
help with raw throughput.

Try passing the following parameters to mount and see if performance is
any better:

-r=32768 -w=32768 -l -T
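Spelled out as a full mount command (the filer export and mount point here are hypothetical; see mount_nfs(8) for the flags):

```shell
# -r/-w: 32 KB read/write sizes, -l: readdirplus, -T: use TCP
mount_nfs -r 32768 -w 32768 -l -T filer:/vol/vol0 /mnt/filer
```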

--
Steven Kreuzer
http://www.exit2shell.com/~skreuzer



Re: very poor NFS performance from a beta3

2007-11-18 Thread Kris Kennaway

Jonathan Horne wrote:
I updated my workstation to beta3, and then got on a 6.2-p8 machine and
mounted /usr/src and /usr/obj from the beta3.  Tried to installkernel, but
it moved at a painful pace.  It would get to the point where it moves
kernel to kernel.old, and then just pause for a long time.  The file
transfer showed about 104k.

I took this same 6.2-p8 box, mounted src and obj from my main 6.2 build
server, and reinstalled the 6.2-p8 kernel; the speed was as expected.

Is there anywhere I can begin looking to troubleshoot this problem (as to
why the 7.0b3 would serve NFS so slowly)?


This is usually because your NIC has mis-negotiated and is unable to 
pass packets properly.


Kris


Re: very poor NFS performance from a beta3

2007-11-18 Thread Jonathan Horne
On Sunday 18 November 2007 05:59:12 am Kris Kennaway wrote:
Another reason this is so peculiar: when my desktop was at 7.0b2, I
installed b2 on another laptop (not the same system as the 6.2-p8), and it
was perfectly normal.

cheers,
-- 
Jonathan Horne
http://dfwlpiki.dfwlp.org
[EMAIL PROTECTED]


Re: very poor NFS performance from a beta3

2007-11-18 Thread Jonathan Horne
On Sunday 18 November 2007 05:59:12 am Kris Kennaway wrote:
Kris, thanks for your reply.  What's the best way to tell if I have
mis-negotiated?  Is just 'ifconfig' sufficient?  I was wondering about this
earlier, and reboots of the hosts as well as the switches yielded no
different results.

Also, I did a few other tests.  I can scp to/from the 7.0b3 and 6.2-p8
from/to each other, as well as other hosts on the network, at what I would
call normal speeds (6-10 megabytes/sec).  I can also 'cp -vpnRP' directory
trees from/to the same machines with normal results.  Only installing a
kernel from the 7.0 box onto the 6.2 box stalls out.
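ifconfig does show the negotiated media type, and the interface error counters are often the quicker tell when only some traffic stalls (the interface name bge0 is an assumption):

```shell
# Negotiated speed/duplex, e.g.
#   media: Ethernet autoselect (1000baseT <full-duplex>)
ifconfig bge0 | grep media

# Run during a stalled transfer: climbing Ierrs/Oerrs/Colls point at a
# bad cable, port, or duplex mismatch rather than NFS itself
netstat -i -I bge0
```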
-- 
Jonathan Horne
http://dfwlpiki.dfwlp.org
[EMAIL PROTECTED]


very poor NFS performance from a beta3

2007-11-16 Thread Jonathan Horne
I updated my workstation to beta3, and then got on a 6.2-p8 machine and
mounted /usr/src and /usr/obj from the beta3.  Tried to installkernel, but
it moved at a painful pace.  It would get to the point where it moves
kernel to kernel.old, and then just pause for a long time.  The file
transfer showed about 104k.

I took this same 6.2-p8 box, mounted src and obj from my main 6.2 build
server, and reinstalled the 6.2-p8 kernel; the speed was as expected.

Is there anywhere I can begin looking to troubleshoot this problem (as to
why the 7.0b3 would serve NFS so slowly)?
-- 
Jonathan Horne
http://dfwlpiki.dfwlp.org
[EMAIL PROTECTED]


Question on NFS performance

2006-05-10 Thread Valerio daelli

Hi all
we have a FreeBSD 5.4 box exporting some NFS filesystems to a cluster of
Gentoo boxes (kernel 2.6.12).
Our exported storage disk is an Apple XRaid.
We have Gigabit Ethernet on both the client and the server.
We would like to improve our read performance.
This is our current performance:

about 10 MB/s reading a 1 GB file with dd

and iozone confirms this result.
We already use the normal optimization flags (we use rpc.lockd and
rpc.statd; on the client we have a read size of 65536, a read-ahead of 4
blocks, and the async option).
Is this the best we can get?  Can we improve it?
Thanks for your help

Valerio Daelli


Re: Question on NFS performance

2006-05-10 Thread Kris Kennaway
On Wed, May 10, 2006 at 02:54:39PM +0200, Valerio daelli wrote:

5.4 and filesystem performance cannot be said together in the same
sentence.  Upgrade to 6.1.

Kris




poor NFS performance in 5.x

2003-11-27 Thread Antoine Jacoutot
Hi :)

I upgraded two boxes to FreeBSD-5.2-BETA a week ago and I noticed that NFS 
performance is very slow compared to 4.x-RELEASE.
Before, NFS transfers were between 10 and 12 MB/s and now I don't go past 7 
MB/s.
My exports/mount settings did not change and the hardware is obviously the 
same.
Any idea where I should start looking to resolve this?
It is important, since I'm planning on upgrading some servers in the future
and home directories are mounted via NFS.

Thanks in advance.

Antoine



Re: poor NFS performance in 5.x

2003-11-27 Thread Kris Kennaway
On Thu, Nov 27, 2003 at 10:12:21AM +0100, Antoine Jacoutot wrote:
 Hi :)
 
 I upgraded two boxes to FreeBSD-5.2-BETA a week ago and I noticed that NFS 
 performance is very slow compared to 4.x-RELEASE.
 Before, NFS transfers were between 10 and 12 MB/s and now I don't go past 7 
 MB/s.
 My exports/mount settings did not change and the hardware is obviously the 
 same.
 Any idea where I should start looking for resolving this ?

Try reading the basic documentation that comes with 5.2-BETA, for
example the /usr/src/UPDATING file, which tells you clearly that
performance is not expected to be good unless you disable the standard
debugging options.  Also make sure you read the 5.x Early Adopter's
Guide, available on the website.
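For readers hitting the same thing: the standard debugging options UPDATING refers to are the WITNESS/INVARIANTS family. A sketch of the change in a 5.2-BETA kernel config (the exact option set varies by snapshot - compare against your GENERIC):

```
# Comment out (or delete) the debug options before benchmarking.
#options        WITNESS                 # lock-order checking (expensive)
#options        WITNESS_SKIPSPIN
#options        INVARIANTS
#options        INVARIANT_SUPPORT
```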

Kris



Re: poor NFS performance in 5.x

2003-11-27 Thread Antoine Jacoutot
On Thursday 27 November 2003 10:28, Kris Kennaway wrote:
 Try reading the basic documentation that comes with 5.2-BETA, for
 example the /usr/src/UPDATING file, which tells you clearly that
 performance is not expected to be good unless you disable the standard

I've been running CURRENT on test boxes for months, so I know that, and it is
not the problem: I have my own compiled kernel without debugging options set.



Re: Improving FreeBSD NFS performance (esp. directory updates)

2003-06-06 Thread Thomas A. Limoncelli
On Thu, May 29, 2003 at 04:05:04PM -0500, Marc Wiz wrote:
 On Thu, May 29, 2003 at 04:54:00PM -0400, Tom Limoncelli wrote:
  I have a NFS server with (so far) a single NFS client.  Things work 
  fine, however if (on the client) I do an rm -rf foo on a large (deep 
  and wide) directory tree the tty receives NFS server not 
  responding/NFS server ok messages.
  
  I don't think the network is at fault, nor is the server really going 
  away.  I think the client is just impatient.  Is there a way to speed 
  up a large rm -rf?  I have soft-writes enabled but alas
 
 Tom,
 
 please reproduce the problem but before doing it run the following commands
 and save the output:
 
 On the client:
 
 nfsstat -c
 netstat -m
 netstat -s
 
 On the server:
 
 nfsstat -s
 netstat -m
 netstat -s
 
 Run the rm -rf /foo
 
 Rerun the above commands on both the client and server and of course
 save the output again :-)
 
 RTFM-ing for nfsstat I am disappointed that nfsstat does not have -z 
 option for zeroing out the counters.  Time to look at the source :-)

It turns out the issue isn't only the rm -rf, so I've eliminated
that.  The process is creating many subdirectories and files.

Here's the output that you requested.  I've generated the output
at a baseline, 1 minute into the process, and then every 4 hours.
Sorry for the flood of information but I figure more is better.
You can look at the first and last items if you prefer.

I really appreciate the help!
--tal

-- 
Tom Limoncelli -- [EMAIL PROTECTED]  --  www.lumeta.Com


# CLIENT BASELINE:
Thu Jun  5 01:41:32 UTC 2003
Client Info:
Rpc Counts:
  Getattr   SetattrLookup  Readlink  Read WriteCreateRemove
 16222151  2470   9976708 91232  23111364  23342730544746 51291
   Rename  Link   Symlink Mkdir Rmdir   Readdir  RdirPlusAccess
  385 0 11004760080 13602 16651 0  69815088
MknodFsstatFsinfo  PathConfCommitGLeaseVacate Evict
030962158 0  21980734 0 0 0
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
0 0 0   239 166249915
Cache Info:
Attr HitsMisses Lkup HitsMisses BioR HitsMisses BioW HitsMisses
359087567  84614481 137095640   9976703 201904089  23084158  15151506  23342730
BioRLHitsMisses BioD HitsMisses DirE HitsMisses
  5296031 91232817844 16635 53658 0
257/608/18240 mbufs in use (current/peak/max):
257 mbufs allocated to data
256/312/4560 mbuf clusters in use (current/peak/max)
776 Kbytes allocated to network (5% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
tcp:
205722821 packets sent
155645171 data packets (2158194817 bytes)
109 data packets (89632 bytes) retransmitted
0 resends initiated by MTU discovery
34720623 ack-only packets (365546 delayed)
0 URG only packets
0 window probe packets
15355469 window update packets
1449 control packets
213168802 packets received
152983420 acks (for 2158194689 bytes)
1424 duplicate acks
0 acks for unsent data
208828275 packets (1042433032 bytes) received in-sequence
3 completely duplicate packets (192 bytes)
0 old duplicate packets
0 packets with some dup. data (0 bytes duped)
62 out-of-order packets (5136 bytes)
0 packets (0 bytes) of data after window
0 window probes
62774 window update packets
0 packets received after close
0 discarded for bad checksums
0 discarded for bad header offset fields
0 discarded because packet too short
671 connection requests
119 connection accepts
1 bad connection attempt
0 listen queue overflows
790 connections established (including accepts)
777 connections closed (including 2 drops)
202 connections updated cached RTT on close
202 connections updated cached RTT variance on close
3 connections updated cached ssthresh on close
0 embryonic connections dropped
152983420 segments updated rtt (of 149305673 attempts)
74 retransmit timeouts
1 connection dropped by rexmit timeout
0 persist timeouts
0 connections dropped by persist timeout
31 keepalive timeouts
31 keepalive probes sent
0 connections dropped by keepalive
3834814 correct ACK header predictions
60088755 correct data packet header predictions
119 syncache entries added
  

Improving FreeBSD NFS performance (esp. directory updates)

2003-05-30 Thread Tom Limoncelli
I have a NFS server with (so far) a single NFS client.  Things work 
fine, however if (on the client) I do an rm -rf foo on a large (deep 
and wide) directory tree the tty receives NFS server not 
responding/NFS server ok messages.

I don't think the network is at fault, nor is the server really going 
away.  I think the client is just impatient.  Is there a way to speed 
up a large rm -rf?  I have soft-writes enabled but alas

Details:
The server is an Intel Xeon CPU 2.20GHz running FreeBSD 4.8 (updated to 
the latest RELENG_4_8 release yesterday).  It has a 1000TX interface to 
a Cisco switch.  The partition is part of a SCSI RAID unit (160MB/s 
transfer rate, Tagged Queueing enabled).  The partition that is 
exported to the client is:

# mount | egrep e2
/dev/da0s1f on /e2 (ufs, NFS exported, local, soft-updates)
/etc/rc.conf lists:
nfs_server_flags=-u -t -n 16
The client is a (accoriding to dmesg) Pentium III/Pentium III 
Xeon/Celeron (934.99-MHz 686-class CPU) running FreeBSD 4.7-RELEASE.  
amd is configured to mount the partion with 
opts:=rw:nfs_proto=udp;nfs_vers=3.
/etc/rc.conf lists:
	nfs_client_flags=-n 4

Any tuning suggestions?

--tal



Re: Improving FreeBSD NFS performance (esp. directory updates)

2003-05-30 Thread Marc Wiz
On Thu, May 29, 2003 at 04:54:00PM -0400, Tom Limoncelli wrote:
 I have a NFS server with (so far) a single NFS client.  Things work 
 fine, however if (on the client) I do an rm -rf foo on a large (deep 
 and wide) directory tree the tty receives NFS server not 
 responding/NFS server ok messages.
 
 I don't think the network is at fault, nor is the server really going 
 away.  I think the client is just impatient.  Is there a way to speed 
 up a large rm -rf?  I have soft-writes enabled but alas

Tom,

please reproduce the problem but before doing it run the following commands
and save the output:

On the client:

nfsstat -c
netstat -m
netstat -s

On the server:

nfsstat -s
netstat -m
netstat -s

Run the rm -rf /foo

Rerun the above commands on both the client and server and of course
save the output again :-)

RTFM-ing for nfsstat I am disappointed that nfsstat does not have -z 
option for zeroing out the counters.  Time to look at the source :-)

Marc (who in a former life and now current life is doing NFS support)
-- 
Marc Wiz
[EMAIL PROTECTED]
Yes, that really is my last name.


Re: nfs performance

2002-12-30 Thread Scott Ballantyne
 recently I discovered problems with my FreeBSD NFS server.
 I mount my /home/user from my linux box via automounter/NFS from my server.
 They are connected with a switch on 100baseTX Ethernet. Now, whenever
 I copy large files from a local drive to my home dir or do anything
 else that involves moving some bigger amount of data to my home dir,
 the NFS server times out and doesn't respond anymore.
 Any ideas, suggestions would be appreciated,

I could never get NFS to work reliably on Linux. The server here is
slow and needs to be upgraded to newer iron and runs OpenBSD, but
Linux is the only client OS that exhibited these problems. I observed
them always on writing to the server with large files, reads seemed to
work fine, no matter what the file size. I spent considerable time
fiddling with timeouts, cachesizes and so on, this was several months
ago, but I remember thinking this had something to do with the
attribute caching.

I was advised by a Linux guru friend of mine that the kernel NFS on
Linux had multiple problems, and he advised me to use the userland
NFS. I didn't follow this advice, choosing to try FreeBSD instead. So
far, it has worked with minimal problems. Once every couple of weeks,
I start getting NFS Server not responding messages, but switching to
TCP transport seems to have cured that, and provided better
performance as well.

I'm sure this isn't what you wanted to hear, but you might find it
helpful...

sdb
-- 
[EMAIL PROTECTED]




To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-questions in the body of the message



Re: nfs performance

2002-12-30 Thread John Martinez

On Monday, December 30, 2002, at 01:12  PM, Scott Ballantyne wrote:



I could never get NFS to work reliably on Linux


I'd like to chime in on this.

There are some serious problems with Linux NFS support. At the company 
where I work, we use Solaris NFS servers on Sun hardware, with a mix of 
many UNIX clients (including some BSD and Mac OS X). The biggest 
problems come from Linux clients.

The interesting thing to note with Linux is that it hoses itself after 
a while. If the user reboots their Linux client, the problems go away 
for a few weeks.

We've messed with options in automount and AMD, but no help.

-john




Re: NFS Performance woes

2002-11-12 Thread Tillman
On Tue, Nov 05, 2002 at 06:44:47PM +0100, Lasse Laursen wrote:
 How is the optimum number of nfsd processes determined on the server? On
 our current setup we have 4 nfs daemons running serving 3 clients
 (webservers)
 
 Is the number of daemons to start determined by the number of clients or
 the number of files that have to be transferred simultaneously?

I use something like ps waux | grep nfs and check the CPU time used by
the processes. Add processes until you have 1 or 2 that are typically
unused under normal load.
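That check can be scripted. The sketch below counts nfsd entries whose accumulated CPU time is still zero (idle spares); the ps output is hard-coded sample data here, but on a real server you would feed it something like `ps ax -o time,comm | grep '[n]fs'` instead (the `[n]` keeps grep itself out of the results).

```shell
# Sample "TIME COMMAND" lines standing in for real ps output.
sample='0:00.00 nfsd
3:12.45 nfsd
0:00.00 nfsd
0:41.02 nfsiod'
# Count nfsd processes that have accrued no CPU time at all.
idle_nfsd=$(printf '%s\n' "$sample" | awk '$1 == "0:00.00" && $2 == "nfsd"' | wc -l)
echo "idle nfsd processes: $idle_nfsd"
```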

-T

-- 
Losing an illusion makes you wiser than finding a truth.
Ludwig Borne




Re: NFS Performance woes

2002-11-06 Thread Duncan Anker
On Wed, 2002-11-06 at 19:52, BigBrother wrote:
 
 
 Although the man page says this, I *think* that the communication is done
 like this
 
 CLIENT = NFSIOD(CLIENT) = NFSIOD (SERVER) = NFSD
 
 which means that the NFSIOD processes 'speak' with each other and then pass
 the requests to NFSD.
 
 Of course you don't need too many NFSIOD processes on the server. In my case
 I just have 8 nfsiod running on the server and most of them are idle; besides,
 they only take 1.5MB of memory, which I can afford. So I think having *some*
 NFSIOD on the server is not a bad idea. Of course on the server you should
 have a lot of NFSD.
 
 In other words, running NFSIOD on the server is not a bad idea.

NFSIOD is enabled by putting

nfs_client_enable=YES

into rc.conf. Empirical evidence suggests that NFSIOD is not used
server-side. I ran 4 nfsiod daemons on the server and checked their
usage time. All were 0:00. If the server does any client-side NFS work it
would make a difference, and it may behave differently under other operating
systems. Certainly it does no harm to have them running.


 Also monitor the mbufs on all your machines (especially the server).
 
 From time to time run 'netstat -m' and check the peak value of mbufs
 and mbuf clusters; if it is close to the max limit then you will suffer
 from mbuf exhaustion, which will eventually make the machine unreachable
 over the network.
 
 
 You can change the mbuf max value before the kernel loads; see tuning(7).

kern.nmbclusters=value

in /boot/loader.conf, if anyone needs to do this. Have to reboot for
this one :-(

 
 
 Also if you have mbuf exhaustion, try a smaller block size on your NFS mounts.

Now this is interesting. I had thought mbuf cluster exhaustion was due
to a high number of connections. Although I guess a high number of
connections * large buffer size would do it too.


Thank you for your response and suggestions regarding the other NFS stuff - I
managed to get our server talking UDP. The wildcard binding was the
problem, and the -h flag to nfsd fixed it.

Network usage graphs are showing the differential between incoming and
outgoing traffic to be much less now, so I would say there was a lot of
overhead in there, as well as retransmissions.

I am still playing with buffer sizes but chances are in this case the
FreeBSD default is best.

Regards

--
Duncan Anker
Senior Systems Administrator
Dark Blue Sea






Re: NFS Performance woes

2002-11-05 Thread BigBrother


I recently did some research into NFS performance tuning and came across
the suggestion in an article on onlamp.com by Michael Lucas, that 32768
is a good value for the read and write buffers. His suggestion is these
flags:

tcp,intr,nfsv3,-r=32768,-w=32768

I used these options (I found tcp was mandatory, as we have multiple IPs
and UDP was refusing to play nice), also adding dumbtimer to avoid the
log messages about server not responding.





In my experience, UDP is much preferred as the NFS transport
protocol. Also try to have the NFSIOD daemon running on every
machine by putting in /etc/rc.conf:

nfs_client_enable=YES
nfs_client_flags=-n 10


[you may run more than 10 instances if you suspect that more than 10
simultaneous transactions will happen]


Also use the -w=32768,-r=32768 switches only on machines that have a
fast CPU and a good network card [e.g. netstat -w 1 doesn't show errors
under heavy load].

On all the other machines don't set any w,r values [they will default to
8k blocks].

On some machines of mine I have even used blocks of -r=4096,-w=4096
because they were old machines that could not keep up with the traffic and
they were complaining about mbufs [they ran out of mbufs and after some
time they crashed] (and because the machines were diskless it was impossible
to change the mbuf value; after the kernel loads, the value is
read-only and cannot be changed).


Use good networking hardware; scrappy hardware will certainly put you
in great trouble.

If you use TCP for NFS on a 1Gb network you will surely have problems;
your machines will not be able to keep up. TCP causes a great deal of
overhead; UDP doesn't.

So bottom line: a) Use UDP
                b) Run a lot of NFSIOD - the more the better
                c) Examine what is the best block size for every host
                   individually! (don't assume that a 32k block is good
                   for every host)
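A sketch of point (c): time the same large write at each block size and compare MB/s. The byte count and elapsed seconds below are hard-coded placeholders, not measurements; on a real client you would remount with each -r/-w value and take the numbers from dd and time(1).

```shell
# 256 MB test file size, with hypothetical elapsed seconds per block size.
bytes=268435456
for entry in '4096 41' '8192 33' '32768 24'; do
    set -- $entry                     # $1 = block size, $2 = elapsed seconds
    mbps=$(( bytes / $2 / 1048576 ))  # integer MB/s
    echo "bs=$1: $mbps MB/s"
done
```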


Hope it does the job for you. I searched for over 3 months when I once
dealt with this. Also read, from the Sun site, the 'Optimizing and
Tuning NFS' guide, which is a nice PDF document you can download for
free and has a lot of interesting things that apply to FreeBSD as well!








Re: NFS Performance woes

2002-11-05 Thread Steve Shorter
Howdy!

I have done some simulations with NFS servers - Intel SCB2 (4G RAM)
serving files from 500G RAID devices. I created a treed directory structure
with 300G of 32k files that approximates our homedirectory structure.

I had about 6 diskless front ends (tyan 2518 with 2G) that NFS
booted and mounted the homedir and ran multiple scripts that walked through
the directory structure reading files and writing them to /dev/null.
All machines have 3 intel 100 NICs. One interface is used to mount the
root etc. and the other is used to mount homedirs. The NFS server
serves root from fxp0 and homedir data from both fxp1 fxp2. The
diskless front ends mount root from fxp0 and mount homedir from fxp1.

When the simulation was running full out I was serving about
1/3-1/2 data from page cache and 2/3-1/2 from disk.

I tried numerous configurations and tuning of network parameters
after doing research and discovered... for FreeBSD 4.6.2



1) NFS over UDP outperforms NFS over TCP.

I was able to average about 70Mbs over *both* (occasionally they would
almost max out ie. 95Mbs) interfaces serving data using  UDP mounts with
8K rw. (the default). No matter what I tried with TCP I never got 
more than half that throughput.

2) The optimal number of nfsd's to run on the server was about
100!

If I reduced the number of nfsds below 80 it would start to
choke off the data moving through the network. I found that at around
100 there was no more increase. You must make a minor change to
source as the max allowed now by default is 20. I was running 8
nfsiod's on the clients.


TCP mounts under tested conditions always had much higher loads
than UDP. Also it was impossible to do an ls on a mounted directory
under load with TCP. With UDP there were no such problems. If you are using
UDP it is *essential* that you monitor fragments that are being dropped
because of timeout. If you have a good network this should not be a
problem. For example I have a webserver that has been up 20 days 
and has moved 1G of fragments but has only dropped about 800.
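The fragment counters in question can be pulled out of `netstat -s` mechanically. In this sketch the netstat output is a hard-coded sample; on a real host you would pipe the IP statistics section of `netstat -s` in instead (exact counter wording can vary slightly between systems).

```shell
# Sample netstat -s lines standing in for real output.
sample='        1048576 fragments received
        800 fragments dropped after timeout'
# Extract the drop counter the text says to watch.
dropped=$(printf '%s\n' "$sample" | awk '/fragments dropped after timeout/ {print $1}')
echo "fragments dropped after timeout: $dropped"
```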

Also TCP mounts will require a remount of the clients if the
server should crash/whatever. UDP just keeps on ticking.

If you have Gig ether then there is other tuning you *must*
do to realize its potential. Personally I think it is better
to use multiple 100Mbs NICs than to use Gig ether if you can get
away with it.

YMMV.

-steve






Re: NFS Performance woes

2002-11-05 Thread Lasse Laursen
Hi,

 According to my experience UDP is much preffered for NFS transport
 protocols. Also try to have the NFSIOD daemon being executed on every
 machine by putting in the /etc/rc.conf

 nfs_client_enable=YES
 nfs_client_flags=-n 10


 [u may put more than 10 instances if u suspect that more than 10
 simultaneous transactions will happen]

How is the optimum number of nfsd processes determined on the server? On
our current setup we have 4 nfs daemons running, serving 3 clients
(webservers).

Is the number of daemons to start determined by the number of clients or
by the number of files that have to be transferred simultaneously?

Same question goes for the number of nfsiod processes...


Regards

--
Lasse Laursen [EMAIL PROTECTED] - Systems Developer
NetGroup A/S, St. Kongensgade 40H, DK-1264 København K, Denmark
Phone: +45 3370 1526 - Fax: +45 3313 0066 - Web: www.netgroup.dk

- We don't surf the net, we make the waves.






Re: NFS Performance woes

2002-11-05 Thread BigBrother


 According to my experience UDP is much preffered for NFS transport
 protocols. Also try to have the NFSIOD daemon being executed on every
 machine by putting in the /etc/rc.conf

 nfs_client_enable=YES
 nfs_client_flags=-n 10


 [u may put more than 10 instances if u suspect that more than 10
 simultaneous transactions will happen]

How is the optimum number of nfsd processes determined on the server? On
our current setup we have 4 nfs daemons running, serving 3 clients
(webservers).

Is the number of daemons to start determined by the number of clients or
by the number of files that have to be transferred simultaneously?

Same question goes for the number of nfsiod processes...



Well, the only rule for selecting the number of nfsiods and nfsds is the
maximum number of threads that are going to request an NFS operation on
the server. For example, assume that your web server has a typical number
of httpd daemons of 50; that means every httpd can access files on
the server, and in the worst case all 50 httpd will request
simultaneously different NFS operations. This means that you should have
at least 50 NFSIOD (on the client+server) and 50 NFSD running (on the
server).

Remember that NFSIOD must run both on CLIENT and SERVER.

So you determine what the maximum number of NFS operations is... for
example, on your client you don't only have 50 httpd running; if you also
make from time to time a compile with -j 4 (4 parallel compilation jobs),
this means that you should increase the number of 50 by +4...

Also on your client you usually have some users that log in and whose home
directories are on NFS-mounted media... usually 10 people are using an
NFS-mounted home, which means that in the worst case 10 people may request
something from their home, so you have to increase the number of 54 by 10
more...
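The sizing rule above, as plain arithmetic: sum the worst-case simultaneous NFS requesters. The counts are the examples from this post (50 httpd, make -j 4, 10 NFS-homed users), not a recommendation for any particular setup.

```shell
# Worst-case simultaneous NFS requesters, from the example above.
httpd=50
make_jobs=4
nfs_home_users=10
nfsd_count=$(( httpd + make_jobs + nfs_home_users ))
echo "size nfsd/nfsiod for at least $nfsd_count simultaneous operations"
```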

I know the handbook says that 8 nfsiod/nfsd is a nice number but I think
that is not correct. I have an ftp server that uses NFS-mounted
directories, and usually 15 people are connected... so I have put 20 NFS
processes running...

Having too many NFSIOD is not bad... every NFSIOD eats just 220KB of memory
(which means you should also consider your memory - whether you can afford
to run a lot of nfsiod).

Having too many NFSD also is not bad... every NFSD eats just 356KB of
memory, which again you have to note.



So in simple words, just add up all the things you can imagine happening
simultaneously on all the NFS-mounted dirs and put that number... let it
run for one week and note down how many NFSIOD or NFSD are idle. If you
have put 100 NFSIOD and you see that usually more than 50 NFSIOD are idle
(doing nothing) [in your ps axwu or top output], then it's a safe bet to
reduce the number...

Of course you cannot optimize the NFS system in one day... it needs a lot
of time to take measurements and to check from time to time whether you
have enough NFSIOD or NFSD, because system load distribution tends to
change and you may see that more or fewer NFS processes have to exist...


I hope I have made it clear for you!





Re: NFS Performance woes

2002-11-05 Thread Lasse Laursen
Hi,

Thanks for your reply. I have some additional questions:

 Well, the only rule for selecting the number of nfsiods and nfsds is the
 maximum number of threads that are going to request an NFS operation on
 the server. For example, assume that your web server has a typical number
 of httpd daemons of 50; that means every httpd can access files on
 the server, and in the worst case all 50 httpd will request
 simultaneously different NFS operations. This means that you should have
 at least 50 NFSIOD (on the client+server) and 50 NFSD running (on the
 server).

A read operation (the typical operation for all the clients) does not alter
any data, so does every read request require an nfsd? Let's assume a
worst-case scenario where 50 http servers access 50 different files - would
I need 50 NFS daemons running on the server to obtain maximum performance?

 Remember that NFSIOD must run both on CLIENT and SERVER.

(Taken from the man page of nfsiod)

 Nfsiod runs on an NFS client machine to service asynchronous I/O requests
 to its server.  It improves performance but is not required for correct
 operation.

Why should I start the nfsiod daemon on the server?

 Of course you cannot optimize the NFS system in one day... it needs a lot
 of time to take measurements and to check from time to time whether you
 have enough NFSIOD or NFSD, because system load distribution tends to
 change and you may see that more or fewer NFS processes have to exist...

Yep - At the moment one nfsd idles - I will monitor the number of processes
and try to change the setup and see how the cluster performs.


Regards

--
Lasse Laursen [EMAIL PROTECTED] - Systems Developer
NetGroup A/S, St. Kongensgade 40H, DK-1264 København K, Denmark
Phone: +45 3370 1526 - Fax: +45 3313 0066 - Web: www.netgroup.dk

- We don't surf the net, we make the waves.






NFS Performance woes

2002-11-04 Thread Duncan Anker
I recently did some research into NFS performance tuning and came across
the suggestion in an article on onlamp.com by Michael Lucas, that 32768
is a good value for the read and write buffers. His suggestion is these
flags:

tcp,intr,nfsv3,-r=32768,-w=32768

I used these options (I found tcp was mandatory, as we have multiple IPs
and UDP was refusing to play nice), also adding dumbtimer to avoid the
log messages about server not responding.

Shortly thereafter, our project manager complained about a lot of
timeouts on his web servers. More research suggested that bigger is not
always better for NFS buffers (some articles indicate that r/w buffers
are only for UDP, which is clearly rubbish in this case). I have tried
fiddling with varying parameters to find optimal values, but I am having
difficulty working out what they are.

Our web servers all mount via NFS - my expectation therefore is that I
should see approximately equal amounts of traffic in and out (working on
the principle that the web request and the file request cancel out and
the file read from the NFS server is going out unmodified). My
observations have borne out this expectation. However, tuning various
values from 1k up to 8k has resulted in slight increases in traffic - I
don't know whether this is more throughput or transmission retries. Once
I get up to 16k and 32k the amount of traffic into the boxes exceeds the
amount out quite substantially, which seems to indicate problems (and
indeed is causing timeouts).

On the NFS server side of things, we saw a lot of watchdog timeouts,
with the network choking at around 20Mbs. The card in the server is 1Gb
so it really should be able to push through more than that without a
sweat (and it does - it hits about 60Mbs during backups).

There's something pretty screwy going on with all of this. As mentioned,
the server is 1Gb, all the clients are running at 100Mbs. All machines
are on the same switch, so there is nothing to degrade the performance.
The NFS section of the FreeBSD Handbook talks about slow FreeBSD
clients with high-speed servers, and I am wondering if this is that type
of scenario.

I'd appreciate hearing the experience of others in terms of tuning a
setup like this, or even if someone could point me in the direction of
some more useful analysis tools.

Thanks

--
Duncan Anker
Senior Systems Administrator
Dark Blue Sea



