Well, I ran out of time to troubleshoot it, and CIFS is working quite 
well, so I dropped it for now. This is only for running VMs over the 
network, so I don't need any of the UNIX-isms of NFS. We're good.

Maybe I'll figure it out another time. ;)

--
Dustin Puryear
President and Sr. Consultant
Puryear Information Technology, LLC
225-706-8414 x112
http://www.puryear-it.com

Author, "Best Practices for Managing Linux and UNIX Servers"
   http://www.puryear-it.com/pubs/linux-unix-best-practices/


Dixon Cole wrote:
> Back in the day when I was at Mindspring, the words NFS and Linux were 
> generally in close proximity to the word Sucks.  FreeBSD's implementation 
> was far superior.
> 
> Eight years later, I hope that has changed.  No firsthand knowledge these 
> days, as I am basking in the Mac OS X glow.  
> 
> -D
> -Dixon
> 
> -----Original Message-----
> From: "Shannon Roddy" <[EMAIL PROTECTED]>
> 
> Date: Fri, 4 Apr 2008 17:42:49 
> To:[email protected]
> Subject: Re: [brlug-general] Slow NFS on GigE
> 
> 
> Find an NFS tuning guide... Google.  Most things, like block size, are
> tunable for NFS, and it may just be that you need to tune the read/write
> block sizes.
> 
> E.g., for Solaris:
> 
> #  Maximize the NFS block size
> 
>     * Solaris 8 has a max of 32 KB
>     * Solaris 9 allows up to 1 MB! (a Solaris 9 NFS server is required)
> 
> # Set the nfs3_bsize and nfs3_max_transfer_size system parameters on both
> client and server
> 
>     * Further tuning (reduction) of the block size can be done via the
> mount options rsize and wsize
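On a Linux client, the equivalent knobs are the rsize/wsize mount options. A minimal sketch of what that tuning looks like; the server name, export path, and 32 KB values here are placeholders to experiment with, not recommendations:

```shell
# Mount the NFS export with explicit read/write block sizes.
# nas01:/vmstore and /mnt/vmstore are hypothetical names; start at
# 32768 and measure, since the optimal value depends on the network
# and on the maximum the server supports.
mount -t nfs -o rsize=32768,wsize=32768,hard,intr,tcp \
    nas01:/vmstore /mnt/vmstore

# Verify the options the kernel actually negotiated:
grep vmstore /proc/mounts
```

On most distributions `nfsstat -m` will also show the negotiated mount options per NFS mount.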
> 
> 
> 
> 
> On Fri, Apr 4, 2008 at 2:08 PM, Dustin Puryear <[EMAIL PROTECTED]> wrote:
>> I'm curious whether anyone else has had performance issues with NFS over
>>  GigE. We are bringing up a pretty standard VMware scenario: the VMware
>>  servers are connected to GigE with a bonded pair, and our Dell NF500 NAS
>>  is running RAID10. Fast and easy. Only...
>>
>>  The NFS performance sucks. I still need to get firm numbers, but it
>>  looks like we can't get NFS to perform any better than if it were on a
>>  Fast Ethernet network. That said, if we switch to mounting our VM
>>  filesystem over CIFS, we scream along at pretty much wire speed. (By the
>>  way, if you are using CentOS 5.0 and mount.cifs, upgrade to 5.1, because
>>  the 5.0 kernel will sometimes panic under heavy use of a mounted CIFS
>>  share.)
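For reference, the CIFS mount described above would look roughly like this; the share name, mount point, and credentials file are all hypothetical:

```shell
# Mount the NAS share over CIFS. //nas01/vmstore, /mnt/vmstore, and the
# credentials file are placeholder names.
mount -t cifs //nas01/vmstore /mnt/vmstore \
    -o credentials=/etc/cifs.creds,uid=vmware,gid=vmware

# /etc/cifs.creds (chmod 0600 so the password is not world-readable):
#   username=vmuser
#   password=secret
```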
>>
>>  Here's our setup:
>>
>>  2 Dell 1850s running CentOS 5.1, each with Intel GigE cards (2 cards per
>>  server, 2 ports per card; 1 card = bonded pair = VMware network, 1 card =
>>  1 port = admin network)
>>
>>  1 Dell NF500 running Windows Storage Server 2003 with 4 disk RAID10 and GigE
>>
>>  Regardless of whether we use bonding/LAG (Dell PowerConnect 5000+) or
>>  just simple GigE over one port, our NFS still sucks. CIFS screams,
>>  though, and pretty much saturates the connection.
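For anyone reproducing the bonded-pair setup on CentOS 5, the bonding configuration is roughly as sketched below. The interface names, IP address, and the choice of 802.3ad (LACP) mode are assumptions; 802.3ad requires a matching LAG configured on the PowerConnect switch side.

```shell
# /etc/modprobe.conf -- load the bonding driver; mode=802.3ad (LACP)
# must match the switch-side LAG, miimon=100 enables link monitoring.
#   alias bond0 bonding
#   options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 (IP is a placeholder):
#   DEVICE=bond0
#   IPADDR=192.168.1.10
#   NETMASK=255.255.255.0
#   ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1):
#   DEVICE=eth0
#   MASTER=bond0
#   SLAVE=yes
#   ONBOOT=yes

# After a network restart, verify the bond state:
cat /proc/net/bonding/bond0
```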
>>
>>  Right now I've tested Linux <--> NAS. When I have time I'll try Linux to
>>  Linux.
>>
> 

_______________________________________________
General mailing list
[email protected]
http://mail.brlug.net/mailman/listinfo/general_brlug.net
