He can also experiment with jumbo frames if his equipment supports it.
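
A minimal sketch, assuming Linux hosts and an interface named eth0
(every NIC and switch port in the path has to support the larger MTU):

   # raise the MTU to 9000 bytes (jumbo frames)
   ip link set dev eth0 mtu 9000

   # verify the path passes 9000-byte frames unfragmented
   # (8972 = 9000 minus 28 bytes of IP/ICMP headers)
   ping -M do -s 8972 <nas-ip>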

Sent from my iPhone

On Apr 4, 2008, at 6:41 PM, -ray <[EMAIL PROTECTED]> wrote:

>
> Definitely... but I'm sure Dustin has tried all of that.  Make sure
> you're using NFSv3.  Also try using TCP instead of UDP.
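>
> A minimal sketch of what that might look like on the Linux clients
> (hostname and paths are placeholders):
>
>    mount -t nfs -o vers=3,proto=tcp nas:/export/vmstore /vmstore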
>
> Do you happen to have a Solaris or NetApp box lying around?  You could
> test with that.  I've heard, but not experienced personally, that most
> NFS server implementations suck hard when it comes to performance
> (even Linux).  Apparently Sun, and surprisingly NetApp, are the only
> ones that got it right haha.
>
> ray
>
>
> On Fri, 4 Apr 2008, Shannon Roddy wrote:
>
>> Find an NFS tuning guide... Google.  Most things like block size,
>> etc. are tunable for NFS, and it may just be that you need to tune
>> the r/w block sizes.
>>
>> E.g., for Solaris:
>>
>> #  Maximize the NFS block size
>>
>>   * Solaris 8 has a max of 32k
>>   * Solaris 9 allows up to 1MB! (Solaris 9 NFS server required)
>>
>> #  Set the nfs3_bsize and nfs3_max_transfer_size system parameters
>>    on both client and server
>>
>>   * Further tuning down of the block size can be done via the rsize
>>     and wsize mount options
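>>
>> A sketch of those two settings, assuming Solaris 9 on both ends
>> (values in bytes; set them in /etc/system and reboot):
>>
>>    set nfs:nfs3_bsize = 1048576
>>    set nfs:nfs3_max_transfer_size = 1048576
>>
>> with per-mount overrides along the lines of:
>>
>>    mount -o rsize=32768,wsize=32768 server:/export /mnt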
>>
>>
>>
>>
>> On Fri, Apr 4, 2008 at 2:08 PM, Dustin Puryear <[EMAIL PROTECTED]it.com> wrote:
>>> I'm curious if anyone has had any performance issues with NFS over
>>> GigE?  We are bringing up a pretty standard VMware scenario: VMware
>>> servers are connected to GigE with a bonded pair, and our Dell NF500
>>> NAS is running RAID10. Fast and easy. Only...
>>>
>>> The NFS performance sucks. I need to get some firm numbers, but it
>>> looks like we can't get NFS to perform better than if it were on a
>>> Fast Ethernet network. That said, if we change over to mounting our
>>> VM filesystem using CIFS, we scream along at pretty much wire speed.
>>> (By the way, if you're using CentOS 5.0 and mount.cifs, upgrade to
>>> 5.1, because the 5.0 kernel will sometimes panic with a mounted CIFS
>>> share under high usage.)
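>>>
>>> For reference, a rough sketch of the CIFS mount we're comparing
>>> against (server name, share, and user are placeholders):
>>>
>>>    mount.cifs //nas/vmstore /vmstore -o user=vmadmin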
>>>
>>> Here's our setup:
>>>
>>> 2 Dell 1850s, CentOS 5.1, with Intel GigE cards (2 cards each, 2
>>> ports per card; 1 card = bonded pair = VMware network, 1 card =
>>> 1 port = admin network)
>>>
>>> 1 Dell NF500 running Windows Storage Server 2003 with a 4-disk
>>> RAID10 and GigE
>>>
>>> Regardless of whether we use bonding/LAG (Dell PowerConnect 5000+)
>>> or just simple GigE over one port, our NFS performance sucks. CIFS
>>> screams, though, and pretty much saturates the connection.
>>>
>>> Right now I've tested Linux <--> NAS. When I have time, I'll try
>>> Linux to Linux.
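>>>
>>> For the firm numbers, the plan is something like this rough sketch
>>> (test file path is a placeholder; direct I/O sidesteps the client
>>> cache):
>>>
>>>    dd if=/dev/zero of=/vmstore/testfile bs=1M count=1024 oflag=direct
>>>    dd if=/vmstore/testfile of=/dev/null bs=1M iflag=direct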
>>>
>>> --
>>> Dustin Puryear
>>> President and Sr. Consultant
>>> Puryear Information Technology, LLC
>>> 225-706-8414 x112
>>> http://www.puryear-it.com
>>>
>>> Author, "Best Practices for Managing Linux and UNIX Servers"
>>>   http://www.puryear-it.com/pubs/linux-unix-best-practices/
>>>
>
> -- 
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Ray DeJean                              http://www.r-a-y.org
> Systems Engineer                    Southeastern Louisiana University
> IBM Certified Specialist            AIX Administration, AIX Support
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>

_______________________________________________
General mailing list
[email protected]
http://mail.brlug.net/mailman/listinfo/general_brlug.net
