Re: [zfs-discuss] nfs and smb performance

2008-03-28 Thread abs
That is the first thing I checked.  Prior to that I was getting somewhere
around 1-5 MB/sec.  Thank you though.

Dale Ghent [EMAIL PROTECTED] wrote: 

Have you turned on the "Ignore cache flush commands" option on the
Xserve RAIDs? You should ensure this is on when using ZFS on them.

/dale
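As a footnote to Dale's suggestion: the array-side option also has a host-side counterpart. If the arrays' write caches are battery-backed, ZFS's cache-flush commands can be disabled globally via an /etc/system tunable (a config fragment, not a script; the tunable name is real on OpenSolaris/Solaris 10, but apply it only when every device's cache is non-volatile):

```
# /etc/system -- stop ZFS from issuing cache-flush commands to devices.
# Only safe when the write caches are non-volatile (e.g. battery-backed
# array cache). Requires a reboot to take effect.
set zfs:zfs_nocacheflush = 1
```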

On Mar 27, 2008, at 6:16 PM, abs wrote:
 Hello all,
 I have two Xserve RAIDs connected via Fibre Channel to a PowerEdge 2950.
 The two arrays are configured with two RAID 5 volumes each, giving me a
 total of four RAID 5 volumes, which are striped together in ZFS.  The read
 and write speeds local to the machine are as expected, but I have noticed
 some performance hits in read and write speed over NFS and Samba.

 Here is what I observe:

 Each filesystem is shared via NFS as well as Samba.
 I am able to mount via both NFS and Samba on a Mac OS X 10.5.2 client.
 I am able to mount only via NFS on a Mac OS X 10.4.11 client.  (There
 seems to be an authentication/encryption issue between the 10.4.11
 client and the Solaris box in this scenario; I know this is a bug on the
 client side.)

 When writing a file via NFS from the 10.5.2 client, the speeds are
 60-70 MB/sec.
 When writing a file via Samba from the 10.5.2 client, the speeds are
 30-50 MB/sec.

 When writing a file via NFS from the 10.4.11 client, the speeds are
 20-30 MB/sec.

 When writing a file via Samba from a Windows XP client, the speeds are
 30-40 MB/sec.

 I know that there are implementation differences between NFS and Samba
 on the Mac OS X 10.4.11 and 10.5.2 clients, but that still does not
 explain the Windows XP numbers.

 I was wondering if anyone else is experiencing similar issues, and
 whether there is some tuning I can do, or whether I am just missing
 something.  Thanks in advance.

 Cheers,
 abs






 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



   


Re: [zfs-discuss] nfs and smb performance

2008-03-28 Thread abs
Sorry for being vague, but I actually tried it with the CIFS-in-ZFS option;
I think I will try the Samba option now that you mention it.  Also, is there
a way to improve the NFS performance specifically?

cheers,
abs
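As a starting point for the NFS question above: the client-side mount parameters are a common first thing to check. A hedged sketch for a Mac OS X client (option names are from the mount_nfs man page; the server name and paths are hypothetical, so this is an admin fragment rather than a runnable script):

```shell
# Hypothetical example: mount with TCP and larger read/write block
# sizes on a Mac OS X client (the defaults are often conservative).
mount -t nfs -o tcp,rsize=32768,wsize=32768 server:/tank/share /Volumes/tank
```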

Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist,
Sun Microsystems [EMAIL PROTECTED] wrote:

 Hello abs,

 Would you be able to repeat the same tests with the in-kernel CIFS server
in ZFS instead of using Samba?
 It would be interesting to see how the kernel CIFS performance compares
with Samba's.

 Peter
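For reference, a minimal sketch of what the kernel-CIFS test would involve on OpenSolaris (the pool/filesystem name is hypothetical, and the workgroup/password-encryption setup steps are omitted; an admin fragment, not a runnable script):

```shell
# Enable the in-kernel SMB server and share a ZFS filesystem over it.
svcadm enable -r smb/server
zfs set sharesmb=on tank/share
# Verify the share is exported
sharemgr show -vp
```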
 
 abs wrote:

 Hello all,
 I have two Xserve RAIDs connected via Fibre Channel to a PowerEdge 2950.
 The two arrays are configured with two RAID 5 volumes each, giving me a
total of four RAID 5 volumes, which are striped together in ZFS.  The read
and write speeds local to the machine are as expected, but I have noticed
some performance hits in read and write speed over NFS and Samba.

 Here is what I observe:

 Each filesystem is shared via NFS as well as Samba.
 I am able to mount via both NFS and Samba on a Mac OS X 10.5.2 client.
 I am able to mount only via NFS on a Mac OS X 10.4.11 client.  (There
seems to be an authentication/encryption issue between the 10.4.11 client
and the Solaris box in this scenario; I know this is a bug on the client
side.)

 When writing a file via NFS from the 10.5.2 client, the speeds are
60-70 MB/sec.
 When writing a file via Samba from the 10.5.2 client, the speeds are
30-50 MB/sec.

 When writing a file via NFS from the 10.4.11 client, the speeds are
20-30 MB/sec.

 When writing a file via Samba from a Windows XP client, the speeds are
30-40 MB/sec.

 I know that there are implementation differences between NFS and Samba on
the Mac OS X 10.4.11 and 10.5.2 clients, but that still does not explain
the Windows XP numbers.

 I was wondering if anyone else is experiencing similar issues, and whether
there is some tuning I can do, or whether I am just missing something.
Thanks in advance.

 Cheers,
 abs

   
  
  
 
-- 
Regards, Peter Brouwer
Sun Microsystems, Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office: +44 (0) 1506 672767
Mobile: +44 (0) 7720 598226
Skype: flyingdutchman_, flyingdutchman_l

 

   


Re: [zfs-discuss] Periodic flush

2008-03-27 Thread abs
You may want to try disabling the disk write cache on the single disk.
Also, for the RAID, disable 'host cache flush' if such an option exists;
that solved the problem for me.

Let me know.
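For the single-disk write-cache part of that advice, on Solaris the toggle is reachable through format's expert mode. The session is interactive, so this is only a transcript sketch, not a script:

```
# Run as root; interactive session:
# format -e
#   -> select the disk
#   -> cache
#   -> write_cache
#   -> disable
```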


Bob Friesenhahn [EMAIL PROTECTED] wrote:

On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:

 This causes the sync to happen much faster, but as you say, suboptimal.
 Haven't had the time to go through the bug report, but probably
 CR 6429205 each zpool needs to monitor its throughput
 and throttle heavy writers
 will help.

I hope that this feature is implemented soon, and works well. :-)

I tested with my application outputting to a UFS filesystem on a 
single 15K RPM SAS disk and saw that it writes about 50 MB/second, 
without the bursty behavior of ZFS.  When writing to a ZFS filesystem on 
a RAID array, zpool iostat reports an average (over 10 seconds) 
write rate of 54 MB/second.  Given that the throughput is not much 
higher on the RAID array, I assume that the bottleneck is in my 
application.

 Are the 'zpool iostat' statistics accurate?

 Yes. You could also look at regular iostat
 and correlate it.

iostat shows that my RAID array disks are loafing, with only 9 MB/second 
written to each but 82 writes/second.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
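The correlation Neelakanth suggests can be done by running both tools side by side over the same interval (the pool name here is hypothetical; a Solaris admin fragment rather than a portable script):

```shell
# Per-vdev ZFS view, 10-second samples
zpool iostat -v tank 10
# Per-device view in another terminal: extended stats, skip idle devices
iostat -xnz 10
```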



   


[zfs-discuss] nfs and smb performance

2008-03-27 Thread abs
Hello all,
I have two Xserve RAIDs connected via Fibre Channel to a PowerEdge 2950.
The two arrays are configured with two RAID 5 volumes each, giving me a
total of four RAID 5 volumes, which are striped together in ZFS.  The read
and write speeds local to the machine are as expected, but I have noticed
some performance hits in read and write speed over NFS and Samba.

Here is what I observe:

Each filesystem is shared via NFS as well as Samba.
I am able to mount via both NFS and Samba on a Mac OS X 10.5.2 client.
I am able to mount only via NFS on a Mac OS X 10.4.11 client.  (There
seems to be an authentication/encryption issue between the 10.4.11 client
and the Solaris box in this scenario; I know this is a bug on the client
side.)

When writing a file via NFS from the 10.5.2 client, the speeds are
60-70 MB/sec.
When writing a file via Samba from the 10.5.2 client, the speeds are
30-50 MB/sec.

When writing a file via NFS from the 10.4.11 client, the speeds are
20-30 MB/sec.

When writing a file via Samba from a Windows XP client, the speeds are
30-40 MB/sec.

I know that there are implementation differences between NFS and Samba on
the Mac OS X 10.4.11 and 10.5.2 clients, but that still does not explain
the Windows XP numbers.

I was wondering if anyone else is experiencing similar issues, and whether
there is some tuning I can do, or whether I am just missing something.
Thanks in advance.

Cheers,
abs
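For anyone wanting to reproduce numbers like the ones above, sequential write throughput can be approximated with a plain dd of a large file onto each mount. A minimal sketch (TARGET defaults to /tmp for a local baseline; point it at an NFS or SMB mount to test a protocol):

```shell
#!/bin/sh
# Rough sequential-write throughput probe.  Writes a 64 MiB file of
# zeros; dd's summary line reports bytes written and elapsed time.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of="$TARGET/zfs-perf-test" bs=1048576 count=64
# Confirm the file size (should be exactly 64 MiB = 67108864 bytes)
wc -c < "$TARGET/zfs-perf-test"
```

Comparing the reported rate across clients and protocols against the local baseline helps separate network/protocol overhead from pool performance.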






   