Re: [zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-02-05 Thread Albert Shih
 On 2013-02-04 at 11:21:12 -0500, Paul Kraus wrote:
 On Jan 31, 2013, at 5:16 PM, Albert Shih wrote:
 
  Well, I have a server running FreeBSD 9.0 with a 36-disk ZFS pool
  (not counting /, which is on different disks).
  
  The performance is very very good on the server.
  
  I have one NFS client running FreeBSD 8.3, and the performance over NFS
  is very good:
  
  For example : Read from the client and write over NFS to ZFS:
  
  [root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar
  
  real    1m7.244s
  user    0m0.921s
  sys     0m8.990s
  
  This client is on a 1 Gbit/s link and the same network switch as
  the server.
  
  I have a second NFS client running FreeBSD 9.1-STABLE, and on this
  second client the performance is catastrophic. After 1 hour the tar
  still isn't finished. Granted, this second client is connected at
  100 Mbit/s and not on the same switch, but going from ~2 min to
  ~90 min... :-(
  
  For this second client, I tried setting on the ZFS/NFS server:
  
  zfs set sync=disabled
  
  and that changed nothing.
 
 I have been using FreeBSD 9 with ZFS and NFS to a couple of Mac OS X
 (10.6.8 Snow Leopard) boxes, and I get between 40 and 50 MB/sec.

Thanks for your answer.

Can you give me the average ping time between your client and the NFS server?
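
For context: extracting a tarball over NFS creates many small files, and each
file costs several synchronous RPC round trips, so the RTT tends to dominate
the total time. A quick way to measure it (the hostname is a placeholder):

    # 20 pings, print only the summary statistics
    ping -c 20 -q nfs-server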

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
xmpp: j...@obspm.fr
Local time:
Tue 5 Feb 2013 16:15:11 CET


[zfs-discuss] SSD pools (again)

2013-02-05 Thread Tyler Walter
Hi all, long time lurker, first time poster.

As SSD per-GB prices continue to drop at a healthy rate, I've been thinking more 
and more about the benefit of creating pools out of SSDs rather than HDDs. My 
virtualization hosts seem to have at least a 2x dedup ratio, but I've been 
cautious about dedup due to the reports here of much more random I/O.

I'm wondering what experience or input others might already have with running 
SSD pools, and whether doing so might be a magic pill that makes some ZFS 
features, such as dedup and/or wider vdev stripes, mostly pain-free.

My assumption is that the SSD pool could handle the higher random I/O 
requirement without really needing to worry so much about ARC or L2ARC 
requirements for dedup. Would this be true?
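
For what it's worth, one way to sanity-check that assumption before building
the pool is to estimate the dedup table (DDT) footprint. zdb can simulate
dedup on an existing pool, and a common rule of thumb is roughly 320 bytes of
in-core DDT per unique block (the pool name below is a placeholder):

    # simulate dedup and print the would-be DDT histogram and ratio
    zdb -S tank

    # back-of-envelope: unique blocks x ~320 B = in-core DDT size
    # e.g. 10 TB of unique data at 64 KB average block size:
    #   10 TB / 64 KB ~= 160M blocks; 160M x 320 B ~= 50 GB of DDT

Even on an all-SSD pool the DDT lookups still happen; the SSDs mostly make
the cache misses far cheaper than they would be on HDDs.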

If I wanted a pool that spans multiple chassis, should I be concerned about 
sustaining an elevated IOPS rate if I were using a handful of JBODs/SAS 
expanders, each connected with an external 6 Gbit/s link?
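
(Rough numbers, assuming SAS with 8b/10b encoding: a single 6 Gbit/s lane is
about 600 MB/s usable, and a typical x4 wide port about 2.2-2.4 GB/s. At
~400 MB/s of sequential throughput per SATA/SAS SSD, five or six drives can
already saturate a wide port, so for streaming workloads the external link
bottlenecks before the SSDs do; small random IOPS leave considerably more
headroom.)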

Anything else I might be overlooking?

Any input would be appreciated!

Thanks,
Tyler


Re: [zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-02-05 Thread Sašo Kiselkov
On 01/31/2013 11:16 PM, Albert Shih wrote:
 Hi all,
 
 I'm not sure if the problem is with FreeBSD or ZFS or both, so I'm
 cross-posting (I know it's bad).
 
 Well, I have a server running FreeBSD 9.0 with a 36-disk ZFS pool
 (not counting /, which is on different disks).
 
 The performance is very very good on the server.
 
 I have one NFS client running FreeBSD 8.3, and the performance over NFS is
 very good:
 
 For example : Read from the client and write over NFS to ZFS:
 
 [root@ .tmp]# time tar xf /tmp/linux-3.7.5.tar 
 
 real    1m7.244s
 user    0m0.921s
 sys     0m8.990s
 
 This client is on a 1 Gbit/s link and the same network switch as the
 server.
 
 I have a second NFS client running FreeBSD 9.1-STABLE, and on this second
 client the performance is catastrophic. After 1 hour the tar still isn't
 finished. Granted, this second client is connected at 100 Mbit/s and not on
 the same switch, but going from ~2 min to ~90 min... :-(
 
 For this second client, I tried setting on the ZFS/NFS server:
 
   zfs set sync=disabled 
 
 and that changed nothing.
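
 The setting is per-dataset; for the record, I verified it took effect like
 this (tank/export is a placeholder for the exported dataset):

   # set on the dataset actually exported over NFS, then verify
   zfs set sync=disabled tank/export
   zfs get sync tank/export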
 
 On a third NFS client running Linux (recent Ubuntu) I get almost the same
 catastrophic performance, with or without sync=disabled.
 
 All three NFS clients use TCP.
 
 If I do a classic scp I get a normal speed of ~9-10 Mbytes/s, so the network
 is not the problem.
 
 I tried tuning a few settings, something like this (found with Google):
 
   net.inet.tcp.sendbuf_max: 2097152 -> 16777216
   net.inet.tcp.recvbuf_max: 2097152 -> 16777216
   net.inet.tcp.sendspace: 32768 -> 262144
   net.inet.tcp.recvspace: 65536 -> 262144
   net.inet.tcp.mssdflt: 536 -> 1452
   net.inet.udp.recvspace: 42080 -> 65535
   net.inet.udp.maxdgram: 9216 -> 65535
   net.local.stream.recvspace: 8192 -> 65535
   net.local.stream.sendspace: 8192 -> 65535
 
 
 and that changed nothing either.
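
 These were applied at runtime with sysctl(8); making them survive a reboot
 would mean adding them to /etc/sysctl.conf. For example:

   # runtime change
   sysctl net.inet.tcp.sendbuf_max=16777216
   # persistent across reboots
   echo 'net.inet.tcp.sendbuf_max=16777216' >> /etc/sysctl.conf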
 
 Anyone have any idea?

What you describe sounds like a bad networking issue. Check your network
via the usual tools like ping, mtr, netperf, etc. Verify cabling and
interface counters on your machines too, for stuff like CRC errors or
jabbers - a few of those and the throughput of a TCP link goes down the
drain.
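
A minimal first pass with stock FreeBSD tools might look like this (interface
and host names are placeholders):

  # per-interface error counters: watch the Ierrs/Oerrs/Colls columns
  netstat -i

  # latency and loss over a larger sample
  ping -c 200 -q nfs-server

  # protocol-level retransmit and drop statistics
  netstat -s -p tcp | egrep -i 'retrans|drop'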

Cheers,
--
Saso


Re: [zfs-discuss] zfs + NFS + FreeBSD with performance prob

2013-02-05 Thread Sašo Kiselkov
On 02/05/2013 05:04 PM, Sašo Kiselkov wrote:
 On 01/31/2013 11:16 PM, Albert Shih wrote:
 [original problem report quoted in full; snipped here, see above]
 
 What you describe sounds like a bad networking issue. Check your network
 via the usual tools like ping, mtr, netperf, etc. Verify cabling and
 interface counters on your machines too, for stuff like CRC errors or
 jabbers - a few of those and the throughput of a TCP link goes down the
 drain.

Just one more thing: simply doing an scp test need not reveal the problem.
scp traffic is largely unidirectional, and you may be hitting an issue in the
opposite direction (I've seen TP cables where one pair was fine and the other
was giving bad data).

Also check for dropped packets on your source and target machines via
tools like DTrace.
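
A sketch of both approaches; the DTrace one-liner assumes the fbt provider is
available and that your kernel exports a tcp_drop symbol (normally true on
stock FreeBSD 9, but worth checking with dtrace -l first):

  # cumulative counters since boot
  netstat -s -p tcp | egrep -i 'drop|retrans'

  # live: count tcp_drop() calls and record kernel stacks
  dtrace -n 'fbt::tcp_drop:entry { @[stack()] = count(); }'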

Cheers,
--
Saso