Re: [zfs-discuss] ZFS performance question over NFS

2011-08-19 Thread Thomas Nau
Hi Bob

 I don't know what the request pattern from filebench looks like, but it
 seems like your ZEUS RAM devices are not keeping up, or else many
 requests are bypassing the ZEUS RAM devices.

 Note that very large synchronous writes will bypass your ZEUS RAM device
 and go directly to a log in the main store.  Small (<= 128K) writes
 should directly benefit from the dedicated ZIL device.

 Find a copy of zilstat.ksh and run it while filebench is running in
 order to understand more about what is going on.
 
 Bob

The pattern looks like:

   N-Bytes  N-Bytes/s  N-Max-Rate    B-Bytes  B-Bytes/s  B-Max-Rate  ops  <=4kB  4-32kB  >=32kB
   9588656    9588656     9588656   88399872   88399872    88399872   90      0       0      90
   6662280    6662280     6662280   87031808   87031808    87031808   83      0       0      83
   6366728    6366728     6366728   72790016   72790016    72790016   79      0       0      79
   6316352    6316352     6316352   83886080   83886080    83886080   80      0       0      80
   6687616    6687616     6687616   84594688   84594688    84594688   92      0       0      92
   4909048    4909048     4909048   69238784   69238784    69238784   73      0       0      73
   6605280    6605280     6605280   81924096   81924096    81924096   79      0       0      79
   6895336    6895336     6895336   81625088   81625088    81625088   85      0       0      85
   6532128    6532128     6532128   87486464   87486464    87486464   90      0       0      90
   6925136    6925136     6925136   86118400   86118400    86118400   83      0       0      83
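
(Rough arithmetic: 70-88 MB/s of ZIL traffic at 73-92 ops/s works out to
about 1 MB per commit, and every single op lands in the >=32kB bucket,
so these all look like large synchronous writes.)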

So does it look good, bad, or ugly? ;)

Thomas


[zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Thomas Nau
Dear all.
We finally got all the parts for our new fileserver, following several
recommendations from this list. We use:

Dell R715, 96GB RAM, dual 8-core Opterons
1 10GE Intel dual-port NIC
2 LSI 9205-8e SAS controllers
2 DataON DNS-1600 JBOD chassis
46 Seagate Constellation SAS drives
2 STEC ZEUS RAM


The base zpool config utilizes 42 drives plus the STECs as mirrored
log devices. The Seagates are set up as a stripe of seven 6-drive RAIDZ2
vdevs plus, as mentioned, a dedicated ZIL made of the mirrored STECs.
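
For the record, the layout amounts to something like this (a sketch
only; the pool name and the cXtYdZ device names below are placeholders,
not our real ones):

   # zpool create tank \
       raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
       raidz2 c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0 \
       raidz2 c3t12d0 c3t13d0 c3t14d0 c3t15d0 c3t16d0 c3t17d0 \
       raidz2 c3t18d0 c3t19d0 c3t20d0 c3t21d0 c3t22d0 c3t23d0 \
       raidz2 c3t24d0 c3t25d0 c3t26d0 c3t27d0 c3t28d0 c3t29d0 \
       raidz2 c3t30d0 c3t31d0 c3t32d0 c3t33d0 c3t34d0 c3t35d0 \
       raidz2 c3t36d0 c3t37d0 c3t38d0 c3t39d0 c3t40d0 c3t41d0 \
       log mirror c4t0d0 c4t1d0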

As a quick'n'dirty check we ran filebench with the fileserver
workload. Running locally we get:

statfile1        5476ops/s   0.0mb/s    0.6ms/op   179us/op-cpu
deletefile1      5476ops/s   0.0mb/s    1.0ms/op   454us/op-cpu
closefile3       5476ops/s   0.0mb/s    0.0ms/op     5us/op-cpu
readfile1        5476ops/s 729.5mb/s    0.2ms/op   128us/op-cpu
openfile2        5477ops/s   0.0mb/s    0.8ms/op   204us/op-cpu
closefile2       5477ops/s   0.0mb/s    0.0ms/op     5us/op-cpu
appendfilerand1  5477ops/s  42.8mb/s    0.3ms/op   184us/op-cpu
openfile1        5477ops/s   0.0mb/s    0.9ms/op   209us/op-cpu
closefile1       5477ops/s   0.0mb/s    0.0ms/op     6us/op-cpu
wrtfile1         5477ops/s 688.4mb/s    0.4ms/op   220us/op-cpu
createfile1      5477ops/s   0.0mb/s    2.7ms/op  1068us/op-cpu
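
(For reference, the runs were along these lines; the target directory
and the 60-second run time below are placeholders rather than our exact
settings:)

   # filebench
   filebench> load fileserver
   filebench> set $dir=/tank/fbtest
   filebench> run 60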



with a single remote client (a similar Dell system) using NFS:

statfile1          90ops/s   0.0mb/s   27.6ms/op   145us/op-cpu
deletefile1        90ops/s   0.0mb/s   64.5ms/op   401us/op-cpu
closefile3         90ops/s   0.0mb/s   25.8ms/op    40us/op-cpu
readfile1          90ops/s  11.4mb/s    3.1ms/op   363us/op-cpu
openfile2          90ops/s   0.0mb/s   66.0ms/op   263us/op-cpu
closefile2         90ops/s   0.0mb/s   22.6ms/op   124us/op-cpu
appendfilerand1    90ops/s   0.7mb/s    0.5ms/op   101us/op-cpu
openfile1          90ops/s   0.0mb/s   72.6ms/op   269us/op-cpu
closefile1         90ops/s   0.0mb/s   43.6ms/op   189us/op-cpu
wrtfile1           90ops/s  11.2mb/s    0.2ms/op   211us/op-cpu
createfile1        90ops/s   0.0mb/s  226.5ms/op   709us/op-cpu



the same remote client, with synchronous writes disabled
(sync=disabled) on the server:

statfile1 479ops/s   0.0mb/s  6.2ms/op  130us/op-cpu
deletefile1   479ops/s   0.0mb/s 13.0ms/op  351us/op-cpu
closefile3480ops/s   0.0mb/s  3.0ms/op   37us/op-cpu
readfile1 480ops/s  62.7mb/s  0.8ms/op  174us/op-cpu
openfile2 480ops/s   0.0mb/s 14.1ms/op  235us/op-cpu
closefile2480ops/s   0.0mb/s  6.0ms/op  123us/op-cpu
appendfilerand1   480ops/s   3.7mb/s  0.2ms/op   53us/op-cpu
openfile1 480ops/s   0.0mb/s 13.7ms/op  235us/op-cpu
closefile1480ops/s   0.0mb/s 11.1ms/op  190us/op-cpu
wrtfile1  480ops/s  60.3mb/s  0.2ms/op  233us/op-cpu
createfile1   480ops/s   0.0mb/s 35.6ms/op  683us/op-cpu


Disabling the ZIL is not an option, but I expected much better
performance; in particular, the ZEUS RAM only gets us a speed-up of
about 1.8x.

Is this test realistic for a typical fileserver scenario, or does it
take many more clients to push the limits?
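
(For reference, sync was toggled for that last test roughly like this,
with "tank" standing in for the real pool name:)

   # zfs set sync=disabled tank    # benchmark only; drops sync-write guarantees
   # zfs set sync=standard tank    # back to the default afterwards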

Thanks
Thomas


Re: [zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Tim Cook
What are the specs on the client?
On Aug 18, 2011 10:28 AM, Thomas Nau thomas@uni-ulm.de wrote:
 [...]


Re: [zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Thomas Nau
Tim,
the client is identical to the server, but with no SAS drives attached.
Also, right now only one 1 Gbit Intel NIC is available.

Thomas


On 18.08.2011, at 17:49, Tim Cook t...@cook.ms wrote:

 What are the specs on the client?
 
 On Aug 18, 2011 10:28 AM, Thomas Nau thomas@uni-ulm.de wrote:
  [...]


Re: [zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Bob Friesenhahn

On Thu, 18 Aug 2011, Thomas Nau wrote:


Tim,
the client is identical to the server, but with no SAS drives attached.
Also, right now only one 1 Gbit Intel NIC is available.


I don't know what the request pattern from filebench looks like, but it
seems like your ZEUS RAM devices are not keeping up, or else many
requests are bypassing the ZEUS RAM devices.
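
One way to see which of the two it is would be to watch the log vdev's
throughput while the benchmark runs, e.g. ("tank" being a placeholder
pool name):

   # zpool iostat -v tank 5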


Note that very large synchronous writes will bypass your ZEUS RAM
device and go directly to a log in the main store.  Small (<= 128K)
writes should directly benefit from the dedicated ZIL device.
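
A related knob worth checking is the logbias property, since
logbias=throughput also steers synchronous writes away from the
dedicated log device ("tank" again being a placeholder):

   # zfs get logbias,sync tank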


Find a copy of zilstat.ksh and run it while filebench is running in 
order to understand more about what is going on.
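
(A typical invocation, assuming the script sits in the current
directory; the two arguments are the sampling interval in seconds and
the number of samples, but check the script's usage text:)

   # ./zilstat.ksh 10 6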


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss