Re: [zfs-discuss] StorageTek 2540 performance radically changed

2009-04-20 Thread Robert Milkowski
Hello Bob,

Wednesday, April 15, 2009, 1:01:02 AM, you wrote:

BF Today I updated the firmware on my StorageTek 2540 to the latest 
BF recommended version and am seeing radically different performance 
BF when testing with iozone than I did in February of 2008.  I am using 
BF Solaris 10 U5 with all the latest patches.

BF This is the performance achieved (on a 32GB file) in February last 
BF year:

BF        KB  reclen   write rewrite     read   reread
BF  33554432  64  279863  167138   458807   449817
BF  33554432 128  265099  250903   455623   460668
BF  33554432 256  265616  259599   451944   448061
BF  33554432 512  278530  294589   522930   471253

BF This is the new performance:

BF        KB  reclen   write rewrite     read   reread
BF  33554432  64   76688   27870   552106   555438
BF  33554432 128  103120  369527   538206   555049
BF  33554432 256  355237  366563   534333   553660
BF  33554432 512  379515  364515   535635   553940

BF When using the 64KB record length, the service times are terrible.  At
BF first I thought that my drive array must be broken but now it seems 
BF like a change in the ZFS caching behavior (i.e. caching gone!):

BF   extended device statistics
BF devicer/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b
BF sd0   0.00.00.00.0  0.0  0.00.0   0   0
BF sd1   0.00.00.00.0  0.0  0.00.0   0   0
BF sd2   1.30.36.82.0  0.0  0.01.7   0   0
BF sd10  0.0   99.30.0 12698.3  0.0 32.2  324.5   0  97
BF sd11  0.3  105.9   38.4 12753.3  0.0 31.8  299.9   0  99
BF sd12  0.0  100.20.0 12095.9  0.0 26.4  263.8   0  82
BF sd13  0.0  102.30.0 12959.7  0.0 31.0  303.4   0  94
BF sd14  0.1   97.2   12.8 12291.8  0.0 30.4  312.0   0  92
BF sd15  0.0   99.70.0 12057.5  0.0 26.0  260.8   0  80
BF sd16  0.1   98.8   12.8 12634.3  0.0 31.9  322.1   0  96
BF sd17  0.0   99.00.0 12522.2  0.0 30.9  312.0   0  94
BF sd18  0.2  102.1   25.6 12934.1  0.0 29.7  290.4   0  90
BF sd19  0.0  103.40.0 12486.3  0.0 32.0  309.1   0  97
BF sd20  0.0  105.00.0 12678.3  0.0 32.1  305.6   0  98
BF sd21  0.1  103.9   12.8 12501.7  0.0 31.2  299.6   0  96
BF sd22  0.00.00.00.0  0.0  0.00.0   0   0
BF sd23  0.00.00.00.0  0.0  0.00.0   0   0
BF sd28  0.00.00.00.0  0.0  0.00.0   0   0
BF sd29  0.00.00.00.0  0.0  0.00.0   0   0
BF nfs1  0.00.00.00.0  0.0  0.00.0   0   0

BF Notice that the peak performance with large block writes is much 
BF better than it was before but the peak performance with smaller writes
BF is much worse.  When doing the smaller writes, the performance meter 
BF shows little blips every 10 seconds or so.

BF One change is that I had applied a firmware tweak from Joel Miller 
BF (apparently no longer at Sun) to tell the array to ignore cache sync 
BF commands (i.e. don't wait for disk).  This updated firmware seems 
BF totally different so it is unlikely that the firmware tweak will work.

Well, you need to disable cache flushes on the ZFS side then (or make
the firmware change work), and it will make a difference.
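
For reference, on Solaris 10 of this era the ZFS-side switch is the
zfs_nocacheflush tunable.  A minimal sketch, and only sensible when the
array cache really is non-volatile (battery-backed):

   # /etc/system -- stop ZFS from issuing SYNCHRONIZE CACHE to the array
   # (takes effect after a reboot)
   set zfs:zfs_nocacheflush = 1

   # or flip it on the running kernel (not persistent across reboots)
   echo zfs_nocacheflush/W0t1 | mdb -kw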



-- 
Best regards,
 Robert Milkowski
   http://milek.blogspot.com



Re: [zfs-discuss] StorageTek 2540 performance radically changed

2009-04-20 Thread Bob Friesenhahn

On Tue, 21 Apr 2009, Robert Milkowski wrote:


BF One change is that I had applied a firmware tweak from Joel Miller
BF (apparently no longer at Sun) to tell the array to ignore cache sync
BF commands (i.e. don't wait for disk).  This updated firmware seems
BF totally different so it is unlikely that the firmware tweak will work.

Well, you need to disable cache flushes on the ZFS side then (or make
the firmware change work), and it will make a difference.


Based on results obtained when I re-ran the benchmark, it seems that 
these various tweaks are either no longer needed or were carried 
forward from the older firmware.  I don't know what was going on the 
first time I ran the benchmark and saw odd performance with 16GB files.


Large file writes are now almost at wire speed, given that I have two 4 
Gbit FC optical links and am using mirroring with the mirror pairs 
carefully split across the two controllers.
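
(For anyone wondering what that layout looks like, a sketch with
made-up device names; each vdev pairs a disk reached through one
controller with a disk reached through the other:)

   # hypothetical device names: c4 = a path through controller A,
   # c5 = a path through controller B, one disk from each per mirror
   zpool create tank mirror c4t0d0 c5t0d0  mirror c4t1d0 c5t1d0 \
                     mirror c4t2d0 c5t2d0  mirror c4t3d0 c5t3d0 \
                     mirror c4t4d0 c5t4d0  mirror c4t5d0 c5t5d0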


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] StorageTek 2540 performance radically changed

2009-04-20 Thread Torrey McMahon

On 4/20/2009 7:26 PM, Robert Milkowski wrote:

Well, you need to disable cache flushes on the ZFS side then (or make
the firmware change work), and it will make a difference.


If you're running recent OpenSolaris/Solaris/SX builds, you shouldn't 
have to disable cache flushing on the array.  The driver stack should set 
the correct modes.
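
If in doubt, it is easy to confirm on a live system whether the ZFS-side
override is in place; a quick sketch:

   # print the current value; the default 0 means ZFS still sends
   # SYNCHRONIZE CACHE commands down to the array
   echo zfs_nocacheflush/D | mdb -k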



Re: [zfs-discuss] StorageTek 2540 performance radically changed

2009-04-20 Thread Bob Friesenhahn

On Mon, 20 Apr 2009, Torrey McMahon wrote:


On 4/20/2009 7:26 PM, Robert Milkowski wrote:

Well, you need to disable cache flushes on the ZFS side then (or make
the firmware change work), and it will make a difference.


If you're running recent OpenSolaris/Solaris/SX builds, you shouldn't have to 
disable cache flushing on the array. The driver stack should set the correct 
modes.


Whatever Sun did with this new firmware (and of course the ZFS 
enhancements) did wonderful things.


This is the type of performance I am now seeing from the six mirror 
pairs:


Synchronous random writes with 8k blocks and 8 writers:
  3708.89 ops/sec

Large file write:
  359MB/second

Large file read:
  550MB/second

All of which is much better than before.
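
(The message doesn't name the tool behind the ops/sec figure; for a
roughly comparable load, an iozone throughput run along these lines
should do.  The flags are an assumption, not the exact invocation used:)

   # create the files, then do random reads/writes: 8 threads,
   # 8 KB records, O_SYNC writes, results reported in ops/sec
   iozone -i 0 -i 2 -t 8 -r 8k -s 2g -o -O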

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] StorageTek 2540 performance radically changed

2009-04-14 Thread Bob Friesenhahn
I should have allowed the iozone run to go further.  What is really 
interesting is that performance is very much tied to file size:


      KB  reclen   write rewrite     read   reread
33554432  64   76688   27870   552106   555438
33554432 128  103120  369527   538206   555049
33554432 256  355237  366563   534333   553660
33554432 512  379515  364515   535635   553940
67108864  64  354186   41313   553426   555094
67108864 128  354590  352197   551686   555935
67108864 256  357401  351549   552545   556920
67108864 512  359299  356335   551557   555180

This machine has 20GB of RAM, and I notice that arcsz is frozen at 10GB 
during the benchmark.  Notice that with the 32GB file the performance 
is very poor at short record lengths, but for some reason performance 
becomes very good at all tested record lengths with the 64GB file.


I am running across a broader span of file sizes now to see what is 
going on.
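
(For reference, a sweep like the one in the table can be driven with
iozone's auto mode; a sketch, with a hypothetical target path:)

   # write/rewrite and read/reread, file sizes 32GB to 64GB,
   # record lengths 64KB to 512KB
   iozone -a -i 0 -i 1 -n 32g -g 64g -y 64k -q 512k -f /pool/iozone.tmp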


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/