Hello Bob,

Wednesday, April 15, 2009, 1:01:02 AM, you wrote:

BF> Today I updated the firmware on my StorageTek 2540 to the latest 
BF> recommended version and am seeing radically different performance 
BF> when testing with iozone than I did in February of 2008.  I am using 
BF> Solaris 10 U5 with all the latest patches.

BF> This is the performance achieved (on a 32GB file) in February last 
BF> year:

BF>                KB  reclen   write rewrite    read    reread
BF>          33554432      64  279863  167138   458807   449817
BF>          33554432     128  265099  250903   455623   460668
BF>          33554432     256  265616  259599   451944   448061
BF>          33554432     512  278530  294589   522930   471253

BF> This is the new performance:

BF>                KB  reclen   write rewrite    read    reread
BF>          33554432      64   76688   27870   552106   555438
BF>          33554432     128  103120  369527   538206   555049
BF>          33554432     256  355237  366563   534333   553660
BF>          33554432     512  379515  364515   535635   553940

BF> When using the 64KB record length, the service times are terrible.  At
BF> first I thought that my drive array must be broken but now it seems 
BF> like a change in the ZFS caching behavior (i.e. caching gone!):

BF>                   extended device statistics
BF> device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
BF> sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
BF> sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
BF> sd2       1.3    0.3    6.8    2.0  0.0  0.0    1.7   0   0
BF> sd10      0.0   99.3    0.0 12698.3  0.0 32.2  324.5   0  97
BF> sd11      0.3  105.9   38.4 12753.3  0.0 31.8  299.9   0  99
BF> sd12      0.0  100.2    0.0 12095.9  0.0 26.4  263.8   0  82
BF> sd13      0.0  102.3    0.0 12959.7  0.0 31.0  303.4   0  94
BF> sd14      0.1   97.2   12.8 12291.8  0.0 30.4  312.0   0  92
BF> sd15      0.0   99.7    0.0 12057.5  0.0 26.0  260.8   0  80
BF> sd16      0.1   98.8   12.8 12634.3  0.0 31.9  322.1   0  96
BF> sd17      0.0   99.0    0.0 12522.2  0.0 30.9  312.0   0  94
BF> sd18      0.2  102.1   25.6 12934.1  0.0 29.7  290.4   0  90
BF> sd19      0.0  103.4    0.0 12486.3  0.0 32.0  309.1   0  97
BF> sd20      0.0  105.0    0.0 12678.3  0.0 32.1  305.6   0  98
BF> sd21      0.1  103.9   12.8 12501.7  0.0 31.2  299.6   0  96
BF> sd22      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
BF> sd23      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
BF> sd28      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
BF> sd29      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
BF> nfs1      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0

BF> Notice that the peak performance with large block writes is much 
BF> better than it was before but the peak performance with smaller writes
BF> is much worse.  When doing the smaller writes, the performance meter 
BF> shows little blips every 10 seconds or so.

BF> One change is that I had applied a firmware tweak from Joel Miller 
BF> (apparently no longer at Sun) to tell the array to ignore cache sync 
BF> commands (i.e. don't wait for disk).  This updated firmware seems 
BF> totally different, so it is unlikely that the firmware tweak will still work.

Well, then you need to disable cache flushes on the ZFS side (or get the
firmware change working again), and it will make a difference.
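
For example (just a sketch, assuming your release already has the
zfs_nocacheflush tunable), adding the following line to /etc/system
and rebooting tells ZFS to stop issuing cache flush commands to the
array:

  set zfs:zfs_nocacheflush = 1

Only do this when the array cache is battery/NVRAM protected, since
ZFS will then no longer force data to stable storage when it commits
a transaction group.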



-- 
Best regards,
 Robert Milkowski
                                       http://milek.blogspot.com
