Today I updated the firmware on my StorageTek 2540 to the latest recommended version and am seeing radically different performance when testing with iozone than I saw in February 2008. I am using Solaris 10 U5 with all the latest patches.
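
For reference, an iozone run of roughly this form produces tables like the ones below (the pool path and the exact flags here are illustrative, not the literal command line):

        # auto mode, write/rewrite and read/reread only,
        # record sizes 64k-512k, single 32GB test file
        iozone -a -i 0 -i 1 -y 64k -q 512k -n 32g -g 32g -f /tank/iozone.tmp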

This is the performance achieved (on a 32GB file) in February last year:

              KB  reclen   write rewrite    read    reread
        33554432      64  279863  167138   458807   449817
        33554432     128  265099  250903   455623   460668
        33554432     256  265616  259599   451944   448061
        33554432     512  278530  294589   522930   471253

This is the new performance:

              KB  reclen   write rewrite    read    reread
        33554432      64   76688   27870   552106   555438
        33554432     128  103120  369527   538206   555049
        33554432     256  355237  366563   534333   553660
        33554432     512  379515  364515   535635   553940

When using the 64KB record length, the service times are terrible. At first I thought that my drive array must be broken, but now it looks like a change in ZFS caching behavior (i.e. the caching is gone!):

                 extended device statistics
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd1       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd2       1.3    0.3    6.8    2.0  0.0  0.0    1.7   0   0
sd10      0.0   99.3    0.0 12698.3  0.0 32.2  324.5   0  97
sd11      0.3  105.9   38.4 12753.3  0.0 31.8  299.9   0  99
sd12      0.0  100.2    0.0 12095.9  0.0 26.4  263.8   0  82
sd13      0.0  102.3    0.0 12959.7  0.0 31.0  303.4   0  94
sd14      0.1   97.2   12.8 12291.8  0.0 30.4  312.0   0  92
sd15      0.0   99.7    0.0 12057.5  0.0 26.0  260.8   0  80
sd16      0.1   98.8   12.8 12634.3  0.0 31.9  322.1   0  96
sd17      0.0   99.0    0.0 12522.2  0.0 30.9  312.0   0  94
sd18      0.2  102.1   25.6 12934.1  0.0 29.7  290.4   0  90
sd19      0.0  103.4    0.0 12486.3  0.0 32.0  309.1   0  97
sd20      0.0  105.0    0.0 12678.3  0.0 32.1  305.6   0  98
sd21      0.1  103.9   12.8 12501.7  0.0 31.2  299.6   0  96
sd22      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd23      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd28      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd29      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
nfs1      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
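
The statistics above are iostat's extended device output, captured with something along these lines while the 64KB write pass was running (the 30-second interval is only for illustration):

        iostat -x 30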

Notice that the peak performance with large block writes is much better than it was before, but the peak performance with smaller writes is much worse. While doing the smaller writes, the performance meter shows little blips every 10 seconds or so.
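
To watch for those bursts at the pool level, something like the following (assuming the pool is named "tank") should make the periodic write bursts visible as they happen:

        zpool iostat tank 1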

One change is that I had previously applied a firmware tweak from Joel Miller (apparently no longer at Sun) which tells the array to ignore cache sync commands (i.e. don't wait for disk). This updated firmware seems totally different, so it is unlikely that the old tweak still works. A CAM cache feature that I had tweaked (to disable write mirroring across controllers) is no longer present.
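
If the new firmware honors cache sync commands again, the host-side equivalent of that old tweak would be to stop ZFS from issuing them at all (reasonable only because the 2540 cache is battery-backed, and assuming the tunable exists in this Solaris 10 U5 kernel); a sketch, via /etc/system plus a reboot:

        * disable ZFS SYNCHRONIZE CACHE requests to the array
        set zfs:zfs_nocacheflush = 1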

Does anyone have advice on how performance might be improved for smaller block writes on huge files? Should I even care?

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
