> You mentioned that the pool was somewhat full, can you send the output
> of 'zpool iostat -v pool0'?
~# zpool iostat -v pool0
                       capacity     operations    bandwidth
pool                alloc   free   read  write   read  write
------------------  -----  -----  -----  -----  -----  -----
pool0               14.1T  25.4T    926  2.35K  7.20M  15.7M
  raidz1             673G   439G     42    117   335K   790K
    c5t5d0              -      -     20     20   167K   273K
    c5t6d0              -      -     20     20   167K   272K
    c5t7d0              -      -     20     20   167K   273K
    c5t8d0              -      -     20     20   167K   272K
  raidz1             710G   402G     38     84   309K   546K
    c5t9d0              -      -     18     16   158K   189K
    c5t10d0             -      -     18     16   157K   187K
    c5t11d0             -      -     18     16   158K   189K
    c5t12d0             -      -     18     16   157K   187K
  raidz1             719G   393G     43     95   348K   648K
    c5t13d0             -      -     20     17   172K   224K
    c5t14d0             -      -     20     17   171K   223K
    c5t15d0             -      -     20     17   172K   224K
    c5t16d0             -      -     20     17   172K   223K
  raidz1             721G   391G     42     96   341K   653K
    c5t21d0             -      -     20     16   170K   226K
    c5t22d0             -      -     20     16   169K   224K
    c5t23d0             -      -     20     16   170K   226K
    c5t24d0             -      -     20     16   170K   224K
  raidz1             721G   391G     43    100   342K   667K
    c5t25d0             -      -     20     17   172K   231K
    c5t26d0             -      -     20     17   172K   229K
    c5t27d0             -      -     20     17   172K   231K
    c5t28d0             -      -     20     17   172K   229K
  raidz1             721G   391G     43    101   341K   672K
    c5t29d0             -      -     20     18   173K   233K
    c5t30d0             -      -     20     18   173K   231K
    c5t31d0             -      -     20     18   173K   233K
    c5t32d0             -      -     20     18   173K   231K
  raidz1             722G   390G     42    100   339K   667K
    c5t33d0             -      -     20     19   171K   231K
    c5t34d0             -      -     20     19   172K   229K
    c5t35d0             -      -     20     19   171K   231K
    c5t36d0             -      -     20     19   171K   229K
  raidz1             709G   403G     42    107   341K   714K
    c5t37d0             -      -     20     20   171K   247K
    c5t38d0             -      -     20     19   170K   245K
    c5t39d0             -      -     20     20   171K   247K
    c5t40d0             -      -     20     19   170K   245K
  raidz1             744G   368G     39     79   316K   530K
    c5t41d0             -      -     18     16   163K   183K
    c5t42d0             -      -     18     15   163K   182K
    c5t43d0             -      -     18     16   163K   183K
    c5t44d0             -      -     18     15   163K   182K
  raidz1             737G   375G     44     98   355K   668K
    c5t45d0             -      -     21     18   178K   231K
    c5t46d0             -      -     21     18   178K   229K
    c5t47d0             -      -     21     18   178K   231K
    c5t48d0             -      -     21     18   178K   229K
  raidz1             733G   379G     43    103   344K   683K
    c5t49d0             -      -     20     19   175K   237K
    c5t50d0             -      -     20     19   175K   235K
    c5t51d0             -      -     20     19   175K   237K
    c5t52d0             -      -     20     19   175K   235K
  raidz1             732G   380G     43    104   344K   685K
    c5t53d0             -      -     20     19   176K   237K
    c5t54d0             -      -     20     19   175K   235K
    c5t55d0             -      -     20     19   175K   237K
    c5t56d0             -      -     20     19   175K   235K
  raidz1             733G   379G     43    101   344K   672K
    c5t57d0             -      -     20     17   175K   233K
    c5t58d0             -      -     20     17   174K   231K
    c5t59d0             -      -     20     17   175K   233K
    c5t60d0             -      -     20     17   174K   231K
  raidz1             806G  1.38T     50    123   401K   817K
    c5t61d0             -      -     24     22   201K   283K
    c5t62d0             -      -     24     22   201K   281K
    c5t63d0             -      -     24     22   201K   283K
    c5t64d0             -      -     24     22   201K   281K
  raidz1             794G  1.40T     47    120   377K   786K
    c5t65d0             -      -     22     23   194K   272K
    c5t66d0             -      -     22     23   194K   270K
    c5t67d0             -      -     22     23   194K   272K
    c5t68d0             -      -     22     23   194K   270K
  raidz1             788G  1.40T     47    115   376K   763K
    c5t69d0             -      -     22     22   191K   264K
    c5t70d0             -      -     22     22   191K   262K
    c5t71d0             -      -     22     22   191K   264K
    c5t72d0             -      -     22     22   191K   262K
  raidz1             786G  1.40T     46    106   373K   723K
    c5t73d0             -      -     22     18   185K   250K
    c5t74d0             -      -     22     19   185K   248K
    c5t75d0             -      -     22     18   185K   250K
    c5t76d0             -      -     22     19   185K   248K
  raidz1             767G  1.42T     40     79   323K   534K
    c5t77d0             -      -     19     16   165K   185K
    c5t78d0             -      -     19     16   165K   183K
    c5t79d0             -      -     19     16   165K   185K
    c5t80d0             -      -     19     16   165K   183K
  c5t2d0             3.40M  46.5G      0      4      0  90.0K
  mirror             61.7G  1.75T      4     25  33.2K   149K
    c5t81d0             -      -      1      9  24.3K   149K
    c5t82d0             -      -      1      9  24.3K   149K
  mirror              140G  1.68T     19     71   158K   504K
    c5t83d0             -      -      6     13  95.2K   504K
    c5t84d0             -      -      6     14  96.6K   504K
  mirror              141G  1.67T     18     79   148K   535K
    c5t85d0             -      -      6     16  93.0K   535K
    c5t86d0             -      -      6     16  93.5K   535K
  mirror              131G  1.68T     20     65   166K   419K
    c5t87d0             -      -      6     14   156K   419K
    c5t97d0             -      -      4     20  66.7K   683K
  mirror              145G  1.67T     19     77   157K   525K
    c5t89d0             -      -      6     15  97.4K   525K
    c5t90d0             -      -      6     15  97.7K   525K
  mirror              147G  1.67T     18     80   152K   539K
    c5t91d0             -      -      6     15  96.2K   539K
    c5t92d0             -      -      6     15  95.3K   539K
  mirror              150G  1.67T     19     81   156K   547K
    c5t93d0             -      -      6     15  98.1K   547K
    c5t94d0             -      -      6     16  97.7K   547K
  mirror              155G  1.66T     19     80   154K   538K
    c5t95d0             -      -      6     16  97.1K   538K
    c5t96d0             -      -      6     17  97.3K   538K
  c5t18d0            3.11M  46.5G      0      4      0  91.2K
------------------  -----  -----  -----  -----  -----  -----

> You can also try doing the following to
> reduce 'metaslab_min_alloc_size' to 4K:
>
> echo "metaslab_min_alloc_size/Z 1000" | mdb -kw
>
> NOTE: This will change the running system so you may want to make this
> change during off-peak hours.
> Then check your performance and see if it makes a difference.
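[Editor's note, not part of the original exchange: the "1000" in the suggested
mdb command may look like it sets the variable to 1000 bytes, but mdb
interprets written values as hexadecimal, so it actually stores 0x1000 = 4096
bytes, matching the "4K" mentioned in the quote. A quick sanity check of the
arithmetic (not of the mdb command itself):]

```shell
# mdb's `/Z 1000` writes the hex value 0x1000; confirm that this is 4 KiB.
printf '0x1000 = %d bytes = %d KiB\n' "$((0x1000))" "$((0x1000 / 1024))"
```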
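[Editor's note: a back-of-the-envelope reading of the iostat output above.
Although pool0 is only about 36% full overall (14.1T of 39.5T), the original
~1.1T raidz1 vdevs are in the 60-67% range while the later vdevs and mirrors
are much emptier, the kind of imbalance where per-vdev metaslab allocation
behavior can matter. A small sketch of that arithmetic; the vdev labels below
are just positions in the listing, not real device names:]

```python
# Per-vdev fullness from the alloc/free columns of the iostat output above.
def pct_full(alloc_gib: float, free_gib: float) -> float:
    """Percentage of a vdev's space that is allocated."""
    return 100.0 * alloc_gib / (alloc_gib + free_gib)

# (label, alloc GiB, free GiB) -- a few representative rows from the table.
vdevs = [
    ("raidz1 (1st)",  673.0, 439.0),
    ("raidz1 (9th)",  744.0, 368.0),
    ("raidz1 (14th)", 806.0, 1.38 * 1024),  # 1.38T free
    ("mirror (1st)",  61.7,  1.75 * 1024),  # 1.75T free
]
for label, alloc, free in vdevs:
    print(f"{label:14s} {pct_full(alloc, free):5.1f}% full")

# The pool as a whole (14.1T allocated, 25.4T free, in TiB here):
print(f"{'pool0 overall':14s} {pct_full(14.1, 25.4):5.1f}% full")
```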
I'll make this change tonight and see if it helps.

Thanks,
-Don

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss