Ian Collins wrote:
I was running zpool iostat on a pool comprising a stripe of raidz2 vdevs that appears to be writing slowly, and I noticed a considerable imbalance in both free space and write operations. The pool is currently feeding a tape backup while receiving a large filesystem.

Is this imbalance normal? I would expect a more even distribution, as the pool configuration hasn't been changed since creation.

The second and third vdevs are pretty much full, while the others have well over ten times more free space, so I wouldn't expect many writes to go to the full ones.

Have the others ever been in a degraded state? That might explain why the fill level has become unbalanced.
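If you want to check, something like

  zpool status -v tank
  zpool history tank

should show the current state of each vdev and the history of commands run against the pool (including any past replace or attach operations). The spare c7t5d0 sitting under c5t2d0 in the first vdev does suggest a hot spare has kicked in for at least one drive at some point.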


The system is running Solaris 10 update 7.
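The table below was captured with an invocation along these lines; the 10-second sampling interval is only an example:

  zpool iostat -v tank 10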

                capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
tank          15.9T  2.19T     87    119  2.34M  1.88M
 raidz2      2.90T   740G     24     27   762K  95.5K
   c0t1d0        -      -     14     13   273K  18.6K
   c1t1d0        -      -     15     13   263K  18.3K
   c4t2d0        -      -     17     14   288K  18.2K
   spare         -      -     17     20   104K  17.2K
     c5t2d0      -      -     16     13   277K  17.6K
     c7t5d0      -      -      0     14      0  17.6K
   c6t3d0        -      -     15     12   242K  18.7K
   c7t3d0        -      -     15     12   242K  17.6K
   c6t4d0        -      -     16     12   272K  18.1K
   c1t0d0        -      -     15     13   275K  16.8K
 raidz2      3.59T  37.8G     20      0   546K      0
   c0t2d0        -      -     11      0   184K    361
   c1t3d0        -      -     10      0   182K    361
   c4t5d0        -      -     14      0   237K    361
   c5t5d0        -      -     13      0   220K    361
   c6t6d0        -      -     12      0   155K    361
   c7t6d0        -      -     11      0   149K    361
   c7t4d0        -      -     14      0   219K    361
   c4t0d0        -      -     14      0   213K    361
 raidz2      3.58T  44.1G     27      0  1.01M      0
   c0t5d0        -      -     16      0   290K    361
   c1t6d0        -      -     15      0   301K    361
   c4t7d0        -      -     20      0   375K    361
   c5t1d0        -      -     19      0   374K    361
   c6t7d0        -      -     17      0   285K    361
   c7t7d0        -      -     15      0   253K    361
   c0t0d0        -      -     18      0   328K    361
   c6t0d0        -      -     18      0   348K    361
 raidz2      3.05T   587G      7     47  24.9K  1.07M
   c0t4d0        -      -      3     21   254K   187K
   c1t2d0        -      -      3     22   254K   187K
   c4t3d0        -      -      5     22   350K   187K
   c5t3d0        -      -      5     21   350K   186K
   c6t2d0        -      -      4     22   265K   187K
   c7t1d0        -      -      4     21   271K   187K
   c6t1d0        -      -      5     22   345K   186K
   c4t1d0        -      -      5     24   333K   184K
 raidz2      2.81T   835G      8     45  30.9K   733K
   c0t3d0        -      -      5     16   339K   126K
   c1t5d0        -      -      5     16   333K   126K
   c4t6d0        -      -      6     16   441K   127K
   c5t6d0        -      -      6     17   435K   126K
   c6t5d0        -      -      4     18   294K   126K
   c7t2d0        -      -      4     18   282K   124K
   c0t6d0        -      -      7     19   446K   124K
   c5t7d0        -      -      7     21   452K   122K
------------  -----  -----  -----  -----  -----  -----
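A rough way to read these numbers: ZFS steers new allocations toward the vdevs with the most free space. The two nearly full vdevs have about 37.8G + 44.1G ≈ 82G free out of roughly 2.19T free pool-wide, i.e. under 4% of the available space, so they would be expected to receive almost none of the incoming writes, which matches the zero write ops reported for them.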
--
Andrew
