Here is another example of the performance problems I am seeing:

~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 56.2184 s, 37.3 MB/s
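
If it would help narrow this down, I can rerun the same test with a
forced flush at the end so write caching isn't skewing the number;
assuming GNU dd on this box, something like:

~# dd if=/dev/zero of=/pool0/ds.test bs=1024k count=2000 conv=fsync

which makes dd fsync the output file before it reports the throughput.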

37 MB/s seems like some sort of bad joke for this many disks. I can
write the same amount of data to a set of 6 SAS disks on a Dell
PERC6/i at 160 MB/s, and those disks are hosting 25 VMs and handling a
lot more IOPS than this box.

zpool iostat during the same time shows (columns are alloc, free,
read and write ops/s, and read and write bandwidth):
pool0       14.2T  25.3T    124  1.30K   981K  4.02M
pool0       14.2T  25.3T    277    914  2.16M  23.2M
pool0       14.2T  25.3T     65  4.03K   526K  90.2M
pool0       14.2T  25.3T     18  1.76K   136K  6.81M
pool0       14.2T  25.3T    460  5.55K  3.60M   111M
pool0       14.2T  25.3T    160      0  1.24M      0
pool0       14.2T  25.3T    182  2.34K  1.41M  33.3M
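
If it helps, I can also grab per-vdev numbers during the run to see
whether a single disk or vdev is dragging the whole pool down, e.g.:

~# zpool iostat -v pool0 1

(guessing that breakdown is what people would want to see here).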

The zeros and other low numbers don't make any sense. And as I
mentioned, the busy percent and service times on these disks are never
abnormally high, especially when compared to the much smaller,
better-performing pool I have.
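
For reference, those numbers are the kind of thing the extended iostat
view reports, e.g.:

~# iostat -xn 5

where %b is the busy percent and asvc_t the active service time per
device.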
