On Thu, 23 Apr 2009, Rince wrote:
> I presume I'm missing something, but I have no idea what. Halp?
The main thing to be aware of with raidz and raidz2 is that the devices in a vdev are basically "chained together", so you get effectively one disk's worth of IOPS per vdev. They are very space-efficient but not so good for multi-user workloads.
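To make that concrete (pool and device names below are just placeholders): a pool built from several smaller vdevs spreads random I/O across more vdevs than one wide raidz2 does, at the cost of capacity:

  # one wide raidz2 vdev: roughly one disk's worth of random IOPS
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
      c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0

  # five mirror vdevs from the same ten disks: roughly five disks'
  # worth of random IOPS, but only half the raw capacity is usable
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0 mirror c0t6d0 c0t7d0 mirror c0t8d0 c0t9d0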
Reporting random read or write throughput is not a useful measure unless you know how many users will need that throughput simultaneously. The blocksize also makes a difference: throughput with 128K blocks is obviously different from throughput with 8K blocks.
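To put rough numbers on it (illustrative figures only, not measurements from your pool): if a vdev tops out at around 200 random IOPS, then 128K blocks give roughly 200 x 128K = ~25 MB/s, while 8K blocks give only about 200 x 8K = ~1.6 MB/s for the same number of operations.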
Since you have 10 disks in your raidz2, they all need to be performing similarly well in order to obtain the optimum performance. Even one pokey disk will cause the whole vdev to suffer. You can use 'iostat -nx 10' (or similar) to discover whether you have a pokey disk which is bottlenecking your pool.
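When you run it under load, compare the asvc_t (average service time) and %b (percent busy) columns across the disks; a drive whose asvc_t is several times higher than its neighbors' is usually the one dragging the vdev down.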
If your pool design is bottlenecked on IOPS, then a single heavy IOPS consumer can cause performance for other users to suffer. Sometimes IOPS consumers come from unexpected places such as Firefox. Scripts from the DTrace Toolkit can help identify which processes are consuming the IOPS.
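For example (script names from the DTrace Toolkit; exact options may vary by version), something like:

  # per-process disk I/O, refreshed every ten seconds
  iotop 10

  # per-process read/write activity at the syscall layer
  rwtop 10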
Using an SSD as your log device will help with higher-level synchronous write performance, but it will not be able to heal a problem with your raidz2 vdev, which still needs to do individual disk transactions at a lower level.
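For reference, attaching a log device is a one-liner (pool and device names are placeholders):

  # dedicate an SSD as a separate ZIL log device
  zpool add tank log c2t0d0

It only helps workloads that issue synchronous writes (NFS, databases, etc.); asynchronous bulk writes will not see much difference.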
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/