Hi,

Very interesting suggestions, as I'm contemplating a Supermicro-based server 
for my work as well, though probably on a lower budget, as a backup store for 
an aging Thumper (not as its superior replacement).

Still, I have a couple of questions regarding your raidz layout recommendation.

On one hand, I've read that as current drives get larger (while their random 
IOPS and throughput don't grow nearly as fast from generation to generation), 
it becomes more and more reasonable to use RAIDZ3 with three parity drives, at 
least for vdevs made of many disks - a dozen or so. When one drive fails, you 
still have two levels of parity left, and with a resilver window expected to 
run from hours to days, I'd want that airbag, to say the least. You know, 
failures rarely come one at a time ;)
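
For a rough sense of that window, here's a back-of-envelope sketch in Python 
(the 3 TB drive size and the 100 MB/s sustained rate are assumptions of mine, 
not measured figures, and real resilvers on fragmented pools can be far 
slower):

    # Best-case resilver window, assuming the resilver is limited by a
    # single drive's sustained sequential rate; fragmentation and
    # concurrent load only make this longer in practice.
    drive_size_bytes = 3 * 10**12    # assumed 3 TB drive
    rate_bytes_per_s = 100 * 10**6   # assumed 100 MB/s sustained
    hours = drive_size_bytes / rate_bytes_per_s / 3600.0
    print("best-case resilver: %.1f hours" % hours)  # ~8.3 hours

So even under friendly assumptions you spend most of a working day with 
reduced redundancy, which is exactly when the extra parity earns its keep.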

On the other hand, I've recently seen many recommendations that in a RAIDZ* 
drive set, the number of data disks should be a power of two. That way, ZFS 
blocks/stripes - and those of its users (like databases) which are inclined 
to use 2^N-sized blocks - can often be serviced in a single IO burst across 
all drives, instead of "one and one-quarter IOs" on average, where some disks 
in the vdev are still busy processing the leftovers of a previous request 
while their peers sit waiting, delaying IOs to other stripes.
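
To put numbers on that alignment argument, a small sketch (assuming a 128 KiB 
record and 4 KiB sectors, and ignoring parity and padding details, so this is 
illustrative only):

    # How one 128 KiB record spreads across the data disks of a vdev;
    # sector size assumed 4 KiB, parity/padding ignored for clarity.
    record, sector = 128 * 1024, 4 * 1024
    for data_disks in (8, 9):        # 8 is a power of two, 9 is not
        full, extra = divmod(record // sector, data_disks)
        print("%d data disks: %d full rows + %d leftover sectors"
              % (data_disks, full, extra))
    # 8 data disks: 4 full rows + 0 leftover sectors
    # 9 data disks: 3 full rows + 5 leftover sectors

With 9 data disks, five of them service an extra row while the other four sit 
idle - the "one and one-quarter IOs" situation above.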

In the case of RAIDZ2, this recommendation leads to vdevs of 6 (4+2), 10 (8+2) 
or 18 (16+2) disks - the latter being the size mentioned in the original post.
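
Spelled out as arithmetic (trivial, but it shows where those sizes come from):

    parity = 2                       # RAIDZ2
    for n in (2, 3, 4):              # 4, 8, 16 data disks
        print("%d data + %d parity = %d-disk vdev"
              % (2**n, parity, 2**n + parity))
    # 4 data + 2 parity = 6-disk vdev
    # 8 data + 2 parity = 10-disk vdev
    # 16 data + 2 parity = 18-disk vdev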

Did you consider this aspect, or test whether the theoretical warnings hold in 
practice?

Thanks,
//Jim Klimov