Since you can add multiple vdevs to a pool, my suggestion would be to build the
pool out of several smaller raidz1 or raidz2 vdevs.

With your setup (assuming 2 HBAs @ 24 drives each), you would have ended up
with about 20 drives' worth of usable storage: one raidz2 with 2 spares on each
HBA gives 24 - 2 spares - 2 parity = 20 usable drives per HBA, and mirroring
the two sides keeps you at 20.

Maximum number of drive failures before data loss (ideal scenario): 5 if a
spare hasn't caught up yet, or 9 if the spares had caught up before more drives
failed.

Suggested setup (at least as far as I'm concerned - and I am kinda new at ZFS,
but not new to storage systems):
5 x raidz2 of 9 disks each = 35 drives usable (9 disks x 5 vdevs = 45 drives
total, minus 5 vdevs x 2 parity drives each)
That leaves 3 drives you can assign as hot spares (assuming 48 drives total).

Maximum number of drive failures before data loss (ideal scenario): 11 if a
spare hasn't caught up yet, or 14 if the spares had caught up before more
drives failed.

Keep in mind that the parity drives eat into raw capacity as well (that's why
45 data disks come out to 35 drives usable), but it sounds like you were after
maximum redundancy, and this setup would give you that.

Sorry, I just saw that you were talking about 12 drives in each chassis.  A
similar approach applies: I would do one 9-drive raidz2 in each chassis, add 2
hot spares total, and then grow the pool 9 drives at a time (adding 1 more
spare at some point).
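
Again only a sketch - pool name and device names are placeholders, and the
c3t* names just stand in for drives you'd add later:

    # one 9-disk raidz2 per chassis to start, plus 2 hot spares
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 \
      spare c1t9d0 c2t9d0
    # later, grow the pool another 9 drives at a time (c3t* = future drives)
    zpool add tank raidz2 c1t10d0 c1t11d0 c2t10d0 c2t11d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
    # and another spare at some point
    zpool add tank spare c3t5d0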

Note: I'm still kinda new to ZFS, so I may be completely wrong... (if I am,
somebody please correct me)

P-Chan