Darren J Moffat wrote:
So with that in mind this is my plan so far.

On the target (the V880):
Put all the 12 36G disks into a single zpool (call it iscsitpool).
Use iscsitadm to create 2 targets of 202G each.
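The target-side steps above might look roughly like the following. This is a sketch, not a tested recipe: the disk device names, backing-store path, and target names are all placeholders, and on builds where the pool supports it you could instead create zvols and set the shareiscsi property rather than using iscsitadm directly.

```shell
# On the V880 (target side) -- device names below are placeholders
zpool create iscsitpool c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0

# Tell the iSCSI target daemon where to keep its backing store
iscsitadm modify admin --base-directory /iscsitpool/targets

# Create two 202 GB targets
iscsitadm create target --size 202g tgt-a
iscsitadm create target --size 202g tgt-b
```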

On the initiator (the v40z):
Use iscsiadm to discover (import) the 2 202G targets.
Create a zpool that is a mirror of the two 202G targets.
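And the initiator side might be sketched like this; again the discovery address and the resulting device names are assumptions (the actual cXtYd0 names only appear after discovery, e.g. via format or iscsiadm list target):

```shell
# On the v40z (initiator side) -- 192.168.1.10 stands in for the V880's address
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 192.168.1.10:3260

# Make the new LUNs visible as devices
devfsadm -i iscsi

# Mirror the two imported targets (device names are placeholders)
zpool create tank mirror c2t1d0 c2t2d0
```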

This just doesn't feel like the best way to do it from an availability viewpoint, and I'm not at all sure what to expect in terms of performance.
So I'm looking for some advice on how best to do this. I suspect that after much discussion some best practice advice might come out of this.

Do you want to optimize space, performance, data availability, or data
retention?

Without knowing your answer, I can say that, in general, it takes longer
to recover bigger LUNs/vdevs.  Availability is very dependent on the recovery
time (smaller is better).  To some degree, data retention follows the same
trend.  So, for data availability and retention, you want to use at least one
spare, and smaller, redundant vdevs.  More redundancy really helps retention,
so triple mirror or raidz2 is preferred with regular scrubbing.
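As a concrete illustration of "smaller, redundant vdevs plus a spare", a pool on the target side could be laid out like this instead of one big stripe (disk names are placeholders):

```shell
# Several small mirrored vdevs, ZFS stripes across them dynamically;
# one disk is held back as a hot spare
zpool create iscsitpool \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0 \
    spare  c1t8d0

# Regular scrubbing catches latent errors while redundancy still exists
zpool scrub iscsitpool
```

A raidz2 layout (e.g. `zpool create iscsitpool raidz2 c1t0d0 ... c1t5d0 spare c1t8d0`) trades some of the mirror's recovery speed for better space efficiency while keeping double redundancy.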

This (small vdevs) might also fit well with the dynamic striping for 
performance,
but I have no performance data to support my assumption.
 -- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss