2012-05-25 21:45, Sašo Kiselkov wrote:
On 05/25/2012 07:35 PM, Jim Klimov wrote:
Sorry, I can't comment on MPxIO, except that I thought ZFS could by
itself discern two paths to the same drive, if only to protect
against double-importing the disk into a pool.

Unfortunately, it isn't the same thing. MPxIO provides redundant
signaling to the drives, independent of the storage/RAID layer above
it, so it does have its place (besides simply increasing throughput).

Yes, I know - I just don't have hands-on experience with that
in Solaris (and only limited experience in Linux); there aren't
many dual-link boxes around here :)

I'd use lower protection if it were available :)
> The data on that array is not very important, the primary design
> parameter is low cost per MB.

Why not just stripe it all, then? That would give good speeds ;)
Arguably, mirroring would indeed cost about twice as much per MB,
but as a tradeoff that may be useful to you, it can also give
a lot more IOPS due to more TLVDEVs being available for striping,
and double the read speed thanks to mirroring.
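To make the tradeoff concrete, here is a rough sketch of the two layouts (the pool name `tank` and the `cXtYdZ` device names are hypothetical placeholders, not from the original thread):

```shell
# Pure stripe: all raw capacity is usable, maximum space per dollar,
# but a single drive failure loses the whole pool.
zpool create tank c0t0d0 c0t1d0 c0t2d0 c0t3d0

# Stripe of mirrors: half the capacity, but two top-level vdevs
# (TLVDEVs) to stripe writes across, and reads can be serviced by
# either side of each mirror, roughly doubling read throughput.
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
```

Both forms accept any number of devices; ZFS stripes across however many top-level vdevs the pool contains.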

We're in a very demanding IO environment, we need large
quantities of high-throughput, high-IOPS storage, but we don't need
stellar reliability.

Does your array include SSD L2ARC caches?

I guess (and want to be corrected if wrong) that since ZFS can
tolerate the loss of L2ARC devices so gracefully that mirroring them
is not even supported, you may get away with several single-link SSDs
connected to one or another controller (likely a dedicated one, other
than those driving the disk arrays, since IOPS on the few SSDs will
be higher than on tens of disks). Likely you shouldn't connect those
single-link (SATA) SSDs to the dual-link backplane either - i.e. mount
them in the server chassis, not in the JBOD box.
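For reference, cache (L2ARC) devices are simply added to an existing pool; a minimal sketch, again with hypothetical pool and device names:

```shell
# Attach two SSDs as L2ARC cache devices; losing either one later
# only costs cached data, never pool data, so no mirroring is needed.
zpool add tank cache c1t0d0 c1t1d0

# List the pool layout; cache vdevs appear in their own section
# below the data vdevs.
zpool iostat -v tank
```

Cache devices can also be removed again at any time with `zpool remove`.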

I may be wrong though :)

If the pool gets corrupted due to an unfortunate double-drive
failure, well, that's tough, but not unbearable (the pool stores
customer channel recordings for nPVR, so nothing critical, really).


zfs-discuss mailing list
