Mike Gerdts wrote:
A couple scenarios from environments that I work in, using "legacy"
file systems and volume managers:

1) Various test copies need to be on different spindles to remove any
perceived or real performance impact one might impose on the other.
Arguably, by having the I/O activity spread across all the spindles
there would be fewer bottlenecks.  However, if you are trying to
simulate the behavior of X production spindles, doing so with 1.3X
or 2X spindles is not a proper comparison.  Hence, being wasteful and
getting suboptimal performance may be desirable.  If you don't
understand that logic, you haven't worked in a big enough company or
studied Dilbert enough.  :)

Here it makes sense to be using X spindles. However, a clone filesystem performs the same as a non-clone filesystem, so if you have enough space on those X spindles for the clone, I don't think there's any need for additional "separation".
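For example, a test copy backed by a clone might look something like this (the pool and dataset names are only placeholders):

  # zfs snapshot tank/prod@testcopy
  # zfs clone tank/prod@testcopy tank/testcopy

The clone shares its unmodified blocks with the snapshot, so it lives on the same X spindles and initially consumes almost no additional space.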

Of course, this may not eliminate an imagined performance difference (e.g., your Dilbert reference :-), in which case you can simply use 'zfs send | zfs recv' to send the snapshot to a suitably isolated pool or machine.
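A minimal sketch of that, assuming a separate pool named 'scratch' exists on its own disks for the isolated copy:

  # zfs snapshot tank/prod@isolate
  # zfs send tank/prod@isolate | zfs recv scratch/prod

The received copy in 'scratch' is a fully independent filesystem, so I/O against it doesn't touch the production pool's spindles.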

2) One of the copies of the data needs to be portable to another
system while the original stays put.  This could be done to refresh
non-production instances from production, or to perform backups in such
a way that they don't put load on the production spindles, networks,
etc.

This is a case where you should be using multiple pools (possibly on the same host) and 'zfs send | zfs recv' between them. In some cases, rather than physically attaching the storage to the destination machine, you may be able to use the network to move the data, e.g. 'zfs send | ssh dest zfs recv'.
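A sketch of the network variant, with made-up host and pool names, plus an incremental follow-up to refresh the copy later:

  # zfs snapshot tank/prod@refresh1
  # zfs send tank/prod@refresh1 | ssh devhost zfs recv devpool/prod

  # zfs snapshot tank/prod@refresh2
  # zfs send -i tank/prod@refresh1 tank/prod@refresh2 | \
      ssh devhost zfs recv -F devpool/prod

The incremental send only moves blocks changed since @refresh1, and 'recv -F' rolls the destination back to that snapshot first, discarding any changes made on devhost in the meantime.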

--matt