On 08/10/10 10:09 PM, Phil Harman wrote:
On 10 Aug 2010, at 10:22, Ian Collins <i...@ianshome.com> wrote:

On 08/10/10 09:12 PM, Andrew Gabriel wrote:
Another option - use the new 2TB drives to swap out the existing 1TB drives.
If you can find another use for the swapped-out drives, this works well,
and avoids ending up with a sprawl of lower-capacity drives as your pool
grows in size. This is what I do at home. The freed-up drives get used
in other systems and for off-site backups. Over the last 4 years, I've
upgraded from 1/4TB to 1/2TB, and now to 1TB drives.
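
For the archives, the swap-out goes roughly like this (a sketch only:
the pool name "tank" and the Solaris device names are made up, and the
autoexpand property needs a recent enough zpool version):

    # Replace one old drive at a time and wait for each resilver:
    zpool replace tank c1t2d0 c2t2d0
    zpool status tank    # repeat for the next drive only once
                         # the resilver has completed
    # Once every drive in the vdev is larger, let the pool use the
    # extra space (on older zpool versions, an export/import of the
    # pool picks up the new size instead):
    zpool set autoexpand=on tank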

I have been doing the same.

The reason I mentioned performance (and I did mean 6 drives!) is that, in
order to get some space on a budget, I replaced one mirror in a stripe
with bigger drives. The others soon became nearly full, most of the IO
went to the bigger pair, and I lost nearly all the benefit of the stripe.
I have also grown stripes and seen similar issues, and I had to remove
and replace large chunks of data to even things out.
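
The imbalance is easy to see for yourself: per-vdev statistics show
almost all the writes landing on the emptier, bigger mirror (pool name
hypothetical again):

    # Per-vdev IO statistics, sampled every 5 seconds; with one
    # near-full mirror and one mostly empty bigger mirror, nearly
    # all new writes go to the bigger pair:
    zpool iostat -v tank 5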

I really think mixing vdev sizes is a bad idea.
I'd agree if this were a new pool, but this question was about expanding
an existing pool (which is nearly full and where the performance is
presumably acceptable).

Adding another vdev, whatever its size, is a simple zero-downtime option
for growing the pool (adding another pool would fragment the namespace).
With a similar number of spindles in a similar RAID configuration,
performance is unlikely to get worse; indeed, as already noted, it is
likely to get better until the new vdev fills up.
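
For reference, it is a one-liner (hypothetical pool and device names);
the pool grows immediately and new writes favour the empty vdev:

    # Add another mirror vdev to the existing pool:
    zpool add tank mirror c3t0d0 c3t1d0

It's worth double-checking the command first, since a top-level vdev
cannot be removed from a pool once it has been added.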

The best option for growing a pool is often swapping out the drives for
larger ones, which is also a zero-downtime option.

Many systems only need to be good enough, not optimal. The best is often
the enemy of the good. Anyone using RAIDZn is cost conscious to some
degree (or why not just go for a huge stripe of 4-way mirrored SSDs and
be done with it?).

That depends on the situation. If a particular topology was chosen to
give a capacity/performance trade-off, degrading one or the other may
not be acceptable.

--
Ian.

