Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins

On 08/10/10 06:21 PM, Terry Hull wrote:
I am wanting to build a server with 16 1TB drives, with 2 x 8-drive 
RAID Z2 arrays striped together. However, I would like the capability 
of adding additional stripes of 2TB drives in the future. Will this be 
a problem? I thought I read it is best to keep the stripes the same 
width and was planning to do that, but I was wondering about using 
drives of different sizes. These drives would all be in a single pool.


It would work, but you run the risk of the smaller drives becoming full 
and all new writes going to the bigger vdev. So while usable, 
performance would suffer.


One option would be to add 2TB drives as 5 drive raidz3 vdevs. That way 
your vdevs would be approximately the same size and you would have the 
optimum redundancy for the 2TB drives.
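For reference, the layout being discussed would be created along these lines. This is a sketch only: the pool name and device names (c0t0d0 through c0t15d0) are hypothetical and would need to match the real system.

```shell
# Two 8-drive raidz2 top-level vdevs in a single pool; ZFS stripes
# new writes across both vdevs automatically.
zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
    raidz2 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0
```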


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Phil Harman

On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote:


On 08/10/10 06:21 PM, Terry Hull wrote:
I am wanting to build a server with 16 1TB drives, with 2 x 8-drive 
RAID Z2 arrays striped together. However, I would like the capability 
of adding additional stripes of 2TB drives in the future. Will this be 
a problem? I thought I read it is best to keep the stripes the same 
width and was planning to do that, but I was wondering about using 
drives of different sizes. These drives would all be in a single pool.


It would work, but you run the risk of the smaller drives becoming 
full and all new writes going to the bigger vdev. So while usable, 
performance would suffer.


Almost by definition, the 1TB drives are likely to be getting full  
when the new drives are added (presumably because of running out of  
space).


Performance can only be said to suffer relative to a new pool built  
entirely with drives of the same size. Even if he added 8x 2TB drives  
in a RAIDZ3 config it is hard to predict what the performance gap will  
be (on the one hand: RAIDZ3 vs RAIDZ2, on the other: an empty group vs  
an almost full, presumably fragmented, group).


One option would be to add 2TB drives as 5 drive raidz3 vdevs. That  
way your vdevs would be approximately the same size and you would  
have the optimum redundancy for the 2TB drives.


I think you meant 6, but I don't see a good reason for matching the  
group sizes. I'm for RAIDZ3, but I don't see much logic in mixing  
groups of 6+2 x 1TB and 3+3 x 2TB in the same pool (in one group I  
appear to care most about maximising space, in the other I'm  
maximising availability)


The other issue is that of hot spares. In a pool of mixed size drives  
you either waste array slots (by having spares of different sizes) or  
plan to have unavailable space when small drives are replaced by large  
ones.





Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Andrew Gabriel

Phil Harman wrote:

On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote:


On 08/10/10 06:21 PM, Terry Hull wrote:
I am wanting to build a server with 16 1TB drives, with 2 x 8-drive 
RAID Z2 arrays striped together. However, I would like the 
capability of adding additional stripes of 2TB drives in the future. 
Will this be a problem? I thought I read it is best to keep the 
stripes the same width and was planning to do that, but I was 
wondering about using drives of different sizes. These drives would 
all be in a single pool.


It would work, but you run the risk of the smaller drives becoming 
full and all new writes going to the bigger vdev. So while usable, 
performance would suffer.


Almost by definition, the 1TB drives are likely to be getting full 
when the new drives are added (presumably because of running out of 
space).


Performance can only be said to suffer relative to a new pool built 
entirely with drives of the same size. Even if he added 8x 2TB drives 
in a RAIDZ3 config it is hard to predict what the performance gap will 
be (on the one hand: RAIDZ3 vs RAIDZ2, on the other: an empty group vs 
an almost full, presumably fragmented, group).


One option would be to add 2TB drives as 5 drive raidz3 vdevs. That 
way your vdevs would be approximately the same size and you would 
have the optimum redundancy for the 2TB drives.


I think you meant 6, but I don't see a good reason for matching the 
group sizes. I'm for RAIDZ3, but I don't see much logic in mixing 
groups of 6+2 x 1TB and 3+3 x 2TB in the same pool (in one group I 
appear to care most about maximising space, in the other I'm 
maximising availability)


Another option - use the new 2TB drives to swap out the existing 1TB drives.
If you can find another use for the swapped out drives, this works well, 
and avoids ending up with sprawling lower capacity drives as your pool 
grows in size. This is what I do at home. The freed-up drives get used 
in other systems and for off-site backups. Over the last 4 years, I've 
upgraded from 1/4TB, to 1/2TB, and now on 1TB drives.
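The swap-out approach described above is typically a series of zpool replace operations, one disk at a time, waiting for each resilver to finish. A sketch, with hypothetical pool and device names:

```shell
# Replace one 1TB disk with a 2TB disk, then wait for the resilver
# to complete before touching the next drive:
zpool replace tank c0t0d0 c1t0d0
zpool status tank          # shows resilver progress

# Once every disk in a vdev has been replaced, the vdev can grow.
# On builds that support the autoexpand property, enabling it lets the
# pool pick up the extra capacity automatically:
zpool set autoexpand=on tank
```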



--
Andrew Gabriel


Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins

On 08/10/10 09:12 PM, Andrew Gabriel wrote:

Phil Harman wrote:

On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote:

On 08/10/10 06:21 PM, Terry Hull wrote:
I am wanting to build a server with 16 1TB drives, with 2 x 8-drive 
RAID Z2 arrays striped together. However, I would like the 
capability of adding additional stripes of 2TB drives in the 
future. Will this be a problem? I thought I read it is best to keep 
the stripes the same width and was planning to do that, but I was 
wondering about using drives of different sizes. These drives would 
all be in a single pool.


It would work, but you run the risk of the smaller drives becoming 
full and all new writes going to the bigger vdev. So while usable, 
performance would suffer.


Almost by definition, the 1TB drives are likely to be getting full 
when the new drives are added (presumably because of running out of 
space).


Performance can only be said to suffer relative to a new pool built 
entirely with drives of the same size. Even if he added 8x 2TB drives 
in a RAIDZ3 config it is hard to predict what the performance gap 
will be (on the one hand: RAIDZ3 vs RAIDZ2, on the other: an empty 
group vs an almost full, presumably fragmented, group).


One option would be to add 2TB drives as 5 drive raidz3 vdevs. That 
way your vdevs would be approximately the same size and you would 
have the optimum redundancy for the 2TB drives.


I think you meant 6, but I don't see a good reason for matching the 
group sizes. I'm for RAIDZ3, but I don't see much logic in mixing 
groups of 6+2 x 1TB and 3+3 x 2TB in the same pool (in one group I 
appear to care most about maximising space, in the other I'm 
maximising availability)


Another option - use the new 2TB drives to swap out the existing 1TB 
drives.
If you can find another use for the swapped out drives, this works 
well, and avoids ending up with sprawling lower capacity drives as 
your pool grows in size. This is what I do at home. The freed-up 
drives get used in other systems and for off-site backups. Over the 
last 4 years, I've upgraded from 1/4TB, to 1/2TB, and now on 1TB drives.



I have been doing the same.

The reason I mentioned performance (and I did mean 6 drives!) is that, 
in order to get some space on a budget, I replaced one mirror in a 
stripe with bigger drives.  The others soon became nearly full and most 
of the IO went to the bigger pair, so I lost nearly all the benefit of 
the stripe.  I have also grown stripes and seen similar issues, and I 
had to remove and replace large chunks of data to even things out.
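The imbalance described here can be observed directly: zpool iostat -v breaks capacity and I/O out per vdev, so a nearly full vdev attracting little of the write bandwidth is easy to spot (pool name hypothetical):

```shell
# Per-vdev capacity and I/O statistics, refreshed every 5 seconds.
# A nearly full vdev shows little free space, while most new-write
# bandwidth goes to the emptier vdev.
zpool iostat -v tank 5
```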


I really think mixing vdev sizes is a bad idea.

--
Ian.



Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Phil Harman
On 10 Aug 2010, at 10:22, Ian Collins i...@ianshome.com wrote:

 On 08/10/10 09:12 PM, Andrew Gabriel wrote:
 Phil Harman wrote:
 On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote:
 On 08/10/10 06:21 PM, Terry Hull wrote:
 I am wanting to build a server with 16 1TB drives, with 2 x 8-drive RAID 
 Z2 arrays striped together. However, I would like the capability of 
 adding additional stripes of 2TB drives in the future. Will this be a 
 problem? I thought I read it is best to keep the stripes the same width 
 and was planning to do that, but I was wondering about using drives of 
 different sizes. These drives would all be in a single pool.
 
 It would work, but you run the risk of the smaller drives becoming full 
 and all new writes going to the bigger vdev. So while usable, performance 
 would suffer.
 
 Almost by definition, the 1TB drives are likely to be getting full when the 
 new drives are added (presumably because of running out of space).
 
 Performance can only be said to suffer relative to a new pool built 
 entirely with drives of the same size. Even if he added 8x 2TB drives in a 
 RAIDZ3 config it is hard to predict what the performance gap will be (on 
 the one hand: RAIDZ3 vs RAIDZ2, on the other: an empty group vs an almost 
 full, presumably fragmented, group).
 
 One option would be to add 2TB drives as 5 drive raidz3 vdevs. That way 
 your vdevs would be approximately the same size and you would have the 
 optimum redundancy for the 2TB drives.
 
 I think you meant 6, but I don't see a good reason for matching the group 
 sizes. I'm for RAIDZ3, but I don't see much logic in mixing groups of 6+2 x 
 1TB and 3+3 x 2TB in the same pool (in one group I appear to care most 
 about maximising space, in the other I'm maximising availability)
 
 Another option - use the new 2TB drives to swap out the existing 1TB drives.
 If you can find another use for the swapped out drives, this works well, and 
 avoids ending up with sprawling lower capacity drives as your pool grows in 
 size. This is what I do at home. The freed-up drives get used in other 
 systems and for off-site backups. Over the last 4 years, I've upgraded from 
 1/4TB, to 1/2TB, and now on 1TB drives.
 
 I have been doing the same.
 
 The reason I mentioned performance (and I did mean 6 drives!) is that, in 
 order to get some space on a budget, I replaced one mirror in a stripe with 
 bigger drives.  The others soon became nearly full and most of the IO went 
 to the bigger pair, so I lost nearly all the benefit of the stripe.  I have 
 also grown stripes and seen similar issues, and I had to remove and replace 
 large chunks of data to even things out.
 
 I really think mixing vdev sizes is a bad idea.

I'd agree if this was a new pool, but this question was about expanding an 
existing pool (which is nearly full and where the performance is presumably 
acceptable).

Adding another vdev, whatever its size, is a simple zero downtime option for 
growing the pool (adding another pool would fragment the name space). With a 
similar number of spindles in a similar RAID configuration, performance is 
unlikely to get worse, indeed (as already noted) it is likely to get better 
until the new vdev fills up.
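Growing the pool this way is a single command (pool and device names hypothetical):

```shell
# Add a new top-level raidz2 vdev of 2TB drives to the existing pool.
# Takes effect immediately, with no downtime; note that vdevs cannot
# be removed again afterwards, so check the command carefully.
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
```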

Many systems only need to be good enough, not optimum. The best is often the 
enemy of the good. Anyone using RAIDZn is cost conscious to some degree (or why 
not just go for a huge stripe of 4-way mirrored SSDs and be done with it?)


Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Terry Hull
 
 I am wanting to build a server with 16 1TB drives, with 2 x 8-drive
 RAID Z2 arrays striped together.   However, I would like the capability
 of adding additional stripes of 2TB drives in the future.  Will this be
 a problem?   I thought I read it is best to keep the stripes the same
 width and was planning to do that, but I was wondering about using
 drives of different sizes.  These drives would all be in a single pool.

There is no problem.

Even if you wanted to add all sorts of randomly sized and shaped vdevs in the
future, it would work.  The reason they recommend sticking with one type of
configuration is because of performance characteristics.  That's not
necessarily to say it performs better with only a single type; it's more that
you don't know what to expect if you have a raidz striped with a mirror
and a raidz2, etc.



Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Terry Hull

 From: Phil Harman phil.har...@gmail.com
 Date: Tue, 10 Aug 2010 09:24:52 +0100
 To: Ian Collins i...@ianshome.com
 Cc: Terry Hull t...@nrg-inc.com, zfs-discuss@opensolaris.org
 zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] RAID Z stripes
 
 On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote:
 
 On 08/10/10 06:21 PM, Terry Hull wrote:
 I am wanting to build a server with 16 1TB drives, with 2 x 8-drive
 RAID Z2 arrays striped together. However, I would like the capability
 of adding additional stripes of 2TB drives in the future. Will this be
 a problem? I thought I read it is best to keep the stripes the same
 width and was planning to do that, but I was wondering about using
 drives of different sizes. These drives would all be in a single pool.
 
 It would work, but you run the risk of the smaller drives becoming
 full and all new writes going to the bigger vdev. So while usable,
 performance would suffer.
 
 Almost by definition, the 1TB drives are likely to be getting full
 when the new drives are added (presumably because of running out of
 space).
 
 Performance can only be said to suffer relative to a new pool built
 entirely with drives of the same size. Even if he added 8x 2TB drives
 in a RAIDZ3 config it is hard to predict what the performance gap will
 be (on the one hand: RAIDZ3 vs RAIDZ2, on the other: an empty group vs
 an almost full, presumably fragmented, group).
 
 One option would be to add 2TB drives as 5 drive raidz3 vdevs. That
 way your vdevs would be approximately the same size and you would
 have the optimum redundancy for the 2TB drives.
 
 I think you meant 6, but I don't see a good reason for matching the
 group sizes. I'm for RAIDZ3, but I don't see much logic in mixing
 groups of 6+2 x 1TB and 3+3 x 2TB in the same pool (in one group I
 appear to care most about maximising space, in the other I'm
 maximising availability)
 
 The other issue is that of hot spares. In a pool of mixed size drives
 you either waste array slots (by having spares of different sizes) or
 plan to have unavailable space when small drives are replaced by large
 ones.


So do I understand correctly that really the right thing to do is to build
a pool not only with a consistent stripe width, but also to build it with
drives of only one size?   It also sounds like, from a practical point of
view, building the pool full-sized is the best policy, so that the data
can be spread relatively uniformly across all the drives from the very
beginning.  In my case, I think what I will do is to start with the 16
drives in a single pool and, when I need more space, I'll create a new pool
and manually move some of the existing data to the new pool to spread
the IO load.

The other issue here seems to be RAIDZ2 vs RAIDZ3.  I assume there is not a
significant performance difference between the two for most loads, but
rather I choose between them based on how badly I want the array to stay
intact.  
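The space side of that trade-off is simple arithmetic: an N-drive raidzP vdev yields roughly (N - P) drives' worth of usable capacity. Illustrative only; it ignores metadata overhead, allocation padding, and the TB-vs-TiB distinction:

```shell
# Rough usable capacity of an 8-drive vdev of 1TB disks at each parity level.
drives=8
size_tb=1
for parity in 2 3; do
  echo "raidz${parity}: $(( (drives - parity) * size_tb )) TB usable per ${drives}-drive vdev"
done
```

So for an 8-wide vdev the step from raidz2 to raidz3 costs one drive of space in exchange for tolerating a third concurrent failure.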

-
Terry





Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins

On 08/10/10 10:09 PM, Phil Harman wrote:

On 10 Aug 2010, at 10:22, Ian Collins i...@ianshome.com wrote:


On 08/10/10 09:12 PM, Andrew Gabriel wrote:

Another option - use the new 2TB drives to swap out the existing 1TB drives.
If you can find another use for the swapped out drives, this works well, and 
avoids ending up with sprawling lower capacity drives as your pool grows in 
size. This is what I do at home. The freed-up drives get used in other systems 
and for off-site backups. Over the last 4 years, I've upgraded from 1/4TB, to 
1/2TB, and now on 1TB drives.

   

I have been doing the same.

The reason I mentioned performance (and I did mean 6 drives!) is that, in 
order to get some space on a budget, I replaced one mirror in a stripe with 
bigger drives.  The others soon became nearly full and most of the IO went to 
the bigger pair, so I lost nearly all the benefit of the stripe.  I have also 
grown stripes and seen similar issues, and I had to remove and replace large 
chunks of data to even things out.

I really think mixing vdev sizes is a bad idea.
 

I'd agree if this was a new pool, but this question was about expanding an 
existing pool (which is nearly full and where the performance is presumably 
acceptable).

Adding another vdev, whatever its size, is a simple zero downtime option for 
growing the pool (adding another pool would fragment the name space). With a 
similar number of spindles in a similar RAID configuration, performance is 
unlikely to get worse, indeed (as already noted) it is likely to get better 
until the new vdev fills up.

   
The best option for growing a pool is often swapping out the drives for 
larger ones, which is also a zero down time option.



Many systems only need to be good enough, not optimum. The best is often the 
enemy of the good. Anyone using RAIDZn is cost conscious to some degree (or why 
not just go for a huge stripe of 4-way mirrored SSDs and be done with it?)


That depends on the situation.  If a particular topology was chosen to 
give a capacity/performance trade off, degrading one or another may not 
be acceptable.


--
Ian.



Re: [zfs-discuss] RAID Z stripes

2010-08-10 Thread Ian Collins

On 08/11/10 05:16 AM, Terry Hull wrote:

So do I understand correctly that really the right thing to do is to build
a pool not only with a consistent stripe width, but also to build it with
drives of only one size?   It also sounds like, from a practical point of
view, building the pool full-sized is the best policy, so that the data
can be spread relatively uniformly across all the drives from the very
beginning.  In my case, I think what I will do is to start with the 16
drives in a single pool and, when I need more space, I'll create a new pool
and manually move some of the existing data to the new pool to spread
the IO load.

   

That is what I have done when Thumpers fill up!


The other issue here seems to be RAIDZ2 vs RAIDZ3.  I assume there is not a
significant performance difference between the two for most loads, but
rather I choose between them based on how badly I want the array to stay
intact.

   
The real issue is how long large capacity drives take to resilver, and 
whether the risk of losing a second drive during that window is high enough 
to cause concern. In a lot of situations with 2TB drives, it is.


--
Ian.
