Personally, I did 2 6-disk raidz2 vdevs in my pool with Samsung 1.5TB drives. 
With these large drives, I don't trust single parity, and I wanted a little 
more performance than a single vdev would be able to provide. While it would 
certainly saturate a single gigabit link, I do a fair bit of processing on the 
machine itself and faster disk I/O helps there. There is also the future 
possibility of doing bonded links if I buy a managed switch, so I would have 2 
or more gigabit links to this server. Probably unnecessary, but nice to have 
the option. 
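The layout described above can be sketched as a single zpool command; this is an illustrative sketch only, and the pool name "tank" and the cNtNdN device names are hypothetical placeholders for your actual disks:

```shell
# Create a pool striped across two 6-disk raidz2 vdevs
# ("tank" and all device names are placeholders)
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Confirm the two-vdev layout
zpool status tank
```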

>From reading, it sounds like a raidz is about as fast as the slowest drive in 
>the array. Striping more than one raidz together is about the same as adding 
>the performance of the arrays. As a rough idea it seems to work. So if you 
>don't mind being I/O bound to the speed of a single drive, go for it. 
>Personally, I wouldn't trust even raidz2 with 14 1TB drives. I'd go to raidz3. 
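The rule of thumb quoted above can be put into rough numbers. A small sketch, assuming the layouts under discussion and an illustrative 100 MB/s per-drive streaming figure (these are assumptions, not measurements):

```python
def raidz_usable_tb(disks, parity, drive_tb):
    """Usable capacity of one raidz vdev, ignoring metadata overhead."""
    return (disks - parity) * drive_tb

def pool_streaming_mbps(vdevs, per_drive_mbps):
    """Rough rule from the list: each raidz vdev streams roughly like a
    single drive, and striped vdevs roughly add together."""
    return vdevs * per_drive_mbps

# One 14-disk raidz3 of 1 TB drives vs. two 7-disk raidz2 vdevs:
print(raidz_usable_tb(14, 3, 1.0))       # 11.0 TB usable, one vdev's speed
print(raidz_usable_tb(7, 2, 1.0) * 2)    # 10.0 TB usable, two vdevs' speed
print(pool_streaming_mbps(1, 100))       # ~100 MB/s
print(pool_streaming_mbps(2, 100))       # ~200 MB/s
```

So the two-vdev layout trades about 1 TB of usable space for roughly double the streaming throughput, under these assumptions.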

The other thing to consider for a home server, is that to upgrade storage space 
on this machine you have to do a full vdev at once. So if you made a single 14 
disk raidz, you have to upgrade ALL 14 drives to see any additional space in 
the array. With the 2 array setup, you can upgrade 7 drives at a time, and the 
old drives become cold spares, or can be put into an older machine for backups. 
That was a big reason I went with 2 arrays: it's MUCH easier to get funding for 
6 drives than 12. Yes, I "lost" 4 disks to parity. However, I now have 8 disks of 
very secure, redundant data. I found that to be a reasonable trade for not 
having to rip all those CDs, DVDs and BDs again. :) 
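The staged upgrade works because a raidz vdev grows once every member drive is larger. A command sketch, assuming a hypothetical pool "tank" and placeholder device names:

```shell
# Let the vdev grow automatically once all its drives are replaced
zpool set autoexpand=on tank

# Swap drives in ONE vdev, one at a time; wait for each resilver
# to finish before pulling the next old drive
zpool replace tank c1t0d0 c3t0d0
zpool status tank    # repeat for the remaining drives in that vdev

# When the last drive in the vdev is replaced, the extra space appears
```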

I also used my old Linux fileserver parts to build a backup machine for really 
important data. I intend to move it off-site soon and back up to it with 
CrashPlan.

Keep in mind that with the large drives we have today, replacing one is going 
to take many hours of heavy I/O. And if another drive fails while you're doing 
that... bye-bye data. There were some great articles posted in the ZFS list a 
while back about the time to data loss with various parity levels. I found it 
helpful in deciding my strategy. 
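A back-of-the-envelope sketch of why replacements take "many hours": even a naive lower bound of sequentially rewriting the whole drive is long, and real resilvers on a busy pool are slower still. The 100 MB/s rebuild rate is an assumed, optimistic figure:

```python
def resilver_hours(drive_tb, rebuild_mbps):
    """Naive lower bound: sequentially rewriting the whole drive."""
    bytes_total = drive_tb * 1e12          # decimal TB, as drives are sold
    seconds = bytes_total / (rebuild_mbps * 1e6)
    return seconds / 3600

# A 1.5 TB drive rebuilt at an optimistic 100 MB/s:
print(round(resilver_hours(1.5, 100), 1))  # 4.2 hours, minimum
```

Every one of those hours is a window in which a second (or with raidz2, third) drive failure loses the pool, which is the argument for more parity on big drives.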

Remember to scrub periodically. I have it set in cron for once a month. I just 
log into the server and check with "zpool status" to make sure things are still 
working well.
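A monthly scrub can be wired up with a crontab entry like the following; the pool name "tank" and the binary path are placeholders to adjust for your system:

```shell
# Crontab entry: scrub on the 1st of each month at 02:00
# ("tank" is a placeholder pool name)
0 2 1 * * /usr/sbin/zpool scrub tank

# Check progress and results afterward
zpool status tank
```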
-- 
This message posted from opensolaris.org
_______________________________________________
opensolaris-help mailing list
[email protected]