I am trying to determine the best way to move forward with about 35 x86 X4200s.
Each box has 4x 73GB internal drives.

All the boxes will be built with Solaris 10 11/06. Additionally, these boxes 
are part of a highly available production environment with an uptime 
expectation of six nines (only a few seconds of unscheduled downtime allowed 
per month).
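As a back-of-the-envelope sanity check (my own arithmetic, not from the original figures), six nines of availability works out to roughly the stated budget:

```python
# Downtime budget implied by "six nines" (99.9999%) availability.
availability = 0.999999

seconds_per_month = 30 * 24 * 60 * 60   # 2,592,000 s in a 30-day month
seconds_per_year = 365 * 24 * 60 * 60   # 31,536,000 s

downtime_month = seconds_per_month * (1 - availability)
downtime_year = seconds_per_year * (1 - availability)

# ~2.6 seconds per month, ~31.5 seconds per year
print(f"~{downtime_month:.1f} s/month, ~{downtime_year:.1f} s/year")
```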

Ideally, I would like to use a single RAID-Z2 pool across all 4 disks, but 
apparently booting from that is not supported yet. I understand there is the 
ZFSmount software for making a ZFS root, but I don't think I want to use that 
in an environment of this grade, and I can't wait until Sun ships integrated 
ZFS boot later this year...I have to use 11/06.

For perspective, these systems currently run pure UFS, with only 2 of the 4 
disks in use as a software RAID 1:
/ = 5GB
/var = 5GB
/tmp = 4GB
/home = 2GB
/data = 50GB

I am looking for recommendations on how to maximize the use of ZFS and minimize 
the use of UFS without resorting to anything "experimental".

So, assuming that each 73GB disk yields 70GB of usable space: would it make 
sense to create a 5GB UFS root partition that is a 4-way mirror across all 4 
disks? I haven't used SVM to create this type of mirror before, so if anyone 
has experience here, let me know. My expectation is that up to any 3 of the 4 
disks could fail while leaving the root partition intact; every time root is 
updated, that data would simply be written to the other three disks as well.
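For what it's worth, I imagine the 4-way SVM mirror would look something like the sketch below. The device names (c1t0d0 through c1t3d0, root on slice 0, state database replicas on slice 7) are placeholders I made up; they would need to match the actual disk layout.

```shell
# State database replicas, spread across all four disks (slice 7 here):
metadb -a -f -c 2 c1t0d0s7 c1t1d0s7 c1t2d0s7 c1t3d0s7

# One single-slice concat per disk, to serve as a submirror:
metainit d11 1 1 c1t0d0s0
metainit d12 1 1 c1t1d0s0
metainit d13 1 1 c1t2d0s0
metainit d14 1 1 c1t3d0s0

# Create the mirror on the first submirror, then attach the other three:
metainit d10 -m d11
metattach d10 d12
metattach d10 d13
metattach d10 d14

# For root, point /etc/vfstab and /etc/system at the metadevice:
metaroot d10
```

This is only a command fragment; it obviously can't be run outside the target boxes, so treat it as a starting point to check against the SVM documentation.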

This would leave each disk with 65GB of free space. I would then create a 4GB 
UFS /tmp (swap) partition, 4-way mirrored across all 4 disks just as I am 
suggesting above for the root partition. So again, up to any 3 disks could 
fail and the swap filesystem would still be intact.

This would leave each disk with 61GB of free space, 244GB in total. I would 
then create a single ZFS pool from all the remaining free space on each of the 
4 disks.
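Here is my rough comparison of the usable capacity each candidate layout would give, assuming about 61GB of leftover space per disk (70GB usable minus the 5GB root and 4GB swap slices). Parity and metadata overhead are ignored; this is only the raw geometry.

```python
per_disk = 61   # GB of leftover space per disk (assumed)
disks = 4

four_way_mirror = per_disk               # 1 copy usable, 3 redundant
two_mirror_stripe = per_disk * 2         # stripe of two 2-way mirrors
raidz2 = per_disk * (disks - 2)          # 2 disks' worth of parity
raidz_plus_spare = per_disk * (3 - 1)    # 3-disk raidz + 1 hot spare

print(four_way_mirror, two_mirror_stripe, raidz2, raidz_plus_spare)
```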

How should this be done? 

Perhaps some form of mirroring? What would be the difference between:
zpool create tank mirror c1d0 c2d0 c3d0 c4d0
or
zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0

Would it be better to use RAID-Z with a hot spare, or RAID-Z2?

I would like /data, /home, and /var to be able to grow as needed and to 
withstand at least 2 disk failures (it doesn't have to be any 2). I am open to 
using a hot spare.
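To make the two options concrete, I picture them roughly like this (the slice names, e.g. c1t0d0s3 for the leftover slice on each disk, are hypothetical placeholders; pick one layout, not both):

```shell
# Option A: RAID-Z2 across all four slices.
# Survives any 2 simultaneous disk failures.
zpool create tank raidz2 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3

# Option B: 3-disk RAID-Z plus a hot spare.
# Survives 1 failure at a time; after the spare resilvers in,
# a second failure can be absorbed.
zpool create tank raidz c1t0d0s3 c1t1d0s3 c1t2d0s3 spare c1t3d0s3

# Either way, the filesystems share the pool and grow as needed:
zfs create tank/data
zfs create tank/home
zfs create tank/var
```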

Suggestions?
 
 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
