[zfs-discuss] Recommendations for Storage Pool Config

2010-06-26 Thread tie...@lotas-smartman.net
Good morning all.

This question has probably popped up before, but maybe not in this exact way...

I am planning on building a SAN for my home meta centre, and have some of the 
RAID cards I need for the build. I will be ordering the case soon, and then the 
drives. The cards I have are two 8-port PCI-Express cards (a Dell PERC 5 and an 
Adaptec card...). The case will have 20 hot-swap SAS/SATA drives, and I will be 
adding a third RAID controller to support the full 20 drives.

I have read something about setting up redundancy across the RAID 
controllers, i.e., having zpools spanning multiple controllers. Given I won't be 
using the on-board RAID features of the cards, I am wondering how this should 
be set up...

I was thinking of four zpools, each a 5-drive RAIDZ2 taking 2+2+1 drives 
across the three controllers. This way, I could lose a controller and not lose 
any data from the pools... But is this theory correct? If I were to use 2 TB 
drives, each zpool would be 10 TB raw and 6 TB usable... giving me a total of 
40 TB raw and 24 TB usable...
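
In command terms, each pool would be something like this (the device names 
are invented; c1/c2/c3 stand for the three controllers):

# one of the four proposed pools: a 5-drive RAIDZ2 vdev spanning
# the controllers 2+2+1, so losing one controller costs at most
# two disks, which RAIDZ2 can absorb
zpool create pool1 raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0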

Is this overkill? Should I be worrying about losing a controller?

Thanks in advance.

--Tiernan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommendations for Storage Pool Config

2010-06-26 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Tiernan OToole
>
> I have read something about setting up redundancy across the RAID
> controllers, i.e., having zpools spanning multiple controllers. Given I
> won't be using the on-board RAID features of the cards, I am wondering
> how this should be set up.

Option 1:
Suppose you have controllers A, B, and C.
Suppose you have 8 disks per controller, numbered 0 through 7.
Configure your controllers for JBOD mode (so the OS can see each individual
disk), or configure each individual disk as a 1-disk raid0 or raid1, which is
essentially the same as JBOD.  The point is, the OS needs to see each
individual disk.

mirror A0 B0 C0 mirror A1 B1 C1 mirror A2 B2 C2 ...

You will have the total usable capacity of 8 disks, and 16 disks of redundancy.
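
In zpool terms, that would look something like this (the Solaris-style
device names are invented; c1/c2/c3 stand for controllers A/B/C):

# three-way mirrors, one disk from each controller per vdev
zpool create tank \
    mirror c1t0d0 c2t0d0 c3t0d0 \
    mirror c1t1d0 c2t1d0 c3t1d0 \
    mirror c1t2d0 c2t2d0 c3t2d0
# ...and so on through disk 7; any one controller can fail and every
# vdev still has two live disks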

Option 2:

raidz A0 B0 C0 raidz A1 B1 C1 raidz A2 B2 C2 ...

You will have the total usable capacity of 16 disks, and 8 disks of redundancy.
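
As the same kind of sketch, with the same invented device names:

# three-disk raidz vdevs, one disk from each controller
zpool create tank \
    raidz c1t0d0 c2t0d0 c3t0d0 \
    raidz c1t1d0 c2t1d0 c3t1d0
# ...and so on through disk 7; each raidz vdev tolerates one failed
# disk, so it survives the loss of any single controller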

Option 3:

If you have less than 8 disks of redundancy (one disk per controller in each
vdev, as above), you won't be protected against controller failure.  It's a
calculated risk: a crashed controller would result in the zpool going offline,
and probably an OS crash.  I don't know the probability of data loss in that
scenario, but I know none of this should be a problem if you *do* have
controller redundancy.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommendations for Storage Pool Config

2010-06-26 Thread Roy Sigurd Karlsbakk
> I am planning on building a SAN for my home meta centre, and have some
> of the RAID cards I need for the build. I will be ordering the case
> soon, and then the drives. The cards I have are two 8-port PCI-Express
> cards (a Dell PERC 5 and an Adaptec card...). The case will have 20
> hot-swap SAS/SATA drives, and I will be adding a third RAID controller
> to support the full 20 drives.

First, you won't need a RAID controller. Just get something cheap with lots of 
ports; it'll suffice, as ZFS will do the smart stuff.

> I was thinking of four zpools, each a 5-drive RAIDZ2 taking 2+2+1
> drives across the three controllers. This way, I could lose a
> controller and not lose any data from the pools... But is this theory
> correct? If I were to use 2 TB drives, each zpool would be 10 TB raw
> and 6 TB usable... giving me a total of 40 TB raw and 24 TB usable...

I don't see the point of using more than one pool. Just use more vdevs in the 
same pool; redundancy will be just as good as with multiple pools, and a single 
pool is far more flexible.
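
As a sketch with invented device names, that means growing one pool a vdev
at a time instead of creating four pools:

# one pool built from RAIDZ2 vdevs that each span the controllers
zpool create tank raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0
# later, grow the same pool rather than making a new one
zpool add tank raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0 c3t1d0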
 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommendations for Storage Pool Config

2010-06-25 Thread Cindy Swearingen

Tiernan,

Hardware redundancy is important, but I would be thinking about how you
are going to back up data in the 6-24 TB range, if you actually need
that much space.

Balance your space requirements against good redundancy and how much data
you can safely back up, because stuff happens: hardware fails, power
fails, and you can lose data.

More suggestions:

1. Test some configs for your specific data/environment.

2. Start with smaller mirrored pools, which offer redundancy, good
performance, and more flexibility.

With a SAN, I would assume you are using multiple systems. Did you mean
meta centre or media centre?

3. Consider a mirrored source pool and then create snapshots that you
send to a mirrored backup pool on another system. Mirrored pools can be
easily expanded when you need more space.
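
For example (the pool, snapshot, and host names here are made up):

# snapshot the mirrored source pool and replicate it to a mirrored
# backup pool on another system
zfs snapshot -r tank@2010-06-25
zfs send -R tank@2010-06-25 | ssh backuphost zfs receive -Fd backup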

4. If you are running a recent OpenSolaris build, you could use the
zpool split command to attach and detach disks from your source pool to
replicate it on another system, in addition to doing more regular
snapshots of source data.
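
A minimal sketch, assuming a mirrored pool named tank and made-up disk
names:

# attach an extra disk to each mirror and let it resilver, then
# split those disks off into a new, exported pool
zpool attach tank c1t0d0 c4t0d0
zpool split tank tank2
# move the split disks to the other system and import the pool there
zpool import tank2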

Thanks,

Cindy

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss