Lars-Gunnar Persson wrote:
> I would like to go back to my question for a second:
> 
> I checked with my Nexsan supplier and they confirmed that access to
> every single disk in SATABeast is not possible. The smallest entities
> I can create on the SATABeast are RAID 0 or 1 arrays. With RAID 1 I'll
> lose too much disk space and I believe that leaves me with RAID 0 as
> the only reasonable option. But with this insecure RAID format I'll
> need higher redundancy in the ZFS configuration. I think I'll go with
> the following configuration:
> 
> On the Nexsan SATABeast:
> * 14 disks configured in 7 RAID arrays with RAID level 0 (each disk is
> 1 TB which gives me a total of 14 TB raw disk space).
> * Each RAID 0 array configured as one volume.

So what the front end will see is 7 disks of 2TB each.

> 
> On the Sun Fire X4100 M2 with Solaris 10:
> * Add all 7 volumes to one zpool configured as one raidz2 (gives me
> approx. 8.8 TB available disk space)

You'll get 5 LUNs worth of space in this config, or 10TB of usable space.
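
For reference, a first cut of the pool creation would look something like 
this (the pool name and the device names are just placeholders; use whatever 
names format(1M) reports for the seven LUNs on your X4100):

   # One raidz2 vdev built from the seven 2TB RAID-0 LUNs.
   # Replace c2t0d0..c2t6d0 with the actual device names.
   zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

   # Sanity-check the capacity: 7 LUNs - 2 parity = 5 data LUNs x 2TB = ~10TB.
   zpool list tank
   zfs list tank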

> 
> Any comments or suggestions?

Given the hardware constraints (no single-disk volumes allowed) this is a good 
configuration for most purposes.

The advantages/disadvantages are:
. 10TB of usable disk space, out of 14TB purchased.
. At least three hard disk failures are required to lose the ZFS pool.
. Random non-cached read performance will be about 300 IO/sec.
. Sequential reads and writes of the whole ZFS blocksize will be fast (up to 
2000 IO/sec).
. One hard drive failure will cause the used blocks of the whole 2TB LUN (the 
RAID-0 pair) to be resilvered, even though the other half of the pair is not 
damaged.  The surviving half of the pair is also more likely to fail during the 
ZFS resilver because of the increased load (see the status/scrub sketch below).
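
If you do go this way, it's worth watching the pool after any drive swap, 
since the resilver is exactly when the surviving half of a RAID-0 pair is 
under the most stress.  A sketch, assuming the pool is called "tank":

   # Quick health check across all pools.
   zpool status -x

   # Detailed view: resilver progress and per-LUN read/write/checksum errors.
   zpool status -v tank

   # Periodic scrubs catch latent errors on the otherwise idle LUNs.
   zpool scrub tank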

You'll want to pay special attention to the cache settings on the Nexsan.  You 
earlier showed that the write cache is enabled, but IIRC the array doesn't have 
a nonvolatile (battery-backed) cache.  If that's the case, MAKE SURE it's 
hooked up to a UPS that can support it for the 30-second cache flush timeout on 
the array.  And make sure you don't power it down hard.  I think you want to 
uncheck the "ignore FUA" setting, so that FUA requests are respected.  My guess 
is that this will cause the array to properly handle the cache_flush requests 
that ZFS uses to ensure data consistency.
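
On the host side, one related sanity check: make sure cache flushes haven't 
been disabled in ZFS itself, since those flushes are the only protection you 
have if the array's cache really is volatile.  A rough check (assuming the 
zfs_nocacheflush tunable present in recent Solaris 10 updates):

   # Should print 0 (flushes enabled), which is the default.
   echo zfs_nocacheflush/D | mdb -k

   # And make sure nobody set it in /etc/system.
   grep zfs_nocacheflush /etc/system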

--Joe