Given that I have lots of ProLiant equipment, are there any recommended
controllers that would work in this situation? Is this an issue unique to
the Smart Array controllers? If I do choose to use some level of hardware
RAID on the existing Smart Array P400, what's the best way to use it with
ZFS (assume 8 disks with an emphasis on capacity)?

-- 
Edmund William White
ewwh...@mac.com



> From: Craig Morgan <craig.mor...@sun.com>
> Date: Tue, 27 Jan 2009 13:54:46 +0000
> To: Edmund White <ewwh...@mac.com>
> Cc: Alex <a...@pancentric.com>, <zfs-discuss@opensolaris.org>
> Subject: Re: [zfs-discuss] Problems using ZFS on Smart Array P400
> 
> You need to step back and appreciate that the problem lies in how you
> are presenting disks to Solaris, and not necessarily in ZFS itself.
> 
> As your storage system is incapable of JBOD operation, you have
> decided to present each disk as a 'simple' RAID0 volume. Whilst this
> looks like a 'pass-thru' access method to the disk and its contents,
> it is far from it. The HW RAID sub-system creates a logical volume
> backed by that single spindle (in exactly the same way it would for
> multiple spindles, aka a stripe), and the RAID system records metadata
> describing the make-up of said volume.
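
For illustration, a minimal sketch of what this per-disk RAID0 layout ends
up looking like from the ZFS side (the cXtYdZ device names below are
placeholders and depend on how the controller enumerates the logical
volumes):

    # Each of the 8 physical disks is wrapped in its own RAID0 logical
    # drive by the Smart Array firmware, so Solaris sees 8 separate devices.
    # A capacity-oriented pool over those devices could be a single raidz:
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                            c1t4d0 c1t5d0 c1t6d0 c1t7d0
    zpool status tank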
> 
> The important issue here is that you have a non-redundant RAID (!)
> config, hence a single failure (in this case your single spindle
> failure) causes the RAID sub-system to declare the volume, and with it
> its operational status, as failed; this in turn is reported to the OS
> as a failed volume. At this juncture, intervention is normally
> necessary to destroy and re-create the volume (remember, no
> redundancy---so this is manual!) and hence re-present it to the OS
> (which will assign a new UID to the volume and treat it as a new
> device). On occasion it may be possible to intervene and "resurrect"
> a volume by manually overriding the status of the RAID0 volume, but
> on many HW RAID systems this is not to be recommended.
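
As a rough sketch of the manual intervention described above, seen from the
ZFS side (this assumes the pool carries its own redundancy, e.g. raidz, so
a resilver is possible; device names are placeholders):

    # The re-created volume comes back as a new device, so ZFS still shows
    # the old volume as failed and it must be replaced explicitly:
    zpool status tank                  # old volume shown as UNAVAIL/FAULTED
    zpool replace tank c1t3d0 c2t0d0   # old device, then the re-created one
    zpool status tank                  # watch the resilver complete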
> 
> In short, you've got more abstraction layers in place than you need or
> desire, and that is fundamentally the cause of your problem. Either
> plump for a simpler array, or present redundant RAID sets from your
> array and accept the consequences: increased admin and complexity, and
> some loss of transparency and protection in the ZFS layer. Hopefully
> the RAID sub-system will then be capable of automated recovery in most
> simple failure scenarios.
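
If you take the redundant-RAID-set route, a minimal sketch of the resulting
ZFS layout (assuming one redundant logical drive, e.g. RAID 5 or ADG/RAID 6,
is built across the 8 disks with the HP array tools; the device name below
is a placeholder):

    # ZFS sees a single large LUN and the pool has no ZFS-level redundancy,
    # so it cannot self-heal, but a failed disk is rebuilt automatically by
    # the controller after a hot swap:
    zpool create tank c1t0d0
    zpool status tank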
> 
> Craig
> 
> On 27 Jan 2009, at 13:00, Edmund White wrote:
> 
> 
> -- 
> Craig
> 
> Craig Morgan
> t: +44 (0)791 338 3190
> f: +44 (0)870 705 1726
> e: craig.mor...@sun.com
> 


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
