UNCLASSIFIED

Sorry for the top-posting, but I'll echo Doug's points.

We had something similar a few years back.  I wasn't directly involved,
but the things I did notice were:

1. Locating the failed drive was extremely difficult.  We ran Linux with
md arrays.  The software-side device name was fine, but it didn't
correspond to the physical slot.  As Doug said, MegaCLI probably got in
the way.  (There's a rough sketch of how I'd chase that mapping down
after this list.)

2. It had a single system disk!  When that failed, or started throwing
read errors, faith in the system went out the door, and trying to
rebuild onto another disk was a nightmare.  I can't see where the number
of system disks is mentioned in the specs.
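
To expand on point 1: if I had to do it again, I'd at least script the
step from "md says this member failed" to "this is the serial on the
drive label", so whoever is standing at the rack has something concrete
to match.  A rough, untested Python sketch of the idea, assuming a stock
Linux box with /proc/mdstat and /dev/disk/by-id (treat the details as
assumptions, not a polished tool):

#!/usr/bin/env python3
# Sketch: list md members flagged as failed and the /dev/disk/by-id
# names (which usually embed the model and serial) for the underlying
# disk, so the serial can be matched against the drive label.

import os
import re

BY_ID = "/dev/disk/by-id"

def failed_md_members():
    """Yield device names (e.g. 'sdc1') that md has flagged as failed."""
    with open("/proc/mdstat") as f:
        for line in f:
            # failed members look like 'sdc1[1](F)'
            for dev, _idx in re.findall(r"(\w+)\[(\d+)\]\(F\)", line):
                yield dev

def ids_by_device():
    """Map kernel device name (e.g. 'sdc') -> list of by-id names."""
    mapping = {}
    for name in os.listdir(BY_ID):
        target = os.path.realpath(os.path.join(BY_ID, name))
        mapping.setdefault(os.path.basename(target), []).append(name)
    return mapping

if __name__ == "__main__":
    ids = ids_by_device()
    for member in failed_md_members():
        disk = re.sub(r"\d+$", "", member)   # strip partition: sdc1 -> sdc
        print(member, "on", disk, "->", ids.get(disk, ["no by-id entry"]))

It doesn't solve the "which bay is that" problem on its own, but at
least the serial it prints is something you can verify on the pulled
drive before you trust the slot mapping.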

Fortunately, these systems weren't that important, as the data could be
rebuilt from source, elsewhere.

They ran fine for roughly twelve to eighteen months before we started
seeing issues.  They weren't ours, and I don't want to touch that gear
again; I just don't have the time.

Greg.

-----Original Message-----
From: [email protected] [mailto:[email protected]]
On Behalf Of Doug Hughes
Sent: Sunday, 19 June 2011 8:51 AM
To: [email protected]
Subject: Re: [lopsa-tech] NAS Recommendations

On 6/18/2011 3:11 PM, [email protected] wrote:
> On Fri, Jun 17, 2011 at 03:29:04PM -0400, Doug Hughes wrote:
>> I just learned about this today. It's a JBOD, so perfect for a big 
>> zpool, highly dense, and should be easy to maintain. It's remarkably 
>> similar to a DDN drawer in many ways (might be same manufacturer).
>> 120TB raw in 4U.
>> http://www.raidinc.com/products/storage-solutions/ebod/4u-ebod/
>
> I've been looking at dense storage myself; specifically the 36-in-4U
> stuff from Supermicro:
> http://www.supermicro.com/products/chassis/4U/847/SC847E16-RJBOD1.cfm
> (external disk only chassis with 45 disks)
>
> or
> http://www.supermicro.com/products/chassis/4U/847/SC847E2-R1400LP.cfm
> which is a 2U server chassis integrated into the above (ends up making
> for only 36 disks)
>
> I mean, I haven't actually gotten any of these in for testing, but
> this is what it looks like I'm going to be using.  Plain old hot-swap
> bays on the front and back of the server: I rack the thing, screw it
> in, and I don't slide it out until it's time to replace it.
>
>
> Now, I had eliminated the top-loading designs like the one you mention
> above from consideration, because I really don't like sliding servers
> out of the rack while they are live, and having to shut down a 35+
> disk array just to swap one disk would be very, very inconvenient.
> Besides, I haven't found any place around here that can get me a
> better cost per watt at densities higher than 3840 watts per rack
> usable, and that's a low enough density that I'm not too worried
> about a few U.
>

I have two of the ones you pointed out (4U chassis + 4U expansion),
running Solaris x86 as an archive server. It's a very inexpensive box,
but it has a few problems:

1) The failure rate of the 2TB drives is pretty high (I know this isn't
specifically a problem with the box).

2) The multiple levels of indirection make it difficult to be sure you
are pulling the right disk. You can either map each logical disk to a
single physical disk in MegaCLI and put your zpool on those, or you can
create bigger RAID-6 units (10 disks is a pretty optimal size, and
leaves one spare). There are many little idiosyncrasies, like trying to
balance across the front, the back, and the channels on the SAS RAID
card, but the worst is identifying a failed disk. Since you have to use
MegaCLI to light the disk's locate LED, you can't do that once the disk
is dead. With the EBOD chassis you always have a completely consistent
mapping that works, and, if it's the same as the DDN chassis and/or
thumper, the LED is on the motherboard and presented up through a light
guide, which is far better for identification. Plus, the stencils inside
the lids clearly identify the controller/channel positions. For reasons
I don't understand, the mapping to controllers and channels on the
Supermicro sometimes skips a slot between the MegaCLI mapping and the
Solaris mapping.
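
For what it's worth, when the locate LED is a lost cause, one fallback
is to go by serial number: dump the enclosure/slot and inquiry data for
every physical drive the controller sees and compare the serial against
the label on whatever you pull. A rough Python sketch of that idea; the
MegaCli path and the exact field names in the -PDList output vary by
version, so treat those as assumptions and check against your own box:

#!/usr/bin/env python3
# Sketch: build an enclosure:slot -> inquiry/serial table from MegaCli
# so a dead disk (whose locate LED won't light any more) can still be
# matched to a bay by the serial printed on its label.

import subprocess

MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"   # adjust to wherever yours lives

def physical_disks():
    """Yield one dict per physical drive reported by MegaCli -PDList."""
    out = subprocess.check_output([MEGACLI, "-PDList", "-aALL"]).decode()
    disk = {}
    for raw in out.splitlines():
        line = raw.strip()
        if line.startswith("Enclosure Device ID:"):
            if disk:
                yield disk                    # previous drive is complete
            disk = {"enclosure": line.split(":", 1)[1].strip()}
        elif line.startswith("Slot Number:"):
            disk["slot"] = line.split(":", 1)[1].strip()
        elif line.startswith("Inquiry Data:"):
            disk["inquiry"] = line.split(":", 1)[1].strip()   # model + serial
        elif line.startswith("Firmware state:"):
            disk["state"] = line.split(":", 1)[1].strip()
    if disk:
        yield disk

if __name__ == "__main__":
    for d in physical_disks():
        flag = "  <-- suspect" if "Fail" in d.get("state", "") else ""
        print("[{enc}:{slot}] {inq} ({state}){flag}".format(
            enc=d.get("enclosure"), slot=d.get("slot"),
            inq=d.get("inquiry"), state=d.get("state"), flag=flag))

The nice thing about going by serial is that it works whether or not the
drive still answers on the bus: pull the suspect, read the label, and
put it back if the serial doesn't match.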

I have no problem sliding the chassis out. We have 24 thumpers and 20
DDN trays, each with a vertical pull-out, and have never had a problem.
Cable management arms keep everything secure.

YMMV.

IMPORTANT: This email remains the property of the Department of Defence
and is subject to the jurisdiction of section 70 of the Crimes Act 1914.
If you have received this email in error, you are requested to contact
the sender and delete the email.

_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
