On 6/18/2011 3:11 PM, [email protected] wrote:
> On Fri, Jun 17, 2011 at 03:29:04PM -0400, Doug Hughes wrote:
>> I just learned about this today. It's a JBOD, so perfect for a big
>> zpool, highly dense, and should be easy to maintain. It's remarkably
>> similar to a DDN drawer in many ways (might be same manufacturer).
>> 120TB raw in 4U.
>> http://www.raidinc.com/products/storage-solutions/ebod/4u-ebod/
>
> I've been looking at dense storage myself; I've been looking
> at the 36 in 4u stuff from supermicro:
> http://www.supermicro.com/products/chassis/4U/847/SC847E16-RJBOD1.cfm
> (external disk only chassis with 45 disks)
>
> or
> http://www.supermicro.com/products/chassis/4U/847/SC847E2-R1400LP.cfm
> which is a 2u server chassis integrated into the above (ends up making
> for only 36 disks)
>
> I mean, I haven't actually gotten any of these in for testing yet, but these
> are what it looks like I'm going to be using.  Plain old hot-swap on
> the front and back of the server.  I rack the thing, screw it in,
> and I don't slide it out until it's time to replace the whole box.
>
>
> Now, I had eliminated top-loading designs like the one you mention above
> from consideration because I really don't like sliding servers out of the
> rack while they are live, and having to shut down a 35+ disk array just to
> swap one disk would be very, very inconvenient.    Besides, I haven't
> found any place around here that can get me a better cost per watt at
> densities higher than 3840 watts per rack usable, and that's low enough
> density that I'm not too worried about a few U.
>

I have 2 of the ones you pointed out (4U chassis + 4U expansion). I run 
Solaris x86 on them as an archive server. It's a very inexpensive box, 
but it has a few problems:
1) the failure rate of the 2TB drives is pretty high (I know this isn't 
specifically a problem with the box)
2) the multiple levels of indirection make it difficult to be sure you 
are pulling the right disk. You can either map each physical disk to its 
own logical disk in MegaCLI and put your zpool on those, or you can 
create bigger RAID-6 units (10 disks per unit works out pretty well and 
leaves 1 spare). There are many little idiosyncrasies, like trying to 
balance across the front, the back, and the channels on the SAS RAID 
card, but the worst is identifying a failed disk. You have to use MegaCLI 
to light the disk's locate LED, and if the disk is dead the controller 
can't talk to it, so you can't light it. With the EBOD chassis you always 
have a completely consistent mapping that works, and, if it's the same as 
the DDN chassis and/or thumper, the LED is on the mobo and presented up 
through a light guide, which is superior for identification. Plus, the 
stencils inside the lids clearly identify the controller/channel 
positions. For reasons I don't understand, the mapping to controllers and 
channels in the supermicro sometimes skips a slot between the MegaCLI 
numbering and the Solaris device names. A rough sketch of the per-disk 
MegaCLI mapping and the locate step is below.
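
For anyone fighting the same mapping problem, here's a rough sketch of 
the MegaCLI side. The install path, the controller number (-a0), and the 
enclosure:slot address [252:4] are just placeholders for whatever your 
own PDList output shows, so treat this as an outline rather than a 
recipe:

  # List physical drives; note the "Enclosure Device ID" and "Slot Number"
  # fields -- that pair is the [E:S] address the other commands want.
  /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | \
      egrep 'Enclosure Device|Slot Number|Firmware state'

  # Per-disk mapping: export a disk as its own single-drive RAID-0 logical
  # drive so the OS sees one device per physical disk (repeat per slot).
  /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0 [252:4] -a0

  # Show which physical [E:S] addresses back which logical drive, so the
  # Solaris cXtYdZ device can be tied back to a physical slot.
  /opt/MegaRAID/MegaCli/MegaCli64 -LdPdInfo -a0

  # Blink the locate LED on a suspect disk before pulling it. This is the
  # step that fails when the disk is completely dead, because the
  # controller can no longer address it.
  /opt/MegaRAID/MegaCli/MegaCli64 -PdLocate -start -physdrv[252:4] -a0
  /opt/MegaRAID/MegaCli/MegaCli64 -PdLocate -stop -physdrv[252:4] -a0

The zpool then goes on top of those logical drives with a plain zpool 
create; keeping your own slot-to-device table on the side helps, since 
the MegaCLI and Solaris numbering don't always line up.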

I have no problem sliding the chassis out. We have 24 thumpers and 20 
DDN trays, each with a vertical pull-out, and have never had a problem. 
Cable management arms keep everything secure.

YMMV.