On Fri, Jun 17, 2011 at 03:29:04PM -0400, Doug Hughes wrote:
> I just learned about this today. It's a JBOD, so perfect for a big
> zpool, highly dense, and should be easy to maintain. It's remarkably
> similar to a DDN drawer in many ways (might be same manufacturer).
> 120TB raw in 4U.
> http://www.raidinc.com/products/storage-solutions/ebod/4u-ebod/
I've been looking at dense storage myself; specifically, the 36-in-4U stuff
from SuperMicro:

http://www.supermicro.com/products/chassis/4U/847/SC847E16-RJBOD1.cfm
(external disk-only chassis with 45 disks)

or

http://www.supermicro.com/products/chassis/4U/847/SC847E2-R1400LP.cfm
which is a 2U server chassis integrated into the above (ends up making for
only 36 disks).

I haven't actually gotten any of these in for testing, but these are what it
looks like I'm going to be using. Plain old hot-swap on the front and back of
the server: I rack the thing, screw it in, and I don't slide it out until
it's time to replace the whole box.

Now, I had eliminated top-loading designs like the one you mention above from
consideration because I really don't like sliding servers out of the rack
while they are live, and having to shut down a 35+ disk array just to swap
one disk would be very, very inconvenient. Besides, I haven't found any place
around here that can get me a better cost per watt at densities higher than
3840 watts per rack usable, and that's a low enough density that I'm not too
worried about a few U.

Those of you who actually use top-load disk bins, how does that work out? Are
your cable management arms really good enough that you can reliably slide the
enclosure out while the system is running? Or do you just leave in a lot of
hot spares and migrate the data off when you start running out of spares?
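For what it's worth, the hot-spare approach that last question alludes to is
easy to express in ZFS terms. The sketch below is purely illustrative, not
anything from this thread: the pool name, the placeholder device names
(c1t0d0 and so on), and the layout of four 10-disk raidz2 vdevs plus five
spares for a 45-bay JBOD are all assumptions.

    # Hypothetical pool layout for a 45-bay JBOD (placeholder device names):
    # four 10-disk raidz2 vdevs plus five hot spares.
    zpool create tank \
        raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0  c1t6d0  c1t7d0  c1t8d0  c1t9d0  \
        raidz2 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 \
        raidz2 c1t20d0 c1t21d0 c1t22d0 c1t23d0 c1t24d0 c1t25d0 c1t26d0 c1t27d0 c1t28d0 c1t29d0 \
        raidz2 c1t30d0 c1t31d0 c1t32d0 c1t33d0 c1t34d0 c1t35d0 c1t36d0 c1t37d0 c1t38d0 c1t39d0 \
        spare  c1t40d0 c1t41d0 c1t42d0 c1t43d0 c1t44d0

    # When a drive faults, a spare is pulled in automatically (on Solaris this
    # is handled by the FMA retire agent), so nothing has to be slid out of
    # the rack right away.

    # autoreplace=on: if the dead disk is eventually swapped in the same
    # physical slot, ZFS resilvers onto the new device without a manual
    # "zpool replace".
    zpool set autoreplace=on tank

    # Or replace the faulted disk by hand once the box is opened up, and
    # watch the resilver; the spare drops back to the spare pool afterward.
    zpool replace tank c1t7d0
    zpool status tank

With enough spares assigned up front, you can batch the physical swaps until
several have been consumed, which is the "migrate off / swap later" pattern
the question describes.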
