Did anyone see Facebook's announcement of the Open Compute Project? http://opencompute.org/
I'm curious whether those of you with more experience have any thoughts on it.

Ryan Peck
BS Information Security and Forensics, May 2012
Rochester Institute of Technology

On Thu, Jul 7, 2011 at 9:05 PM, Doug Hughes <[email protected]> wrote:

> On 7/7/2011 5:32 PM, Matt Lawrence wrote:
>
>> On Thu, 7 Jul 2011, Eric Sproul wrote:
>>
>>> I'd also add that for heterogeneous equipment, favor putting equipment
>>> with similar rack depth together. There's nothing worse than having a
>>> short 1U server sandwiched between two deeper ones and not being able
>>> to get your hand in there to fiddle with cables and such. It's also
>>> an airflow no-no, as the outflow from the short server will tend to
>>> warm up the chassis above and below it.
>>
>> Very true. But I tend to consider the 1U form factor to be a bad idea
>> for computers anyway.
>
>>> Speaking of airflow, what's the feeling on blanking panels? Does it
>>> matter only "at scale", i.e. more than 2-3 racks in a small room, or
>>> is it always a good idea? The profit margins on what is basically a
>>> chunk of painted sheet metal are rather insulting...
>>
>> I've been known to buy sheets of "acrylic glazing" and make my own.
>
> We use them extensively, but have to in order to get the density we
> want. We are spec'd for up to 25 kW per rack, and you can't have that
> without separation of cold and hot, and that means complete separation.
> We use bulk mouse pad material to drape above the racks at the
> partition between the spaces, and foam to fill uneven spots between and
> under racks. Blanking panels are a must for us. Yes, they run about
> $8 ea. for 1U, but the bigger the space, the cheaper per unit. There
> are many sellers of the Middle Atlantic version of them which are
> cheaper than the APC ones.
>
> IMO, separation is always a good idea. A colo like Equinix doesn't have
> it and so they can only get 4-6 kW (8 if really pushed) per rack, and
> the PUE is somewhere around 2 to 2.5. We have a PUE of about 1.3.
> Yahoo and Google have different datacenters that are close to unity
> (free cooling).
>
> _______________________________________________
> Discuss mailing list
> [email protected]
> https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
> This list provided by the League of Professional System Administrators
> http://lopsa.org/
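For anyone unfamiliar with the metric being compared above: PUE (Power Usage Effectiveness) is just total facility power divided by IT equipment power, so 1.0 means every watt goes to the servers. A minimal sketch, using illustrative figures rather than anyone's actual measurements:

```python
# PUE = total facility power / IT equipment power.
# The kW figures below are made up for illustration only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio; 1.0 is the theoretical ideal (no overhead)."""
    return total_facility_kw / it_equipment_kw

# A room with full hot/cold separation, like the one described (PUE ~1.3):
print(pue(1300.0, 1000.0))  # -> 1.3

# An uncontained colo carrying the same IT load (PUE ~2.0):
print(pue(2000.0, 1000.0))  # -> 2.0
```

At the same 1000 kW of IT load, that difference is 700 kW of cooling and distribution overhead, which is why the separation work pays for itself.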
_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
http://lopsa.org/
