On Thu, 07 Jul 2011 21:57:44 -0400, Ryan <[email protected]> wrote:

Did anyone see the Facebook announcement and Open Compute Project?

http://opencompute.org/

I'm curious whether those of you with more experience have any thoughts on it.

I've taken a look at it, as we're looking to expand into a new rack.

For our specific needs, it doesn't scale down to our size. There are two reasons:

1: We don't own the datacenter, we use a colo.
2: We're not running enough servers to get the economies of scale.

Because point 1 is where a large chunk of the efficiencies are to be found, and point 2 puts us well below the scale where the rest kick in, adopting it at our current size doesn't make much sense. That said, once our next product takes the industry by storm and customers are beating a path to our electronic door, THEN it starts making a lot more sense. Get us beyond, say, *waves hands* 4 racks, and a server system as described will yield definite efficiencies.

Looking at smaller scales, there are vendors out there that attempt to do all of the above in a single unified frame: the blade-rack vendors selling all-in-one systems with compute, network, and storage in one chassis. They give you rack-level power efficiency when you can't control the facilities or are more space-constrained than power-constrained. At our scale, under 4 racks, systems like these provide a lot of punch for the money and power.

In summation: a great idea if you're large enough to build your own datacenter, with some nice tips if you're not.

Greg Riedesel

On Thu, Jul 7, 2011 at 9:05 PM, Doug Hughes <[email protected]> wrote:
On 7/7/2011 5:32 PM, Matt Lawrence wrote:
On Thu, 7 Jul 2011, Eric Sproul wrote:

I'd also add that for heterogeneous equipment, favor putting equipment
with similar rack depth together. There's nothing worse than having a
short 1U server sandwiched between two deeper ones and not being able
to get your hand in there to fiddle with cables and such. It's also
an airflow no-no, as the outflow from the short server will tend to
warm up the chassis above and below it.

Very true. But I tend to consider the 1U form factor to be a bad idea
for computers anyway.

Speaking of airflow, what's the feeling on blanking panels? Does it
matter only "at scale", i.e. more than 2-3 racks in a small room, or
is it always a good idea? The profit margins on what is basically a
chunk of painted sheet metal are rather insulting...

I've been known to buy sheets of "acrylic glazing" and make my own.

We use them extensively, but we have to in order to get the density we want. We are spec'd for up to 25 kW per rack, and you can't do that without separation of cold and hot air, and that means complete separation. We use bulk mouse-pad material to drape above the racks at the partition between the two spaces, and foam to fill uneven spots between and under racks. Blanking panels are a must for us. Yes, they run about $8 each for 1U, but the bigger the space you cover, the cheaper they get per unit. There are many sellers of the Middle Atlantic version, which are cheaper than the APC ones.

IMO, separation is always a good idea. A colo like Equinix doesn't have it, so they can only get 4-6 kW per rack (8 if really pushed), and the PUE is somewhere around 2 to 2.5. We have a PUE of about 1.3. Yahoo and Google each have datacenters that are close to unity (free cooling).
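
(Aside: PUE is just total facility power divided by IT load, so everything above 1.0 is cooling and distribution overhead. Here's a quick back-of-the-envelope sketch in Python using the 25 kW rack figure and the PUE values above; the near-unity 1.1 is only a stand-in for the free-cooling designs, not a measured number:

    # PUE = total facility power / IT equipment power.
    # Overhead is everything that isn't the IT load itself
    # (chillers, fans, UPS and distribution losses).
    def overhead_kw(it_load_kw, pue):
        return it_load_kw * (pue - 1.0)

    it_load = 25.0  # kW per rack, as spec'd above

    for pue in (2.5, 1.3, 1.1):  # colo, our room, near-unity stand-in
        print("PUE %.1f: %.1f kW overhead per %.0f kW rack"
              % (pue, overhead_kw(it_load, pue), it_load))

At 25 kW of IT load that's roughly 37.5 kW of overhead at a PUE of 2.5, 7.5 kW at 1.3, and 2.5 kW near unity, which is where the hot/cold separation pays for itself.)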






--
Law of Probable Dispersal:
Whatever it is that hits the fan will not be evenly distributed.