Yes, exactly. In a 300 square foot suite at a major IX point you can easily
run out of power and cooling before you run out of space, which means you
can't densely populate several cabinets with 1RU servers.

In real-world use nobody sane will load a 30A circuit beyond about 84% of
its capacity, roughly 5,200W on a 208V feed, and every one of those watts
ends up as heat. And power is only half of the problem; you still need a
way to reject that heat.
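
As a back-of-the-envelope sketch of that budget (the derating factor and
per-server draw below are assumptions, not measurements):

  # Rough power budget for one 208V 30A circuit.
  CIRCUIT_VOLTS = 208
  CIRCUIT_AMPS = 30
  DERATE = 0.84          # assumed fraction of breaker capacity you'll load
  WATTS_PER_1RU = 350    # assumed draw of a loaded dual-socket 1RU server

  usable_watts = CIRCUIT_VOLTS * CIRCUIT_AMPS * DERATE    # ~5,240W
  servers = int(usable_watts // WATTS_PER_1RU)            # ~14 servers

  print(f"{usable_watts:.0f}W usable -> about {servers} x 1RU per circuit")
  # Every watt drawn is heat the suite's HVAC has to remove.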

With -48VDC gear, one typical configuration is a 208V 30A circuit feeding a
rectifier shelf, wired to the rear inputs of two 2500W rectifier modules,
with a second 208V 30A circuit feeding the other two rectifier modules in
that same shelf.
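
A quick sketch of the arithmetic for that shelf (the conversion efficiency
and the keep-it-on-one-feed sizing rule here are my assumptions):

  # A/B-fed -48VDC rectifier shelf: two 2500W modules per 208V 30A circuit.
  RECTIFIER_WATTS = 2500   # DC output per rectifier module
  MODULES_PER_FEED = 2     # modules wired to each circuit
  FEEDS = 2                # A and B circuits
  EFFICIENCY = 0.95        # assumed AC-to-DC conversion efficiency

  shelf_nameplate_dc = RECTIFIER_WATTS * MODULES_PER_FEED * FEEDS  # 10,000W
  one_feed_dc = RECTIFIER_WATTS * MODULES_PER_FEED                 # 5,000W
  ac_at_breaker = one_feed_dc / EFFICIENCY                         # ~5,260W

  print(f"Shelf nameplate: {shelf_nameplate_dc}W DC")
  print(f"One surviving feed carries up to {one_feed_dc}W DC,")
  print(f"roughly {ac_at_breaker:.0f}W AC at the breaker")

Assuming you want either feed able to carry the whole load on its own, the
usable -48V budget is really one pair of modules, which is why the plant
load is typically kept well under 5kW.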



On Mon, May 21, 2018 at 11:34 AM, Chuck McCown <[email protected]> wrote:

> 208 x 30 = 6240 watts.
>
> That is like having four portable electric heaters running inside your
> rack.
>
> Obviously they have to have HVAC capable of pumping that out of the
> building.
>
> *From:* Eric Kuhnke
> *Sent:* Monday, May 21, 2018 11:50 AM
> *To:* [email protected]
> *Subject:* Re: [AFMUG] OT NOC server choice
>
> In some older carrier hotel and IX sites it is common to run out of power
> and air conditioning capacity in a suite before you run out of physical
> space, or to hit a power-system bottleneck, like needing a $70,000 NRC to
> add air conditioning and riser power capacity before you can turn up any
> more 208V 20A or 208V 30A circuits.
>
> At a certain point it makes clear operating-cost sense to consolidate a
> bunch of older 1RU servers onto a single, newer, much more powerful 1RU
> system (such as a dual-socket box with 32 Xeon cores and 128GB of RAM)
> running xen, kvm or esxi.
>
> On Wed, May 16, 2018 at 5:47 AM, Paul Stewart <[email protected]>
> wrote:
>
>> We utilize a combination of blade systems and 1U/2U servers …. I wouldn’t
>> say blade systems are going away.  Their biggest selling point is
>> space/power (footprint).  If space/power is at a premium (i.e. a 3rd
>> party data center) and you need to put many servers into it, then they
>> can make sense …. In our case this is exactly why we continue to deploy
>> Cisco ACS blade systems in particular – they work well and the footprint
>> is small.
>>
>>
>>
>> In areas where we have abundant space/power, 1U servers are preferred.
>>
>>
>>
>> Paul
>>
>>
>>
>>
>>
>> *From: *Af <[email protected]> on behalf of Josh Luthman <
>> [email protected]>
>> *Reply-To: *<[email protected]>
>> *Date: *Tuesday, May 15, 2018 at 4:04 PM
>> *To: *<[email protected]>
>> *Subject: *Re: [AFMUG] OT NOC server choice
>>
>>
>>
>> Servers for what?
>>
>>
>>
>> Blades are kind of a thing of the past, I think.  It's way easier and
>> cheaper to do something like HA with ESXi.
>>
>>
>>
>>
>> Josh Luthman
>> Office: 937-552-2340
>> Direct: 937-552-2343
>> 1100 Wayne St
>> Suite 1337
>> Troy, OH 45373
>>
>>
>>
>> On Tue, May 15, 2018 at 2:03 PM, <[email protected]> wrote:
>>
>> I need a pair of servers; I’d prefer DC powered but I’m not absolutely
>> stuck on that.  I’d like a nice blade server system with hot standby, etc.
>> It’s been some time since I spec’d out servers.
>>
>>
>>
>> Any suggestions?
>>
>>
>>
>
>
