Absolutely agree… these are not “normal” cabinets for customers… although if 
you have a cage, you have more freedom to do what you want within it (within 
reason), which is why these cabinets were chosen/built.

 

From a DC perspective, yeah, the more you can jam in the better (again, within 
reason)

 

;)

 

From: Af [mailto:[email protected]] On Behalf Of Eric Kuhnke
Sent: May 19, 2016 3:28 PM
To: [email protected]
Subject: Re: [AFMUG] Data center temperatures

 

Agreed from an end-user perspective, but not so much from a $$$/sq ft revenue 
perspective as a datacenter operator. With the very highest density/high power 
cabinets I've seen recently, the cable management is actually not so bad. For 
hypervisor platforms, what used to be 4 x 1000BaseT connections and maybe a 5th 
cable for OOB in a previous-generation design is now a few 10GbE and 40Gb links 
to a TOR switch over regular, thin, yellow two-strand singlemode spaghetti.

 

 

On Thu, May 19, 2016 at 12:23 PM, Josh Reynolds <[email protected]> wrote:

Extra wide cabinets are awesome for cable management.

On Thu, May 19, 2016 at 12:42 PM, Paul Stewart <[email protected]> wrote:
> The cabinets are 50 or 52U in size – a custom size, I know for sure… extra
> wide too, which is nice
>
>
>
> When filled (pure SSD, almost 200TB raw capacity) they draw around 16kW of
> power :)
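>
> A rough sanity check of what 16kW per cabinet implies for power feeds,
> sketched in Python; the single-phase 208V feeds and the 80%
> continuous-load derating are assumptions for illustration, not figures
> from the thread:
>
>     import math
>
>     cabinet_kw = 16.0   # fully loaded SSD cabinet from above
>     volts = 208         # assumed single-phase 208V feeds
>     derate = 0.8        # assumed 80% continuous-load derating
>     usable_kw = {amps: volts * amps * derate / 1000 for amps in (30, 60)}
>     # -> {30: 4.992, 60: 9.984} kW usable per circuit
>     feeds_60a = math.ceil(cabinet_kw / usable_kw[60])
>     print(feeds_60a)    # 2 -- i.e. at least a pair of 208V 60A whips
>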
>
>
>
>
>
> From: Af [mailto:[email protected]] On Behalf Of Eric Kuhnke
> Sent: May 14, 2016 7:50 PM
> To: [email protected]
> Subject: Re: [AFMUG] Data center temperatures
>
>
>
> How does a 44U cabinet need 208V 60A for storage arrays?
>
> In a 4U chassis the maximum number of hard drives (front and rear) is about 60 x 3.5"...
>
> Say each drive is 7.5W TDP, that's 450W of drives. Add another 200W for
> controller/motherboard and fans. 650W in 4U.
>
> 44 / 4 = 11
>
> Multiply by 650:
>
> 7150W
>
> More realistically, with a normal number of drives (like 40 per 4U), a single
> 208V 30A circuit is sufficient:
>
> 208 x 30 = 6240W
>
> Run at max 0.85 load on the circuit, so
>
> 6240 x 0.85 = 5304W
>
> In a really dense 2.5" environment all of the above is of course invalid;
> you could probably need up to 7900W per cabinet.
> Then there's 52U cabinets as well...
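>
> The arithmetic above as a minimal Python sketch; the per-drive TDP and
> controller overhead are the assumed figures from this message:
>
>     drives_per_4u = 60          # max 3.5" drives front+rear in a 4U chassis
>     watts_per_drive = 7.5       # assumed TDP per drive
>     controller_watts = 200      # controller/motherboard and fans
>     chassis_watts = drives_per_4u * watts_per_drive + controller_watts  # 650W
>     cabinet_watts = (44 // 4) * chassis_watts   # 11 chassis -> 7150W
>
>     circuit_watts = 208 * 30                    # 6240W on a 208V 30A circuit
>     usable_watts = circuit_watts * 0.85         # 5304W at 0.85 load
>     print(cabinet_watts, usable_watts)          # 7150 5304
>
> For comparison, the 40-drives-per-4U case works out to 11 x 500W = 5500W,
> right at the edge of one derated circuit, while max density clearly
> exceeds it.
>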
>
> On May 13, 2016 6:16 PM, "Paul Stewart" <[email protected]> wrote:
>

> Yup … general trends in new data centers are pushing those temperatures
> higher for efficiency, helped along by better designs …
>
>
>
> One of our data centers runs at 78F with no issues – each cabinet is
> standard 208V 30A as you mention, but individual cabinets can go much higher
> if needed (i.e. 208V 60A for storage arrays)
>
>
>
> From: Af [mailto:[email protected]] On Behalf Of Eric Kuhnke
> Sent: May 11, 2016 5:15 PM
>
>
> To: [email protected] <mailto:[email protected]> 
> Subject: Re: [AFMUG] Data center temperatures
>
>
>
> There have been some fairly large data-set studies showing that, across huge
> numbers of servers, an air intake temperature of 77-78F does not correlate
> with a statistically significant increase in failure rate.
>
> http://www.datacenterknowledge.com/archives/2008/09/18/intel-servers-do-fine-with-outside-air/
>
> http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/
>
> How/what you do for cooling is definitely dependent on the load. Designing a
> colo facility to use a full 208V 30A circuit per cabinet (5.5kW) in a
> hot/cold air separated configuration is very different from 'normal' older
> facilities that are one large open room.
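>
> For the per-cabinet budget, a quick Python sketch of what a 208V 30A
> circuit yields under a few common derating assumptions (the derating
> factors are illustrative, not from the thread):
>
>     volts, amps = 208, 30
>     for derate in (1.0, 0.85, 0.8):
>         print(derate, round(volts * amps * derate))  # 6240, 5304, 4992 W
>     # the ~5.5kW planning figure above sits between the full circuit
>     # rating and an NEC-style 80% continuous-load derating
>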
>
>
>
> On Wed, May 11, 2016 at 1:58 PM, Ken Hohhof <[email protected]> wrote:
>
> I’m not sure you can answer the question without knowing the max heat load
> per cabinet and how you manage airflow in the cabinets.
>
>
>
> AFAIK it used to be standard practice to keep data centers as cold as
> possible without requiring people to wear parkas, but energy efficiency is a
> consideration now.
>
>
>
>
>
> From: That One Guy /sarcasm
>
> Sent: Wednesday, May 11, 2016 3:51 PM
>
> To: [email protected] <mailto:[email protected]> 
>
> Subject: Re: [AFMUG] Data center temperatures
>
>
>
> apparently 72 is the ideal for our NOC; I set our thermostat to 60 and
> it always gets turned back to 72, so I just say fuck it, I wanted new gear
> in the racks anyway
>
>
>
> On Wed, May 11, 2016 at 3:46 PM, Larry Smith <[email protected]> wrote:
>
> On Wed May 11 2016 15:37, Josh Luthman wrote:
>> Just curious what the ideal temp is for a data center.  Our really nice
>> building that Sprint ditched ranges from 60 to 90F (on a site monitor).
>
> I try to keep my NOC room at about 62F; that puts many of the CPUs
> at 83 to 90F.  Many of the bigger places I visit will generally be 55 to
> 60F.
> Loads of computers (data center type) are primarily groupings of little
> heaters...
>
> --
> Larry Smith
> [email protected] <mailto:[email protected]> 
>
>
>
>
>
> --
>
> If you only see yourself as part of the team but you don't see your team as
> part of yourself you have already failed as part of the team.
>
>

 
