Re: [eX-bulk] : Re: Rasberry pi - high density

2015-05-14 Thread nanog

Greetings,

	Do we really need them to be swappable at that point?  The reason we 
swap HDDs (if we do) is that they are rotational, and mechanical 
things break.  Do we swap CPUs and memory hot?  Do we even replace 
memory on a server that's gone bad, or just pull the whole thing during 
the periodic dead body collection and replace it?  Might it not be 
more efficient (and space saving) to just add 20% more storage to a 
server than the design goal, and let the software use the extra space to 
keep running when an SSD fails?  When the overall storage falls below 
tolerance, the unit is dead.  I think we will soon need to (if we aren't 
already) stop thinking about individual components as FRUs.  The server 
(or rack, or container) is the FRU.
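
As a rough sketch of that tolerance model (in Python; every number below 
is an illustrative assumption, not a figure from this thread):

    # "Overprovision instead of hot-swap": the unit ships with headroom
    # and stays in service until failed SSDs eat it. Illustrative only.
    design_goal_tb = 20.0      # capacity the software actually needs
    overprovision = 0.20       # extra storage built in at deploy time
    ssd_tb = 0.5               # capacity of one SSD module

    installed_tb = design_goal_tb * (1 + overprovision)        # 24.0 TB
    modules = int(installed_tb / ssd_tb)                       # 48 modules
    tolerated = int((installed_tb - design_goal_tb) / ssd_tb)  # 8 failures

    print(f"{modules} modules; unit is 'dead' after failure #{tolerated + 1}")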


Christopher



On 9 May 2015, at 12:26, Eugeniu Patrascu wrote:


On Sat, May 9, 2015 at 9:55 PM, Barry Shein b...@world.std.com wrote:



On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) 
wrote:



So I just crunched the numbers. How many pies could I cram in a 
rack?


For another list I just estimated how many M.2 SSD modules one could
cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
heat and connection routing aren't problems); at 500GB each that's
20TB in a standard 3.5" case.

It's getting weird out there.


I think the next logical step in servers would be to remove the 
traditional hard drive cages and put in SSD module slots that can be 
hot-swapped. Imagine inserting small SSD modules on the front side of 
the servers and connecting them directly via PCIe to the motherboard. 
No more bottlenecks, and a software RAID of some sort would actually 
make a lot more sense than the current controller-based solutions.



--
李柯睿
Avt tace, avt loqvere meliora silentio
Check my PGP key here: http://www.asgaard.org/cdl/cdl.asc
Current vCard here: http://www.asgaard.org/cdl/cdl.vcf
keybase: https://keybase.io/liljenstolpe


Re: [eX-bulk] : Re: Rasberry pi - high density

2015-05-14 Thread charles

On 2015-05-13 19:42, na...@cdl.asgaard.org wrote:

Greetings,

Do we really need them to be swappable at that point?  The reason we
swap HDD's (if we do) is because they are rotational, and mechanical
things break.


Right.


Do we swap CPUs and memory hot?


Nope. Usually just toss the whole thing. Well, I keep spare RAM around 
because it's so cheap. But if a CPU goes, chuck it in the e-waste pile 
in the back.



 Do we even replace

memory on a server that's gone bad, or just pull the whole thing
during the periodic dead body collection and replace it?



Usually swap memory. But yeah, often the hardware ops folks just cull 
old boxes on a quarterly basis and backfill with the latest batch of 
inbound kit. At large scale (which many on this list operate at), you 
have pallets of gear sitting in the to-deploy queue, and another couple 
of pallets' worth racked up but not even imaged yet.


(This is all supposition, of course; I'm used to working with $HUNDREDS 
of racks' worth of gear.) Containers, Moonshot-type things, etc. are 
certainly on the radar.



 Might it

not be more efficient (and space saving) to just add 20% more storage
to a server than the design goal, and let the software use the extra
space to keep running when an SSD fails?


Yes. Also, a few months ago I read an article about several SSD brands 
having $MANY terabytes written to them. Can't find it just now, but they 
seem to take quite a long time (in data written / number of writes) to 
fail.
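
The endurance arithmetic behind that, with hypothetical figures (the TBW 
rating and write rate below are assumptions, not numbers from the article):

    # Time to reach a drive's rated terabytes-written (TBW) at a steady rate.
    tbw_rating_tb = 150.0       # assumed consumer-class endurance rating
    writes_gb_per_day = 50.0    # assumed sustained write load

    days = tbw_rating_tb * 1000 / writes_gb_per_day
    print(f"~{days / 365:.1f} years to hit the rated TBW")   # ~8.2 years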


  When the overall storage

falls below tolerance, the unit is dead.  I think we will soon need to
(if we aren't already) stop thinking about individual components as
FRUs.  The server (or rack, or container) is the FRU.

Christopher



Yes. Agree.

Most of the very large scale shops (the ones I've worked at) are 
massively horizontally scaled, cookie-cutter: many boxes 
replicating/extending/expanding a set of well-defined workloads.


Re: Rasberry pi - high density

2015-05-13 Thread Lamar Owen

On 05/11/2015 06:50 PM, Brandon Martin wrote:


8kW/rack is something it seems many a typical computing-oriented 
datacenter would be used to dealing with, no?  Form factor within the 
rack is just a little different, which may complicate how you deliver 
the cooling - might need unusually forceful forced air or a water/oil 
heat exchanger for the oil-immersion method being discussed elsewhere 
in the thread.


You still need giant wires and buses to move 800A worth of current. ...


This thread brings me back to 1985: the talk of full-immersion cooling 
(Fluorinert, anyone?) and hundreds of amps at 5VDC reminds me of the 
Cray-2, which dropped 150-200kW in 6 rack-location units of space: 2 for 
the CPU itself, 2 for clearance, and 2 for the cooling waterfall 
[ https://en.wikipedia.org/wiki/File:Cray2.jpeg by referencing 
floor tile space occupied and taking 16 sq ft (four tiles) as one RLU 
].  Each 'stack' of the CPU pulled 2,200A at 5V [source: 
https://en.wikipedia.org/wiki/Cray-2#History ].  At those currents you 
use busbar, not wire.  Our low-voltage (120/208V three-phase) switchgear 
here uses 6,000A-rated busbar, so it's readily available, if expensive.
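
Those figures are just P = V x I; a quick sanity check, tying in the 
~8kW/rack estimate from elsewhere in the thread:

    # Cray-2 stack figure quoted above: 2,200 A at 5 V per stack.
    volts = 5.0
    amps_per_stack = 2200.0
    print(f"{volts * amps_per_stack / 1000:.0f} kW per stack")   # 11 kW

    # Conversely, rack-scale power at 5 V means enormous current,
    # which is why busbar comes up later in the thread:
    rack_watts = 8000.0
    print(f"{rack_watts / volts:.0f} A at 5 V")                  # 1600 A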




Re: Rasberry pi - high density

2015-05-12 Thread Barry Shein

To some extent people are comparing apples (not TM) and oranges.

Are you trying to maximize the number of total cores or the number of
total computes? They're not the same.

It depends on the job mix you expect.

For example, a map-reduce kind of problem (a search of a massive
database) probably improves with lots of cores even if each core
isn't that fast: you partition the database across thousands of cores,
broadcast "who has XYZ?", and wait for an answer, in short.

There are a lot of problems like that, and a lot of problems which
cannot be improved by lots of cores: for example, if you have to wait
for one answer before you can compute the next (matrix inversion is
notorious for this property, and very important). You just can't keep
the pipeline filled.

And then there are the relatively inexpensive GPUs which can do many
floating point ops in parallel and are good at certain jobs like, um,
graphics! rendering, ray-tracing, etc. But they're not very good at
general purpose integer ops like string searching, as a general rule,
or problems which can't be decomposed to take advantage of the
parallelism.

You've got your work cut out for you analyzing these things!
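
Barry doesn't name it, but the trade-off he's describing is the one 
Amdahl's law formalizes; a minimal sketch:

    # Speedup on n cores when a fraction p of the work parallelizes:
    #   speedup(p, n) = 1 / ((1 - p) + p / n)
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Map-reduce-style search (say 99% parallel) vs. a serially
    # dependent solve (say 50% parallel):
    for p in (0.99, 0.50):
        print(p, [round(speedup(p, n), 1) for n in (10, 100, 1000)])
    # 0.99 -> [9.2, 50.3, 91.0]   piles of slow cores pay off
    # 0.50 -> [1.8, 2.0, 2.0]     more cores barely help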

-- 
-Barry Shein

The World  | b...@theworld.com   | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD| Dial-Up: US, PR, Canada
Software Tool & Die| Public Access Internet | SINCE 1989 *oo*


Re: Rasberry pi - high density

2015-05-12 Thread Rafael Possamai
Here's someone's comparison between the B and B+ in terms of power:

http://raspi.tv/2014/how-much-less-power-does-the-raspberry-pi-b-use-than-the-old-model-b

On Mon, May 11, 2015 at 10:25 PM, Joel Maslak jmas...@antelope.net wrote:

 Rather than guessing on power consumption, I measured it.

 I took a Pi (Model B, but I suspect the B+ and the new version are relatively
 similar in power draw with the same peripherals), hooked it up to a lab
 power supply, and took a current measurement.  My pi has a Sandisk SD card
 and a Sandisk USB stick plugged into it, so, if anything, it will be a bit
 high in power draw.  I then fired off a tight code loop and a ping -f from
 another host towards it, to busy up the processor and the network/USB on
 the Pi.  I don't have a way of making the video do anything, so if you were
 using that, your draw would be up.  I also measured idle usage (sitting at
 a command prompt).

 Power draw was 2.3W under load, 2.0W at idle.

 If it was my project, I'd build a backplane board with USB-to-ethernet and
 ethernet switch chips, along with sockets for Pi compute modules (or
 something similar).  I'd want one power cable and one network cable per
 backplane board if my requirements allowed it.  Stick it all in a nice card
 cage and you're done.

 As for performance per watt, I'd be surprised if this beat a modern video
 processor for the right workload.


 On Mon, May 11, 2015 at 5:16 PM, Rafael Possamai raf...@gav.ufsc.br
 wrote:

  Maybe I messed up the math in my head, my line of thought was one pi is
  estimated to use 1.2 watts, whereas the nuc is at around 65 watts. 10
 pi's
  = 12 watts. My comparison was 65watts/12watts = 5.4 times more power than
  10 pi's put together. This is really a rough estimate because I got the
  NUC's power consumption from the AC/DC converter that comes with it,
 which
  has a maximum output of 65 watts. I could be wrong (up to 5 times) and
  still the pi would use less power.
 
  Now that I think about it, the best way to simplify this is to calculate
  benchmark points per watt, so rasp pi is at around 406/1.2 which equals
  338. The NUC, roughly estimated to be at 3857/65 which equals 60. Let's
 be
  very skeptical and say that at maximum consumption the pi is using 5
 watts,
  then 406/5 is around 81. At this point the rasp pi still scores better.
 
  The only problem is we are comparing ARM to x86, which isn't necessarily
  fair (I am not an expert in computer architectures).
 
 
 
 
 
  On Mon, May 11, 2015 at 5:24 PM, Hugo Slabbert h...@slabnet.com wrote:
 
   Did I miss anything? Just a quick comparison.
  
  
   If those numbers are accurate, then it leans towards the NUC rather
 than
   the Pi, no?
  
   Perf:   1x i5 NUC = 10x Pi
   $$: 1x i5 NUC = 10x Pi
   Power:  1x i5 NUC = 5x Pi
  
   So...if a single NUC gives you the performance of 10x Pis at the
 capital
   cost of 10x Pis but uses half the power of 10x Pis and only a single
   Ethernet port, how does the Pi win?
  
   --
   Hugo
  
  
   On Mon 2015-May-11 17:08:43 -0500, Rafael Possamai raf...@gav.ufsc.br
 
   wrote:
  
Interesting! Knowing a pi costs approximately $35, then you need
   approximately $350 to get near an i5. The smallest and cheapest
 desktop
   you can get that would have similar power is the Intel NUC with an i5
  that
   goes for approximately $350. Power consumption of a NUC is about 5x
 that
   of
   the raspberry pi, but the number of ethernet ports required is 10x
 less.
   Usually in a datacenter you care much more about power than switch
  ports,
   so in this case if the overhead of controlling 10x the number of nodes
  is
   worth it, I'd still consider the raspberry pi. Did I miss anything?
  Just a
   quick comparison.
  
  
  
   On Mon, May 11, 2015 at 4:40 PM, Michael Thomas m...@mtcc.com
 wrote:
  
As it turns out, I've been playing around benchmarking things lately
   using
   the tried and true
   UnixBench suite and here are a few numbers that might put this in
 some
   perspective:
  
   1) My new Raspberry pi (4 cores, arm): 406
   2) My home i5-like thing (asus 4 cores, 16gb's from last year): 3857
   3) AWS c4.xlarge (4 cores, ~8gb's): 3666

   So you'd need to, uh, wedge about 10 pi's to get one halfway-modern
  x86.
  
   Mike
  
  
   On 5/11/15 1:37 PM, Clay Fiske wrote:
  
On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:
  
  
   Pi dimensions:
  
   3.37 l (5 front to back)
   2.21 w (6 wide)
   0.83 h
   25 per U (rounding down for Ethernet cable space etc) = 825 pi
  
   Cable management and heat would probably kill this before it ever
   reached completion, but lol…
  
  
   This feels like it should be a Friday thread. :)
  
   If you’re really going for density:
  
   - At 0.83 inches high you could go 2x per U (depends on your
 mounting
   system and how much space it burns)
   - I’d expect you could get at least 7 wide if not 8 with the right
   micro-USB power connector
   - In most datacenter racks I’ve 

Re: Rasberry pi - high density

2015-05-11 Thread Rafael Possamai
Interesting! Since a pi costs approximately $35, you need
approximately $350 to get near an i5. The smallest and cheapest desktop
you can get that would have similar power is the Intel NUC with an i5 that
goes for approximately $350. Power consumption of a NUC is about 5x that of
the raspberry pi, but the number of ethernet ports required is 10x less.
Usually in a datacenter you care much more about power than switch ports,
so in this case if the overhead of controlling 10x the number of nodes is
worth it, I'd still consider the raspberry pi. Did I miss anything? Just a
quick comparison.



On Mon, May 11, 2015 at 4:40 PM, Michael Thomas m...@mtcc.com wrote:

 As it turns out, I've been playing around benchmarking things lately using
 the tried and true
 UnixBench suite and here are a few numbers that might put this in some
 perspective:

 1) My new Raspberry pi (4 cores, arm): 406
 2) My home i5-like thing (asus 4 cores, 16gb's from last year): 3857
 3) AWS c4.xlarge (4 cores, ~8gb's): 3666

 So you'd need to, uh, wedge about 10 pi's to get one halfway-modern x86.

 Mike


 On 5/11/15 1:37 PM, Clay Fiske wrote:

 On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:

 Pi dimensions:

 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi

 Cable management and heat would probably kill this before it ever
 reached completion, but lol…


 This feels like it should be a Friday thread. :)

 If you’re really going for density:

 - At 0.83 inches high you could go 2x per U (depends on your mounting
 system and how much space it burns)
 - I’d expect you could get at least 7 wide if not 8 with the right
 micro-USB power connector
 - In most datacenter racks I’ve seen you could get at least 8 deep even
 with cable breathing room

 So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get
 truly creative about how you stack them you could probably beat that
 without too much effort.

 This doesn’t solve for cooling, but I think even at these numbers you
 could probably make it work with nice, tight cabling.


 -c






Re: Rasberry pi - high density

2015-05-11 Thread Michael Thomas
As it turns out, I've been playing around benchmarking things lately 
using the tried and true
UnixBench suite and here are a few numbers that might put this in some 
perspective:


1) My new Raspberry pi (4 cores, arm): 406
2) My home i5-like thing (asus 4 cores, 16gb's from last year): 3857
3) AWS c4.xlarge (4 cores, ~8gb's): 3666

So you'd need to, uh, wedge about 10 pi's to get one halfway-modern x86.

Mike

On 5/11/15 1:37 PM, Clay Fiske wrote:

On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:

Pi dimensions:

3.37 l (5 front to back)
2.21 w (6 wide)
0.83 h
25 per U (rounding down for Ethernet cable space etc) = 825 pi

Cable management and heat would probably kill this before it ever reached 
completion, but lol…


This feels like it should be a Friday thread. :)

If you’re really going for density:

- At 0.83 inches high you could go 2x per U (depends on your mounting system 
and how much space it burns)
- I’d expect you could get at least 7 wide if not 8 with the right micro-USB 
power connector
- In most datacenter racks I’ve seen you could get at least 8 deep even with 
cable breathing room

So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get truly 
creative about how you stack them you could probably beat that without too much 
effort.

This doesn’t solve for cooling, but I think even at these numbers you could 
probably make it work with nice, tight cabling.


-c






Re: Rasberry pi - high density

2015-05-11 Thread Clay Fiske

 On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:
 
 Pi dimensions:
 
 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi
 
 Cable management and heat would probably kill this before it ever reached 
 completion, but lol…


This feels like it should be a Friday thread. :)

If you’re really going for density:

- At 0.83 inches high you could go 2x per U (depends on your mounting system 
and how much space it burns)
- I’d expect you could get at least 7 wide if not 8 with the right micro-USB 
power connector
- In most datacenter racks I’ve seen you could get at least 8 deep even with 
cable breathing room

So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get truly 
creative about how you stack them you could probably beat that without too much 
effort.

This doesn’t solve for cooling, but I think even at these numbers you could 
probably make it work with nice, tight cabling.


-c




Re: Rasberry pi - high density

2015-05-11 Thread Dave Taht
On Mon, May 11, 2015 at 1:37 PM, Clay Fiske c...@bloomcounty.org wrote:

 On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:

 Pi dimensions:

 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi

The Parallella board is about the same size and has interesting
properties all by itself; in addition to Ethernet it also brings out a
lot of pins.

http://www.adapteva.com/parallella-board/

There are also various and sundry quad-core ARM boards in the same form factor.

 Cable management and heat would probably kill this before it ever reached 
 completion, but lol…


 This feels like it should be a Friday thread. :)

 If you’re really going for density:

 - At 0.83 inches high you could go 2x per U (depends on your mounting system 
 and how much space it burns)
 - I’d expect you could get at least 7 wide if not 8 with the right micro-USB 
 power connector
 - In most datacenter racks I’ve seen you could get at least 8 deep even with 
 cable breathing room

 So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get truly 
 creative about how you stack them you could probably beat that without too 
 much effort.

 This doesn’t solve for cooling, but I think even at these numbers you could 
 probably make it work with nice, tight cabling.

Dip them all in a vat of oil.


-- 
Dave Täht
Open Networking needs **Open Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


Re: Rasberry pi - high density

2015-05-11 Thread Peter Baldridge
 Pi dimensions:

 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi

You butt up against major power/heat issues here in a single rack, not
that it's impossible.  From what I could find the rPi2 requires 0.5A
min.  The few SSD specs that I could find required something like 0.8 -
1.6A.  Assuming that part of the 0.5A is for driving an SSD, 1A/pi would
be an optimistic requirement.  So 825-1600 amps in a single rack.  It's
not crazy to throw 120A into a rack for higher density, but you would
need room to put a PDU every 2U or so if you were running 30A
circuits.

That's before switching infrastructure. You'll also need airflow, since
that's not built into the pi.  I've seen guys do this with Mac minis,
and they end up needing to push everything back in the rack 4 inches
to put 3 or 4 fans with 19-inch blades on the front door to make the
airflow data-center ready.

So to start, you'd probably need to take a row out of the front of the
rack for fans and a row out of the back for power.

Cooling isn't really an issue, since you can cool anything that you can
blow air on[1]. At 825 RPi @ 1A each, you'd get about 3000 BTU/h
(double for the higher power estimate).  You'd need 3-6 tons of
available cooling capacity without redundancy.

I don't know how to do the math for the 'vat of oil scenario'.  It's
not something I've ever wanted to work with.

In the end, I think you end up putting way too much money
(power/cooling) into the redundant green board around the CPU.


 This feels like it should be a Friday thread. :)

Maybe I'm having a read-only May 10-17.


1. Please don't list the things that can't be cooled by blowing air.

-- 

Pete


Re: Rasberry pi - high density

2015-05-11 Thread Joel Maslak
Rather than guessing on power consumption, I measured it.

I took a Pi (Model B, but I suspect the B+ and the new version are relatively
similar in power draw with the same peripherals), hooked it up to a lab
power supply, and took a current measurement.  My pi has a Sandisk SD card
and a Sandisk USB stick plugged into it, so, if anything, it will be a bit
high in power draw.  I then fired off a tight code loop and a ping -f from
another host towards it, to busy up the processor and the network/USB on
the Pi.  I don't have a way of making the video do anything, so if you were
using that, your draw would be up.  I also measured idle usage (sitting at
a command prompt).

Power draw was 2.3W under load, 2.0W at idle.

If it was my project, I'd build a backplane board with USB-to-ethernet and
ethernet switch chips, along with sockets for Pi compute modules (or
something similar).  I'd want one power cable and one network cable per
backplane board if my requirements allowed it.  Stick it all in a nice card
cage and you're done.

As for performance per watt, I'd be surprised if this beat a modern video
processor for the right workload.
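
Scaling those measured numbers up to the 825-per-rack figure from earlier 
in the thread (the conversion efficiency is an assumption, not something 
measured here):

    pis = 825
    measured = {"loaded": 2.3, "idle": 2.0}   # watts, as measured above
    psu_eff = 0.85                            # assumed AC-to-5V efficiency

    for state, w in measured.items():
        print(f"{state}: ~{pis * w / psu_eff / 1000:.1f} kW at the wall")
    # loaded: ~2.2 kW, idle: ~1.9 kW -- well under the thread's 8 kW worst case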


On Mon, May 11, 2015 at 5:16 PM, Rafael Possamai raf...@gav.ufsc.br wrote:

 Maybe I messed up the math in my head, my line of thought was one pi is
 estimated to use 1.2 watts, whereas the nuc is at around 65 watts. 10 pi's
 = 12 watts. My comparison was 65watts/12watts = 5.4 times more power than
 10 pi's put together. This is really a rough estimate because I got the
 NUC's power consumption from the AC/DC converter that comes with it, which
 has a maximum output of 65 watts. I could be wrong (up to 5 times) and
 still the pi would use less power.

 Now that I think about it, the best way to simplify this is to calculate
 benchmark points per watt, so rasp pi is at around 406/1.2 which equals
 338. The NUC, roughly estimated to be at 3857/65 which equals 60. Let's be
 very skeptical and say that at maximum consumption the pi is using 5 watts,
 then 406/5 is around 81. At this point the rasp pi still scores better.

 The only problem is we are comparing ARM to x86, which isn't necessarily fair
 (I am not an expert in computer architectures).





 On Mon, May 11, 2015 at 5:24 PM, Hugo Slabbert h...@slabnet.com wrote:

  Did I miss anything? Just a quick comparison.
 
 
  If those numbers are accurate, then it leans towards the NUC rather than
  the Pi, no?
 
  Perf:   1x i5 NUC = 10x Pi
  $$: 1x i5 NUC = 10x Pi
  Power:  1x i5 NUC = 5x Pi
 
  So...if a single NUC gives you the performance of 10x Pis at the capital
  cost of 10x Pis but uses half the power of 10x Pis and only a single
  Ethernet port, how does the Pi win?
 
  --
  Hugo
 
 
  On Mon 2015-May-11 17:08:43 -0500, Rafael Possamai raf...@gav.ufsc.br
  wrote:
 
   Interesting! Knowing a pi costs approximately $35, then you need
  approximately $350 to get near an i5. The smallest and cheapest desktop
  you can get that would have similar power is the Intel NUC with an i5
 that
  goes for approximately $350. Power consumption of a NUC is about 5x that
  of
  the raspberry pi, but the number of ethernet ports required is 10x less.
  Usually in a datacenter you care much more about power than switch
 ports,
  so in this case if the overhead of controlling 10x the number of nodes
 is
  worth it, I'd still consider the raspberry pi. Did I miss anything?
 Just a
  quick comparison.
 
 
 
  On Mon, May 11, 2015 at 4:40 PM, Michael Thomas m...@mtcc.com wrote:
 
   As it turns out, I've been playing around benchmarking things lately
  using
  the tried and true
  UnixBench suite and here are a few numbers that might put this in some
  perspective:
 
  1) My new Raspberry pi (4 cores, arm): 406
  2) My home i5-like thing (asus 4 cores, 16gb's from last year): 3857
  3) AWS c4.xlarge (4 cores, ~8gb's): 3666

  So you'd need to, uh, wedge about 10 pi's to get one halfway-modern
 x86.
 
  Mike
 
 
  On 5/11/15 1:37 PM, Clay Fiske wrote:
 
   On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:
 
 
  Pi dimensions:
 
  3.37 l (5 front to back)
  2.21 w (6 wide)
  0.83 h
  25 per U (rounding down for Ethernet cable space etc) = 825 pi
 
  Cable management and heat would probably kill this before it ever
  reached completion, but lol…
 
 
  This feels like it should be a Friday thread. :)
 
  If you’re really going for density:
 
  - At 0.83 inches high you could go 2x per U (depends on your mounting
  system and how much space it burns)
  - I’d expect you could get at least 7 wide if not 8 with the right
  micro-USB power connector
  - In most datacenter racks I’ve seen you could get at least 8 deep
 even
  with cable breathing room
 
  So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get
  truly creative about how you stack them you could probably beat that
  without too much effort.
 
  This doesn’t solve for cooling, but I think even at these numbers you
  could probably make it work with nice, tight cabling.
 
 
 

Re: Rasberry pi - high density

2015-05-11 Thread Randy Carpenter

- On May 11, 2015, at 5:36 PM, Peter Baldridge petebaldri...@gmail.com 
wrote:

 Pi dimensions:

 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi
 
 You butt up against major power/heat issues here in a single rack, not
 that it's impossible.  From what I could find the rPi2 requires 0.5A
 min.  The few SSD specs that I could find required something like 0.8 -
 1.6A.  Assuming that part of the 0.5A is for driving an SSD, 1A/pi would
 be an optimistic requirement.  So 825-1600 amps in a single rack.  It's
 not crazy to throw 120A into a rack for higher density, but you would
 need room to put a PDU every 2U or so if you were running 30A
 circuits.
 

That is 0.8-1.6A at 5V DC, a far cry from 120V AC. We're talking ~5W versus 
~120W each.

Granted there is some conversion overhead, but worst case you are probably 
talking about 1/20th the power you describe.
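
The same correction as arithmetic: a current figure only becomes a power 
figure at a stated voltage.

    amps = 1.0
    for volts in (5, 120):
        print(f"{amps:.1f} A at {volts} V = {amps * volts:.0f} W")
    # 1.0 A at 5 V = 5 W;  1.0 A at 120 V = 120 W

    # So 825 Pis at ~5 W is ~4.1 kW, not the ~99 kW that 825 A at 120 V implies:
    print(825 * 5 / 1000, "kW vs", 825 * 120 / 1000, "kW")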

-Randy


Re: Rasberry pi - high density

2015-05-11 Thread Peter Baldridge
On Mon, May 11, 2015 at 3:21 PM, Randy Carpenter rcar...@network1.net wrote:

 That is .8-1.6A at 5v DC. A far cry from 120V AC. We're talking ~5W versus 
 ~120W each.

 Granted there is some conversion overhead, but worst case you are probably 
 talking about 1/20th the power you describe.

Yeah, missed that.  You'd probably still need fans for airflow, but
the power would be a non-issue.

-- 

Pete


Re: Rasberry pi - high density

2015-05-11 Thread Chris Boyd
On Mon, 2015-05-11 at 14:36 -0700, Peter Baldridge wrote:
 I don't know how to do the math for the 'vat of oil scenario'.  It's
 not something I've ever wanted to work with.

It's pretty interesting what you can do with immersion cooling.  I work
with it at $DAYJOB.  It's similar to air cooling, but your coolant flow
rates are much lower than with air, and you don't need any fans in the
systems; the pumps take the place of those.

We save a lot of money on the cooling side, since we don't need to
compress and expand gases/liquids.  We can run with warmish (25-30C)
water from cooling towers, and still keep the systems at a target
temperature of 35C.

--Chris



Re: Rasberry pi - high density

2015-05-11 Thread Brandon Martin

On 05/11/2015 06:21 PM, Randy Carpenter wrote:

That is .8-1.6A at 5v DC. A far cry from 120V AC. We're talking ~5W versus 
~120W each.

Granted there is some conversion overhead, but worst case you are probably 
talking about 1/20th the power you describe.



His estimates seem to consider that it's only 5V, though.  He has 825 
Pis per rack at ~5-10W each; call it ~8kW on the high end.  8kW is 
2.25 tons of refrigeration at first cut, plus any power conversion 
losses, losses in ducting/chilled water distribution, etc.  Calling for 
at least 3 tons of raw cooling capacity for this rack seems reasonable.


8kW/rack is something it seems many a typical computing-oriented 
datacenter would be used to dealing with, no?  Form factor within the 
rack is just a little different, which may complicate how you deliver 
the cooling - might need unusually forceful forced air or a water/oil 
heat exchanger for the oil-immersion method being discussed 
elsewhere in the thread.


You still need giant wires and buses to move 800A worth of current.  It 
almost seems like you'd have to rig up some sort of 5VDC busbar system 
along the sides of the cabinet and tap into it for each shelf, or 
(probably the approach I'd look at first, instead) give up some space on 
each shelf for point-of-load power conversion (120 or 240VAC to 
5VDC using industrial "brick" style supplies or similar) and 
conventional AC or "high" voltage (in this context, 48 or 380V is 
"high") DC distribution to each shelf.  Getting 800A at 5V to the rack 
with reasonable losses is going to need humongous wires, too.  Looks 
like NEC calls for something on the order of 800kcmil under rosy 
circumstances just to move it safely (which, at only 5V, is not 
necessarily efficient) - yikes, that's a big wire.
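
The cooling conversion used above, spelled out (1 W = 3.412 BTU/h; one 
ton of refrigeration = 12,000 BTU/h):

    rack_kw = 8.0
    btu_per_hr = rack_kw * 1000 * 3.412
    tons = btu_per_hr / 12000
    print(f"{btu_per_hr:,.0f} BTU/h = {tons:.2f} tons")  # 27,296 BTU/h = 2.27 tons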

--
Brandon Martin


Re: Rasberry pi - high density

2015-05-11 Thread Rafael Possamai
Maybe I messed up the math in my head; my line of thought was: one pi is
estimated to use 1.2 watts, whereas the NUC is at around 65 watts. 10 pi's
= 12 watts. My comparison was 65W/12W = 5.4 times more power than
10 pi's put together. This is really a rough estimate, because I got the
NUC's power consumption from the AC/DC converter that comes with it, which
has a maximum output of 65 watts. I could be wrong (by up to 5 times) and
still the pi would use less power.

Now that I think about it, the best way to simplify this is to calculate
benchmark points per watt, so rasp pi is at around 406/1.2 which equals
338. The NUC, roughly estimated to be at 3857/65 which equals 60. Let's be
very skeptical and say that at maximum consumption the pi is using 5 watts,
then 406/5 is around 81. At this point the rasp pi still scores better.

The only problem is we are comparing ARM to x86, which isn't necessarily
fair (I am not an expert in computer architectures).
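
The points-per-watt comparison, using the UnixBench scores quoted downthread:

    scores = {"Pi": 406, "i5 NUC": 3857}   # UnixBench, from Mike's post
    watts = {"Pi": 1.2, "i5 NUC": 65}      # Rafael's estimates

    for name in scores:
        print(f"{name}: ~{scores[name] / watts[name]:.0f} points/W")
    # Pi: ~338 points/W; NUC: ~59 points/W.
    # Even at a pessimistic 5 W, the Pi gets 406 / 5 ~= 81 points/W.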





On Mon, May 11, 2015 at 5:24 PM, Hugo Slabbert h...@slabnet.com wrote:

 Did I miss anything? Just a quick comparison.


 If those numbers are accurate, then it leans towards the NUC rather than
 the Pi, no?

 Perf:   1x i5 NUC = 10x Pi
 $$: 1x i5 NUC = 10x Pi
 Power:  1x i5 NUC = 5x Pi

 So...if a single NUC gives you the performance of 10x Pis at the capital
 cost of 10x Pis but uses half the power of 10x Pis and only a single
 Ethernet port, how does the Pi win?

 --
 Hugo


 On Mon 2015-May-11 17:08:43 -0500, Rafael Possamai raf...@gav.ufsc.br
 wrote:

  Interesting! Knowing a pi costs approximately $35, then you need
  approximately $350 to get near an i5. The smallest and cheapest desktop
 you can get that would have similar power is the Intel NUC with an i5 that
 goes for approximately $350. Power consumption of a NUC is about 5x that
 of
 the raspberry pi, but the number of ethernet ports required is 10x less.
 Usually in a datacenter you care much more about power than switch ports,
 so in this case if the overhead of controlling 10x the number of nodes is
 worth it, I'd still consider the raspberry pi. Did I miss anything? Just a
 quick comparison.



 On Mon, May 11, 2015 at 4:40 PM, Michael Thomas m...@mtcc.com wrote:

  As it turns out, I've been playing around benchmarking things lately
 using
 the tried and true
 UnixBench suite and here are a few numbers that might put this in some
 perspective:

  1) My new Raspberry pi (4 cores, arm): 406
  2) My home i5-like thing (asus 4 cores, 16gb's from last year): 3857
  3) AWS c4.xlarge (4 cores, ~8gb's): 3666

  So you'd need to, uh, wedge about 10 pi's to get one halfway-modern x86.

 Mike


 On 5/11/15 1:37 PM, Clay Fiske wrote:

  On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:


 Pi dimensions:

 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi

 Cable management and heat would probably kill this before it ever
 reached completion, but lol…


 This feels like it should be a Friday thread. :)

 If you’re really going for density:

 - At 0.83 inches high you could go 2x per U (depends on your mounting
 system and how much space it burns)
 - I’d expect you could get at least 7 wide if not 8 with the right
 micro-USB power connector
 - In most datacenter racks I’ve seen you could get at least 8 deep even
 with cable breathing room

 So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get
 truly creative about how you stack them you could probably beat that
 without too much effort.

 This doesn’t solve for cooling, but I think even at these numbers you
 could probably make it work with nice, tight cabling.


 -c







Re: Rasberry pi - high density

2015-05-11 Thread Hugo Slabbert

Did I miss anything? Just a quick comparison.


If those numbers are accurate, then it leans towards the NUC rather than 
the Pi, no?


Perf:   1x i5 NUC = 10x Pi
$$: 1x i5 NUC = 10x Pi
Power:  1x i5 NUC = 5x Pi

So...if a single NUC gives you the performance of 10x Pis at the capital 
cost of 10x Pis but uses half the power of 10x Pis and only a single 
Ethernet port, how does the Pi win?


--
Hugo

On Mon 2015-May-11 17:08:43 -0500, Rafael Possamai raf...@gav.ufsc.br wrote:


Interesting! Knowing a pi costs approximately $35, then you need
approximately $350 to get near an i5. The smallest and cheapest desktop
you can get that would have similar power is the Intel NUC with an i5 that
goes for approximately $350. Power consumption of a NUC is about 5x that of
the raspberry pi, but the number of ethernet ports required is 10x less.
Usually in a datacenter you care much more about power than switch ports,
so in this case if the overhead of controlling 10x the number of nodes is
worth it, I'd still consider the raspberry pi. Did I miss anything? Just a
quick comparison.



On Mon, May 11, 2015 at 4:40 PM, Michael Thomas m...@mtcc.com wrote:


As it turns out, I've been playing around benchmarking things lately using
the tried and true
UnixBench suite and here are a few numbers that might put this in some
perspective:

1) My new Raspberry pi (4 cores, arm): 406
2) My home i5-like thing (asus 4 cores, 16gb's from last year): 3857
3) AWS c4.xlarge (4 cores, ~8gb's): 3666

So you'd need to, uh, wedge about 10 pi's to get one halfway-modern x86.

Mike


On 5/11/15 1:37 PM, Clay Fiske wrote:


On May 8, 2015, at 10:24 PM, char...@thefnf.org wrote:


Pi dimensions:

3.37 l (5 front to back)
2.21 w (6 wide)
0.83 h
25 per U (rounding down for Ethernet cable space etc) = 825 pi

Cable management and heat would probably kill this before it ever
reached completion, but lol…



This feels like it should be a Friday thread. :)

If you’re really going for density:

- At 0.83 inches high you could go 2x per U (depends on your mounting
system and how much space it burns)
- I’d expect you could get at least 7 wide if not 8 with the right
micro-USB power connector
- In most datacenter racks I’ve seen you could get at least 8 deep even
with cable breathing room

So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get
truly creative about how you stack them you could probably beat that
without too much effort.

This doesn’t solve for cooling, but I think even at these numbers you
could probably make it work with nice, tight cabling.


-c









Re: Rasberry pi - high density

2015-05-09 Thread Dave Taht
On Sat, May 9, 2015 at 11:55 AM, Barry Shein b...@world.std.com wrote:

 On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
  
  
   So I just crunched the numbers. How many pies could I cram in a rack?

 For another list I just estimated how many M.2 SSD modules one could
  cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
  heat and connection routing aren't problems); at 500GB each that's
  20TB in a standard 3.5" case.

I could see liquid-cooling such a device: insert the whole thing into oil.
How many PCIe slots are allowed in the standards?

 It's getting weird out there.

Try to project your mind forward another decade with capability/cost like this:

http://www.digitaltrends.com/computing/nine-dollar-computer-kickstarter/

I hope humanity's last act will be to educate the spambots past their current
puerile contemplation of adolescent fantasies and into contemplating Faust.

 --
 -Barry Shein

 The World  | b...@theworld.com   | http://www.TheWorld.com
 Purveyors to the Trade | Voice: 800-THE-WRLD| Dial-Up: US, PR, Canada
 Software Tool & Die| Public Access Internet | SINCE 1989 *oo*



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


Re: Rasberry pi - high density

2015-05-09 Thread Rafael Possamai
From the work that I've done in the past with clusters, your need for
bandwidth is usually not the biggest issue. When you work with big data,
let's say 500 million data points, most mathematicians would condense it
all down into averages, standard deviations, probabilities, etc., which
then become much smaller to store on your hard disks and to perform data
analysis on, as well as to transfer between master and nodes and
vice-versa. So for one project at a time, your biggest concerns are CPU
clock, RAM, interrupts, etc. If you wanted to run all of the Big Ten
schools' academic projects in one big cluster, for example, then
networking might become an issue solely due to volume.
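
A toy version of that reduction, with made-up data standing in for the 
500 million points (a real cluster would shard this across nodes):

    import random
    import statistics

    data = [random.gauss(0.0, 1.0) for _ in range(1_000_000)]

    summary = {
        "n": len(data),
        "mean": statistics.fmean(data),
        "stdev": statistics.stdev(data),
    }
    print(summary)   # a few dozen bytes cross the network, not gigabytes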

The more data you transfer, the longer it takes to perform any
meaningful analysis on it, so really your bottleneck is TFLOPS rather than
packets per second. With Facebook it's the opposite: it's mostly pictures
and videos of cats coming in and out of the server, with lots of reads and
writes on their storage. In that case, switching Tbps of traffic is how
they make money.

A good example is creating a Docker container with your application and
deploying a cluster with CoreOS. You save all that capex and spend by the
hour. I believe Azure and EC2 already have support for CoreOS.




On Sat, May 9, 2015 at 12:48 AM, Tim Raphael raphael.timo...@gmail.com
wrote:

 The problem is, I can get more processing power and RAM out of two 10RU
 blade chassis and only needing 64 10G ports...

 32 x 256GB RAM per blade = 8.1TB
 32 x 16 cores x 2.4GHz = 1,228GHz
 (not based on current highest possible, just using reasonable specs)

 Needing only 4 QFX5100s which will cost less than a populated 6513 and
 give lower latency. Power, cooling and cost would be lower too.

 RPi = 900MHz and 1GB RAM. So to equal the two chassis, you'll need:

 1228 / 0.9 = 1364 Pis for compute (main performance aspect of a super
 computer) meaning double the physical space required compared to the
 chassis option.

 So yes, infeasible indeed.

 Regards,

 Tim Raphael

  On 9 May 2015, at 1:24 pm, char...@thefnf.org wrote:
 
 
 
  So I just crunched the numbers. How many pies could I cram in a rack?
 
  Check my numbers?
 
  48U rack budget
  6513 15U (48-15) = 33U remaining for pie
  6513 max of 576 copper ports
 
  Pi dimensions:
 
  3.37 l (5 front to back)
  2.21 w (6 wide)
  0.83 h
  25 per U (rounding down for Ethernet cable space etc) = 825 pi
 
  Cable management and heat would probably kill this before it ever
 reached completion, but lol...
 
 
 



Re: Rasberry pi - high density

2015-05-09 Thread Barry Shein

On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
  
  
  So I just crunched the numbers. How many pies could I cram in a rack?

For another list I just estimated how many M.2 SSD modules one could
cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
heat and connection routing aren't problems); at 500GB each that's
20TB in a standard 3.5" case.

It's getting weird out there.

-- 
-Barry Shein

The World  | b...@theworld.com   | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD| Dial-Up: US, PR, Canada
Software Tool & Die| Public Access Internet | SINCE 1989 *oo*


Re: Rasberry pi - high density

2015-05-09 Thread Eugeniu Patrascu
On Sat, May 9, 2015 at 9:55 PM, Barry Shein b...@world.std.com wrote:


 On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
  
  
   So I just crunched the numbers. How many pies could I cram in a rack?

 For another list I just estimated how many M.2 SSD modules one could
 cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
 heat and connection routing aren't problems); at 500GB each that's
 20TB in a standard 3.5" case.

 It's getting weird out there.


I think the next logical step in servers would be to remove the traditional
hard drive cages and put in SSD module slots that can be hot-swapped. Imagine
inserting small SSD modules on the front side of the servers and connecting
them directly via PCIe to the motherboard. No more bottlenecks, and a
software RAID of some sort would actually make a lot more sense than the
current controller-based solutions.


Rasberry pi - high density

2015-05-08 Thread charles



So I just crunched the numbers. How many pies could I cram in a rack?

Check my numbers?

48U rack budget
6513 is 15U; (48 - 15) = 33U remaining for pie
6513 max of 576 copper ports

Pi dimensions:

3.37" L (5 fit front to back)
2.21" W (6 fit side by side)
0.83" H
25 per U (rounding down for Ethernet cable space, etc.) = 825 pi

Cable management and heat would probably kill this before it ever 
reached completion, but lol...
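
The arithmetic, spelled out:

    rack_u = 48
    switch_u = 15                  # one 6513
    pi_per_u = 25                  # 5 deep x 6 wide, rounded down for cabling

    pis = (rack_u - switch_u) * pi_per_u
    print(pis, "Pis in the remaining", rack_u - switch_u, "U")   # 825
    # But the 6513 tops out at 576 copper ports, so ports run out
    # well before space does.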






Re: Rasberry pi - high density

2015-05-08 Thread Tim Raphael
The problem is, I can get more processing power and RAM out of two 10RU blade 
chassis while only needing 64 10G ports...

32 x 256GB RAM per blade = 8.1TB
32 x 16 cores x 2.4GHz = 1,228GHz
(not based on current highest possible, just using reasonable specs)

Needing only 4 QFX5100s which will cost less than a populated 6513 and give 
lower latency. Power, cooling and cost would be lower too.

RPi = 900MHz and 1GB RAM. So to equal the two chassis, you'll need:

1228 / 0.9 = 1364 Pis for compute (the main performance aspect of a supercomputer), 
meaning roughly double the physical space required compared to the chassis option.

So yes, infeasible indeed.
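
That equivalence, as computed with the round numbers above:

    blades = 32
    ram_gb = blades * 256              # 8,192 GB across the two chassis
    agg_ghz = blades * 16 * 2.4        # 1,228.8 GHz aggregate clock

    pis_for_compute = agg_ghz / 0.9    # one RPi ~= 900 MHz
    print(f"{ram_gb} GB RAM, {agg_ghz:.0f} GHz -> ~{pis_for_compute:.0f} Pis")
    # ~1365 Pis just to match aggregate clock, before RAM enters the picture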

Regards,

Tim Raphael

 On 9 May 2015, at 1:24 pm, char...@thefnf.org wrote:
 
 
 
 So I just crunched the numbers. How many pies could I cram in a rack?
 
 Check my numbers?
 
 48U rack budget
 6513 15U (48-15) = 33U remaining for pie
 6513 max of 576 copper ports
 
 Pi dimensions:
 
 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi
 
 Cable management and heat would probably kill this before it ever reached 
 completion, but lol...