Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread charles

On 2015-05-09 11:57, Baldur Norddahl wrote:
The standard 48 port with 2 port uplink 1U switch is far from full depth.
You put them in the back of the rack and have the small computers in the
front. You might even turn the switches around, so the ports face inwards
into the rack. The network cables would be very short and go directly from
the mini computers (Raspberry Pi?) to the switch, all within the one unit
shelf.


Yes this.

I presumed Raspberry Pi, but those don't have gigabit Ethernet.

Then I realized:  http://www.parallella.org/ (I've got one of these 
sitting on my standby shelf to be racked, which is what made me think of 
it).


To the OP please do tell us more about what you are doing, it sounds 
very interesting.


Re: Rasberry pi - high density

2015-05-09 Thread Dave Taht
On Sat, May 9, 2015 at 11:55 AM, Barry Shein b...@world.std.com wrote:

 On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
  
  
   So I just crunched the numbers. How many pies could I cram in a rack?

 For another list I just estimated how many M.2 SSD modules one could
 cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
 heat and connection routing aren't problems), at 500GB/each that's
 20TB in a standard 3.5" case.

I could see liquid cooling such a device: insert the whole thing into oil.
How many PCIe slots are allowed in the standards?

 It's getting weird out there.

Try to project your mind forward another decade with capability/cost like this:

http://www.digitaltrends.com/computing/nine-dollar-computer-kickstarter/

I hope humanity's last act will be to educate the spambots past their current
puerile contemplation of adolescent fantasies and into contemplating Faust.

 --
 -Barry Shein

 The World  | b...@theworld.com   | http://www.TheWorld.com
 Purveyors to the Trade | Voice: 800-THE-WRLD| Dial-Up: US, PR, Canada
 Software Tool & Die| Public Access Internet | SINCE 1989 *oo*



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


RE: [probably spam, from NANOG nanog-boun...@nanog.org]

2015-05-09 Thread Keith Medcalf

 On Saturday, 9 May, 2015, at 10:59 John Levine jo...@iecc.com said:

  No text/plain?  Delete without further ado.

 Sadly, it is no longer 1998.

No kidding.  Web-page e-mail.  Lots of proprietary executable-embedded-in-data 
file formats used for e-mail, and worst of all, gratuitous JavaScript everywhere 
making the Web unusable unless you disable all security (or just refuse to 
deal with the schmucks that do that).

 R's,
 John





Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Eduardo Schoedler
The Juniper OCX1100 has 72 ports in 1U.

And you can tune the Linux IPv4 neighbor table:
https://ams-ix.net/technical/specifications-descriptions/config-guide#11
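
A minimal sketch of the kind of neighbor-table tuning that guide covers (the
sysctl knobs are standard Linux; the values here are only illustrative and
should be sized to the number of hosts on the segment):

# raise the ARP/NDP cache thresholds so thousands of neighbors are kept
sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
sysctl -w net.ipv4.neigh.default.gc_thresh2=8192
sysctl -w net.ipv4.neigh.default.gc_thresh3=16384
sysctl -w net.ipv6.neigh.default.gc_thresh1=4096
sysctl -w net.ipv6.neigh.default.gc_thresh2=8192
sysctl -w net.ipv6.neigh.default.gc_thresh3=16384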

--
Eduardo Schoedler



On Saturday, 9 May 2015, Lamar Owen lo...@pari.edu wrote:

 On 05/08/2015 02:53 PM, John Levine wrote:

 ...
 Most of the traffic will be from one node to another, with
 considerably less to the outside.  Physical distance shouldn't be a
 problem since everything's in the same room, maybe the same rack.

 What's the rule of thumb for number of hosts per switch, cascaded
 switches vs. routers, and whatever else one needs to design a dense
 network like this?  TIA

  You know, I read this post and immediately thought 'SGI Altix'...
 scalable to 512 CPUs per system image and 20 images per cluster (NASA's
 Columbia supercomputer had 10,240 CPUs in that configuration... twelve
 years ago, using 1.5GHz 64-bit RISC CPUs running Linux... my, how we've
 come full circle (today's equivalent has less power consumption, at
 least)).  The NUMA technology in those Altix CPUs is a de facto
 'memory-area network' and thus can have some interesting topologies.

 Clusters can be made using nodes with at least two NICs in them, and no
 switching.  With four or eight ports you can do some nice mesh topologies.
 This wouldn't be L2 bridging, either, but a L3 mesh could be made that
 could be rather efficient, with no switches, as long as you have at least
 three ports per node, and you can do something reasonably efficient with a
 switch or two and some chains of nodes, with two NICs per node.  L3 keeps
 the broadcast domain size small, and broadcast overhead becomes small.

 If you only have one NIC per node, well, time to get some seriously
 high-density switches... but even then, how many nodes are going to be per
 42U rack?  A top-of-rack switch may only need 192 ports, and that's only
 4U with 1U 48-port switches; in 8U you can do 384 ports, and three racks will
 do a bit over 1,000.  Octopus cables going from an RJ21 to 8P8C modular are
 available, so you could use high-density blades; Cisco claims you could do
 576 10/100/1000 ports in a 13-slot 6500.  That's half the rack space for
 the switching.  If 10/100 is enough, you could do 12 of the WS-X6196-21AF
 cards (or the RJ-45 'two-ports-per-plug' WS-X6148X2-45AF) and get in theory
 1,152 ports in a 6513 (one SUP; drop 96 ports from that to get a redundant
 SUP).

 Looking at another post in the thread, these Moonshot rigs sound
 interesting... 45 server blades in 4.3U.  4.3U?!?!?  Heh, some custom
 rails, I guess, to get ten in 47U.  They claim a quad-server blade, so
 1,800 servers (with networking) in a 47U rack.  Yow.  Cost of several
 hundred thousand dollars for that setup.

 The effective limit on subnet size would be of course broadcast overhead;
 1,000 nodes on a /22 would likely be painfully slow due to broadcast
 overhead alone.



-- 
Eduardo Schoedler


Re: Updated prefix filtering

2015-05-09 Thread Dave Taht
On Fri, May 8, 2015 at 3:41 PM, Chaim Rieger chaim.rie...@gmail.com wrote:

 Best example I’ve found is located at http://jonsblog.lewis.org/

 I too ran out of space, Brocade, not Cisco though, and am looking to filter
 prefixes. Did anybody do a more recent or updated filter list since 2008?

 Offlist is fine.

 Oh and happy friday to all.

I have had a piece sitting on the spike for a long time about how we
implemented BCP38 for Linux (OpenWrt) devices using the ipset facility.

We had a different use case (preventing all possible internal RFC1918
network addresses from escaping, while still allowing punching through
one layer of NAT), but the underlying ipset facility was easily
extensible to actually do BCP38 and fast to use, so that is what we
ended up calling the OpenWrt package. Please contact me offlist if you
would like a peek at that piece; the article had some structural
problems and we never got around to finishing/publishing it.
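
A rough sketch of the flavor of that approach, not the actual package
(interface name and prefix are placeholders):

# permit only the prefixes this router legitimately originates; drop any other source
ipset create bcp38-allowed hash:net
ipset add bcp38-allowed 192.168.1.0/24        # placeholder: the LAN behind this router
iptables -A FORWARD -o eth1 -m set ! --match-set bcp38-allowed src -j DROP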

Has there been a BCP38 equivalent published for IPv6?

Along the way, source-specific routing showed up for IPv6, and we ended
up obsoleting the concept of an IPv6 global default route entirely on
a Linux-based CPE router.

see: http://arxiv.org/pdf/1403.0445.pdf and some relevant homenet wg stuff.

d@nuc-client:~/babeld-1.6.0 $ ip -6 route

default from 2001:558:6045:e9:251a:738a:ac86:eaf6 via
fe80::28c6:8eff:febb:9ff0 dev eth0  proto babel  metric 1024
default from 2601:9:4e00:4cb0::/60 via fe80::28c6:8eff:febb:9ff0 dev
eth0  proto babel  metric 1024
default from fde5:dfb9:df90:fff0::/60 via fe80::225:90ff:fef4:a5c5 dev
eth0  proto babel  metric 1024

So this box will not forward any IPv6 packet whose source does not match one
of the from (src) entries in the table.
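
For reference, such source-specific routes can also be installed by hand with
iproute2 (documentation prefix used here, not the ones above):

ip -6 route add default from 2001:db8:aa::/48 via fe80::1 dev eth0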

-- 
Dave Täht
https://plus.google.com/u/0/explore/makewififast


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Eduardo Schoedler
You did not mention low cost before ;)



On Saturday, 9 May 2015, John Levine jo...@iecc.com wrote:

 In article 
 cahf3uwypqn1ns_umjz-znuk3i5ufczbu9l39b-crovg6yum...@mail.gmail.com
 you write:
 The Juniper OCX1100 has 72 ports in 1U.

 Yeah, too bad it costs $32,000.  Other than that it'd be perfect.

 R's,
 John



-- 
Eduardo Schoedler


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread John Levine
To the OP please do tell us more about what you are doing, it sounds 
very interesting.

There's a conference paper in preparation.  I'll send a pointer when I can.

R's,
John




Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Karl Auer
On Sat, 2015-05-09 at 17:06 -0400, Lamar Owen wrote:
 The effective limit on subnet size would be of course broadcast 
 overhead; 1,000 nodes on a /22 would likely be painfully slow due to 
 broadcast overhead alone.

It would be interesting to see how IPv6 performed, since this is one of the
things it was supposed to be able to deliver - massively scalable links
(equivalent to an IPv4 broadcast domain) via massively reduced protocol
chatter (IPv6 multicast groups vs IPv4 broadcast), plus fully automated
L3 address assignment.

IPv4 ARP, for example, hits every on-subnet neighbour; the IPv6
equivalent uses multicast to hit only those neighbours that happen to
share the same 24 low-end L3 address bits as the desired target - a
statistically much smaller subset of on-link neighbours, and in normal
subnets typically only one host. Only chatter that really should go to
all hosts does so - such as router advertisements.
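
As a concrete illustration (addresses invented for the example): a host with
2001:db8::2aa:bbcc listens on the solicited-node group ff02::1:ffaa:bbcc -
ff02::1:ff00:0/104 plus the low 24 bits of the unicast address - and a
neighbour resolving that address multicasts only to that group. The groups an
interface has actually joined can be listed with:

ip -6 maddress show dev eth0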

Regards, K.



-- 
~~~
Karl Auer (ka...@biplane.com.au)
http://www.biplane.com.au/kauer
http://twitter.com/kauer389

GPG fingerprint: 3C41 82BE A9E7 99A1 B931 5AE7 7638 0147 2C3C 2AC4
Old fingerprint: EC67 61E2 C2F6 EB55 884B E129 072B 0AF0 72AA 9882




Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread John Levine
In article cahf3uwypqn1ns_umjz-znuk3i5ufczbu9l39b-crovg6yum...@mail.gmail.com 
you write:
The Juniper OCX1100 has 72 ports in 1U.

Yeah, too bad it costs $32,000.  Other than that it'd be perfect.

R's,
John


Re: [probably spam, from NANOG nanog-boun...@nanog.org]

2015-05-09 Thread Larry Sheldon

On 5/9/2015 18:10, Keith Medcalf wrote:


...making the Web unusable unless you disable all security (or just
refuse to deal with the schmucks that do that).


The only reasonable path for people who do not want to be invaded.

It really is easier in the long run--and I find that it is only the 
medical community who refuses to be sensible--so they spend the money to 
US-Mail the attachments to me.


--
sed quis custodiet ipsos custodes? (Juvenal)


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Lamar Owen

On 05/08/2015 02:53 PM, John Levine wrote:

...
Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA

You know, I read this post and immediately thought 'SGI Altix'... 
scalable to 512 CPUs per system image and 20 images per cluster 
(NASA's Columbia supercomputer had 10,240 CPUs in that 
configuration... twelve years ago, using 1.5GHz 64-bit RISC CPUs 
running Linux... my, how we've come full circle (today's equivalent 
has less power consumption, at least)).  The NUMA technology in 
those Altix CPUs is a de facto 'memory-area network' and thus can have 
some interesting topologies.


Clusters can be made using nodes with at least two NICs in them, and no 
switching.  With four or eight ports you can do some nice mesh 
topologies.  This wouldn't be L2 bridging, either, but a L3 mesh could 
be made that could be rather efficient, with no switches, as long as you 
have at least three ports per node, and you can do something reasonably 
efficient with a switch or two and some chains of nodes, with two NICs 
per node.  L3 keeps the broadcast domain size small, and broadcast 
overhead becomes small.
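
As a toy sketch of one node in such a mesh (a three-node triangle with
made-up addressing; a real build would run a routing protocol rather than
static routes):

# node A: one point-to-point /30 per NIC
ip addr add 10.0.12.1/30 dev eth1              # link to node B
ip addr add 10.0.13.1/30 dev eth2              # link to node C
# reach the other nodes' service addresses over those links
ip route add 10.0.0.2/32 via 10.0.12.2 dev eth1
ip route add 10.0.0.3/32 via 10.0.13.2 dev eth2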


If you only have one NIC per node, well, time to get some seriously 
high-density switches... but even then, how many nodes are going to be 
per 42U rack?  A top-of-rack switch may only need 192 ports, and that's 
only 4U with 1U 48-port switches; in 8U you can do 384 ports, and three 
racks will do a bit over 1,000.  Octopus cables going from an RJ21 to 
8P8C modular are available, so you could use high-density blades; Cisco 
claims you could do 576 10/100/1000 ports in a 13-slot 6500.  That's 
half the rack space for the switching.  If 10/100 is enough, you could 
do 12 of the WS-X6196-21AF cards (or the RJ-45 'two-ports-per-plug' 
WS-X6148X2-45AF) and get in theory 1,152 ports in a 6513 (one SUP; drop 
96 ports from that to get a redundant SUP).


Looking at another post in the thread, these Moonshot rigs sound 
interesting... 45 server blades in 4.3U.  4.3U?!?!?  Heh, some custom 
rails, I guess, to get ten in 47U.  They claim a quad-server blade, so 
1,800 servers (with networking) in a 47U rack.  Yow.  Cost of several 
hundred thousand dollars for that setup.


The effective limit on subnet size would be of course broadcast 
overhead; 1,000 nodes on a /22 would likely be painfully slow due to 
broadcast overhead alone.




Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Bruce Simpson

On 09/05/2015 23:33, Karl Auer wrote:

IPv4 ARP, for example, hits every on-subnet neighbour; the IPv6
equivalent uses multicast to hit only those neighbours that happen to
share the same 24 low-end L3 address bits as the desired target - a
statistically much smaller subset of on-link neighbours, and in normal
subnets typically only one host. Only chatter that really should go to
all hosts does so - such as router advertisements.



Except when the IPv6 solicited-node multicast groups cause $VENDOR 
switch meltdown:

http://blog.bimajority.org/2014/09/05/the-network-nightmare-that-ate-my-week/


RE: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Jerry J. Anderson, CCIE #5000
 Some people I know (yes really) are building a system that will have
 several thousand little computers in some racks.  Each of the
 computers runs Linux and has a gigabit ethernet interface.  It occurs
 to me that it is unlikely that I can buy an ethernet switch with
 thousands of ports, and even if I could, would I want a Linux system
 to have 10,000 entries or more in its ARP table.

 Most of the traffic will be from one node to another, with
 considerably less to the outside.  Physical distance shouldn't be a
 problem since everything's in the same room, maybe the same rack.

 What's the rule of thumb for number of hosts per switch, cascaded
 switches vs. routers, and whatever else one needs to design a dense
 network like this?  TIA

Brocade's Virtual Cluster Switching (VCS) fabric on their VDX switches is a 
good solution for large, flat data center networks like
this.  It's based on TRILL, so no STP or tree structure is required.  All 
ports are live, as is all inter-switch bandwidth.  Cisco
has a similar solution, as do other vendors.

Thank you,
Jerry

-- 
Jerry J. Anderson, CCIE #5000
Member, Anderson Consulting, LLC
800 Ridgeview Ave, Broomfield, CO  80020-6618
Office: 650-523-2132 Mobile: 773-793-7717
www.linkedin.com/in/AndersonConsultingLLC



Fwd: BSDCon Brazil 2015 - Call for Papers

2015-05-09 Thread Vinícius Zavam
-- Forwarded message --
From: BSDCon Brasil 2015 bsd...@bsdcon.com.br
Date: 2015-05-08 12:52 GMT-03:00
Subject: BSDCon Brazil 2015 - Call for Papers
To:


INTRODUCTION

  BSDCon Brazil (http://www.bsdcon.com.br) is the Brazilian BSD powered
and flavored conference.
  The first edition was in 2005, and it brought together a great mix of
*BSD developers and users for a nice blend of both developer-centric and
user-centric presentations and activities.

  This year BSDCon Brazil will be held from 9-10th October 2015, in
Fortaleza (CE).

OFFICIAL CALL

  We are proudly requesting proposals for presentations.

  We do not require academic or formal papers. If you wish to submit a
formal paper, you are welcome to, but it is not required.

  Presentations are expected to be 45~60 minutes and are to be delivered
in Portuguese (preferred), Spanish or English.

  Presentation proposals should be written with a very strong
technical content bias.
  Proposals of a business development or marketing nature are not
appropriate for this venue and will be rejected!

  Topics of interest to the conference include, but are not limited to:

[*] Automation & Embedded Systems
[*] Best Current Practices
[*] Continuous Integration
[*] Database Management Systems
[*] Device Drivers
[*] Documentation & Translation
[*] Filesystems
[*] Firewall & Routing
[*] Getting Started with *BSD Systems
[*] High Availability
[*] Internet of Things (IoT)
[*] IPv6
[*] Kernel Internals
[*] Logging & Monitoring
[*] Network Applications
[*] Orchestration
[*] Performance
[*] Privacy & Security
[*] Third-Party Applications Management
[*] Virtualization
[*] Wireless Transmissions

  We are waiting to read what you've got! Please send all proposals to:

submissions (@) bsdcon.com.br

  The proposals should contain a short and concise text description. The
submission should also include a short CV of the speaker and an estimate
of the expected travel expenses.

SCHEDULE

  Proposals Acceptance
May 8th 2015 - BEGINS
June 14th 2015 --- ENDS

  Contact to Accepted Proposals Authors
July 13th 2015

NOTES

  * If your talk is accepted, you are expected to present your talk in
person;
  * Speakers do not register or pay conference fees;
  * We can pay for speakers' flights and accommodation;
  * You pay for your own food and drink;
  * There will be a digital projector available in each lecture room.


--
BSDCon Brazil 2015
http://www.bsdcon.com.br


Re: Updated prefix filtering

2015-05-09 Thread Frederik Kriewitz
On Sat, May 9, 2015 at 2:22 AM, Faisal Imtiaz fai...@snappytelecom.net wrote:
 Not sure if you missed it.. there was a discussion on this topic in the 
 recent past...
 I am taking the liberty of re-posting below.. you may find it useful.

You can find the complete thread here:
http://mailman.nanog.org/pipermail/nanog/2015-April/074425.html

Depending on whether you're RIB- and/or FIB-limited, there are a couple
of options.

Regards,
Frederik Kriewitz


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-09 Thread Baldur Norddahl
The standard 48 port with 2 port uplink 1U switch is far from full depth.
You put them in the back of the rack and have the small computers in the
front. You might even turn the switches around, so the ports face inwards
into the rack. The network cables would be very short and go directly from
the mini computers (Raspberry Pi?) to the switch, all within the one unit
shelf.

Assume a maximum-sized rack with a depth of 90 cm; the switches might be 30
cm deep. That leaves 60 cm to mount mini computers, which is approximately
12000 cubic cm of space per rack unit. A Raspberry Pi is approximately 120
cubic cm, so you might be able to fit 48 of them in that space. It would be
a very tight fit indeed, but maybe not impossible.
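
A quick sanity check of that volume figure, assuming roughly 45 cm of usable
width and the standard 4.445 cm per rack unit:

$ awk 'BEGIN { print 45 * 4.445 * 60 }'   # usable width x 1U height x remaining depth, in cm
12001.5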

As to the original question, I would have 48 computers in a subnet. This is
the correct number because you would connect each shelf switch to a
top-of-rack switch, and spend a few extra bucks on the ToR so that it can
do layer 3 routing between shelves.

Regards,

Baldur


Re: [probably spam, from NANOG nanog-boun...@nanog.org]

2015-05-09 Thread John Levine
 No text/plain?  Delete without further ado.

Sadly, it is no longer 1998.

R's,
John


Re:

2015-05-09 Thread Stephen Satchell

On 05/09/2015 08:17 AM, Jim Popovitch wrote:

On Sat, May 9, 2015 at 11:05 AM, Keith Medcalf kmedc...@dessus.com wrote:


No text/plain?  Delete without further ado.



In the past year or so it seems that all RAA Verification emails, or
at least the ones I see, contain no plain text.  :-(

-Jim P.



I'm surprised.  I have set Thunderbird to view messages in plain text 
only.  I get a number of messages that have only one line:  Please view 
this email in your browser.


Right.

(My brother doesn't like this.  He's a process chemist, so he needs to 
use HTML mail to send most business traffic so that formulas and 
equations are sent properly.  I had to remove my HTML-only filter in 
order to receive his e-mails.  Then he set up a GMail account because 
others were also filtering on HTML-only.  Go figure.  Before you ask, I 
have not put that filter back...)


Re:

2015-05-09 Thread Jim Popovitch
On Sat, May 9, 2015 at 11:05 AM, Keith Medcalf kmedc...@dessus.com wrote:

 No text/plain?  Delete without further ado.


In the past year or so it seems that all RAA Verification emails, or
at least the ones I see, contain no plain text.  :-(

-Jim P.


Re: Rasberry pi - high density

2015-05-09 Thread Rafael Possamai
From the work that I've done in the past with clusters, your need for
bandwidth is usually not the biggest issue. When you work with big data,
let's say 500 million data points, most mathematicians would condense it
all down into averages, standard deviations, probabilities, etc., which then
become much smaller to save on your hard disks and also to perform data
analysis with, as well as to transfer these stats from master to nodes and
vice versa. So for one project at a time, your biggest concern is CPU
clock, RAM, interrupts, etc. If you want to run all of the Big Ten's academic
projects in one big cluster, for example, then networking might become an
issue solely due to volume.

The more data you transfer, the longer it would take to perform any
meaningful analysis on it, so really your bottleneck is TFLOPS rather than
packets per second. With Facebook it's the opposite: it's mostly pictures
and videos of cats coming in and out of the server, with lots of reads and
writes on their storage. In that case, switching Tbps of traffic is how
they make money.

A good example is creating a Docker container with your application and
deploying a cluster with CoreOS. You save all that capex and spend by the
hour. I believe Azure and EC2 already have support for CoreOS.
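
A minimal sketch of that pattern (the image name is a placeholder):

docker run -d --name worker1 example/compute-worker:latest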




On Sat, May 9, 2015 at 12:48 AM, Tim Raphael raphael.timo...@gmail.com
wrote:

 The problem is, I can get more processing power and RAM out of two 10RU
 blade chassis and only needing 64 10G ports...

 32 x 256GB RAM per blade = 8.1TB
 32 x 16 cores x 2.4GHz = 1,228GHz
 (not based on current highest possible, just using reasonable specs)

 Needing only 4 QFX5100s which will cost less than a populated 6513 and
 give lower latency. Power, cooling and cost would be lower too.

 RPi = 900MHz and 1GB RAM. So to equal the two chassis, you'll need:

 1228 / 0.9 = 1364 Pis for compute (main performance aspect of a super
 computer) meaning double the physical space required compared to the
 chassis option.

 So yes, infeasible indeed.

 Regards,

 Tim Raphael

  On 9 May 2015, at 1:24 pm, char...@thefnf.org wrote:
 
 
 
  So I just crunched the numbers. How many pies could I cram in a rack?
 
  Check my numbers?
 
  48U rack budget
  6513 15U (48-15) = 33U remaining for pie
  6513 max of 576 copper ports
 
  Pi dimensions:
 
  3.37 l (5 front to back)
  2.21 w (6 wide)
  0.83 h
  25 per U (rounding down for Ethernet cable space etc) = 825 pi
 
  Cable management and heat would probably kill this before it ever
 reached completion, but lol...
 
 
 



RE:

2015-05-09 Thread Keith Medcalf

Ah.  Security hole as designed.  Inline dispositions should be ignored unless 
the recipient specifically requests to see them after viewing the text/plain 
part.  In fact, I would vote for ignoring *everything* except the text/plain 
part unless the recipient specifically requests it after viewing the text/plain 
part.  No text/plain?  Delete without further ado.


 -Original Message-
 From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Mark Andrews
 Sent: Friday, 8 May, 2015 00:39
 To: Paul Ferguson
 Cc: NANOG
 Subject:


 In message mailman.3786.1431050203.12477.na...@nanog.org, Paul Ferguson
 via NANOG writes:
 
  Does anyone else find it weird that the last dozen or so messages
  from the list have been .eml attachments?

 Nanog is encapsulating messages that are DKIM signed.  Your mailer may
 not be properly handling

   Content-Type: message/rfc822
   Content-Disposition: inline

 Mark
 --
 Mark Andrews, ISC
 1 Seymour St., Dundas Valley, NSW 2117, Australia
 PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org





Re: Rasberry pi - high density

2015-05-09 Thread Barry Shein

On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
  
  
  So I just crunched the numbers. How many pies could I cram in a rack?

For another list I just estimated how many M.2 SSD modules one could
cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
heat and connection routing aren't problems), at 500GB/each that's
20TB in a standard 3.5" case.

It's getting weird out there.

-- 
-Barry Shein

The World  | b...@theworld.com   | http://www.TheWorld.com
Purveyors to the Trade | Voice: 800-THE-WRLD| Dial-Up: US, PR, Canada
Software Tool & Die| Public Access Internet | SINCE 1989 *oo*


Re: Rasberry pi - high density

2015-05-09 Thread Eugeniu Patrascu
On Sat, May 9, 2015 at 9:55 PM, Barry Shein b...@world.std.com wrote:


 On May 9, 2015 at 00:24 char...@thefnf.org (char...@thefnf.org) wrote:
  
  
   So I just crunched the numbers. How many pies could I cram in a rack?

 For another list I just estimated how many M.2 SSD modules one could
 cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
 heat and connection routing aren't problems), at 500GB/each that's
 20TB in a standard 3.5" case.

 It's getting weird out there.


I think the next logical step in servers would be to remove the traditional
hard drive cages and put in SSD module slots that can be hot swapped. Imagine
inserting small SSD modules on the front side of the servers and directly
connecting them via PCIe to the motherboard. No more bottlenecks, and a
software RAID of some sort would actually make a lot more sense than the
current controller-based solutions.