[no subject]

2015-05-08 Thread Mark Andrews

In message mailman.3786.1431050203.12477.na...@nanog.org, Paul Ferguson via NANOG writes:
 
 Does anyone else find it weird that the last dozen or so messages
 from the list have been .eml attachments?

NANOG is encapsulating messages that are DKIM signed.  Your mailer may
not be properly handling:

Content-Type: message/rfc822
Content-Disposition: inline

Mark
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org


Re: Mailing list posts wrapped

2015-05-08 Thread Mark Andrews

In message baa37148-8859-42b2-8f23-a4a4b2d29...@gawul.net, Andrew Koch writes:
 There was an inadvertent DMARC handling setting applied to all posts. This
 has been corrected. Sorry for the disruption.
 
 Andrew Koch

It was also not copying the Subject: to the outer message's headers.
It would pay to check that this is being done for any message getting
encapsulated.
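A minimal sketch of the fix being described, using Python's stdlib email package (the list addresses and the `encapsulate` helper are hypothetical, not NANOG's actual software): when a DKIM-signed post is wrapped as message/rfc822, the inner Subject has to be copied onto the outer message explicitly.

```python
from email.message import EmailMessage

def encapsulate(inner: EmailMessage) -> EmailMessage:
    """Wrap a DKIM-signed post as message/rfc822 without losing the Subject."""
    outer = EmailMessage()
    outer["From"] = "nanog-bounces@example.org"   # hypothetical list address
    outer["To"] = "nanog@example.org"
    # The step the list software was missing: copy the inner Subject out.
    outer["Subject"] = inner["Subject"]
    # add_attachment() with a Message object emits Content-Type:
    # message/rfc822; disposition matches the headers quoted above.
    outer.add_attachment(inner, disposition="inline")
    return outer

inner = EmailMessage()
inner["Subject"] = "Re: Alcatel-Lucent 7750 Service Router (SR)"
inner["From"] = "poster@example.net"
inner.set_content("original post body")

outer = encapsulate(inner)
```

A mailer that can't render message/rfc822 inline will show the post as an .eml attachment, which is the symptom reported above.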

-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org


Re: Alcatel-Lucent 7750 Service Router (SR)

2015-05-08 Thread Watson, Bob
Carrier OEM churn (turnover/agitation cycles).
First movers on features happen and leapfrog, but the ones that matter get
adopted across the line in time.


 On May 7, 2015, at 8:40 PM, Josh Reynolds j...@spitwspots.com wrote:
 
 What churn rates are you talking about?
 
 Josh Reynolds
 CIO, SPITwSPOTS
 www.spitwspots.com
 
 On 05/07/2015 05:36 PM, Watson, Bob wrote:
 Many of these churn rates result from problems self-inflicted, hence all the
 dramatic SDN promises, the popularity of abstractions, API all the things,
 let's go YANG/NETCONF and retrofit every IETF standard.  There are benefits,
 but gotta rant a little.  What's better than correct?  Well, over-correct, of
 course.
 
 
 
 
 On May 7, 2015, at 12:17 PM, Josh Reynolds j...@spitwspots.com wrote:
 
 You know where these people wouldn't fit? W/ISPs.
 
 Every three years or so you are forklifting the majority of your wireless 
 PtMP for either a new series or a totally different vendor. New backhaul 
 vendors often. You're building AC and DC power plants. You likely touch 
 Cisco, juniper, HP, mikrotik, ubiquiti, Linux, windows, *BSD/pfsense, 
 lucent, accedian/ciena, etc due to various client and network requirements 
 all in the same week, AND you have to make them work together nicely :)
 
 It's not the environment for somebody like that, and I truly don't 
 understand how people of that.. caliber end up working on large scale 
 WANs and global transit networks.
 
 Frankly, it scares me a bit.
 
 On May 7, 2015 9:07:35 AM AKDT, Craig cvulja...@gmail.com wrote:
 we do cry when we interview people that claim to have advanced
 knowledge of BGP and we ask them some very basic BGP questions, and we
 get
 a blank stare.
 
 On Thu, May 7, 2015 at 12:49 PM, Rob Seastrom r...@seastrom.com wrote:
 
 Josh Reynolds j...@spitwspots.com writes:
 
 It really bothers me to see that people in this industry are so
 worried about a change of syntax or terminology. If there's one
 thing about the big vendors that bothers me, it's that these
 batteries of vendor specific tests have allowed many techs to get
 lazy. They simply can't seem to operate well, if at all, in a
 non-Cisco (primarily) environment.
 If that bothers you, I recommend you not look at what passes for a
 system administrator these days.  It will make you cry.
 
 -r
 -- 
 Sent from my Android device with K-9 Mail. Please excuse my brevity.
 


Re: Huawei and ZTE Routers

2015-05-08 Thread Bacon Zombie
You could try cross-posting to UKNOF since BT use Huawei in their DSLAMs.

http://lists.uknof.org.uk/cgi-bin/mailman/listinfo/uknof/
On 7 May 2015 21:18, ML m...@kenweb.org wrote:

 On 5/7/2015 2:25 PM, Daniel Corbe wrote:

 Colton Conor colton.co...@gmail.com writes:

  The other thread about the Alcatel-Lucent routers has been pleasantly
 delightful. Our organization used to believe that Juniper, Cisco, and
 Brocade were the only true vendors for carrier grade routing, but now we
 are going to throw Alcatel-Lucent into the mix.

 ZTE and Huawei, the big chinese vendors, have also been mentioned to us.
 I
 know there are large national security issues with using these vendors in
 the US, but I know Level3 and other large American vendors use Huawei and
 ZTE in their networks.

 How do their products perform? How are they compared to Cisco and Juniper
 on the performance side of the house? Is their pricing really half or
 less
 of that of Cisco and Juniper? Is it worth using these vendors or not
 worth
 the hassle?

 I don't know much about Huawei but be wary of ZTE's claims.  They love
 their vendor lock-in.  They have a bad habit of giving away hardware for
 next to nothing and then ratcheting up support costs.

 Opex needs to be a consideration when selecting an equipment vendor as
 well as capex.


 2nd hand information:

 Apparently the NMS for ZTE's GPON gear is an ugly contraption.
 When upgrades are needed:
 we have to deploy a series of convoluted batch files,
 and it has to be installed in whatever directory they installed it to in
 China, because paths are hardcoded in the app.


 Hopefully there is no crossover into ZTE's other products.




OSP list?

2015-05-08 Thread Dave Allen
Does anyone know of a mailing list or group devoted to the topic of outside
plant fiber network design and construction?


Re: OSP list?

2015-05-08 Thread Josh Reynolds
WISPA has a fiber list for FTTx and hybrid deployments. It's not the 
most active thing in the world, but there can still be good stuff on there.


Josh Reynolds
CIO, SPITwSPOTS
www.spitwspots.com

On 05/08/2015 08:57 AM, Dave Allen wrote:

Does anyone know of a mailing list or group devoted to the topic of outside
plant fiber network design and construction?




Re: OSP list?

2015-05-08 Thread chris
I would also be interested
On May 8, 2015 12:59 PM, Dave Allen da...@staff.gwi.net wrote:

 Does anyone know of a mailing list or group devoted to the topic of outside
 plant fiber network design and construction?



Re: OSP list?

2015-05-08 Thread Nicholas Schmidt
+1

On Fri, May 8, 2015 at 1:02 PM, chris tknch...@gmail.com wrote:

 I would also be interested
 On May 8, 2015 12:59 PM, Dave Allen da...@staff.gwi.net wrote:

  Does anyone know of a mailing list or group devoted to the topic of
 outside
  plant fiber network design and construction?
 



Re: IP DSCP across the Internet

2015-05-08 Thread Jay Hennigan

On 5/7/15 3:05 AM, Mark Tinka wrote:


And this is what sales and marketing droids don't get - so-called
Premium Internet products abound that don't really mean anything.

The competition that offer these products are basically hoping nothing
happens, and that when it does, it seems as palatable as flying First
Class in a plane that's going down.


Which is usually a bad thing. I've never heard of an airplane backing 
into a mountain.


--
Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
Impulse Internet Service  -  http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV


Re: OSP list?

2015-05-08 Thread Ilissa Miller
This could be a good resource - may have to dig a little:  
http://www.ospmag.com/


On May 8, 2015, at 1:18 PM, Josh Reynolds wrote:

 WISPA has a fiber list for FTTx and hybrid deployments. It's not the most 
 active thing in the world, but there can still be good stuff on there.
 
 Josh Reynolds
 CIO, SPITwSPOTS
 www.spitwspots.com
 
 On 05/08/2015 08:57 AM, Dave Allen wrote:
 Does anyone know of a mailing list or group devoted to the topic of outside
 plant fiber network design and construction?
 


Ilissa Miller
CEO, iMiller Public Relations
President, Northeast DAS + Small Cell Association
Sponsorship Sales Director, NANOG
Tel:  914.315.6424
Cel:  917.743.0931
eMail:  ili...@imillerpr.com
eMail:  imil...@nanog.org

www.imillerpr.com
www.northeastdas.com
www.nanog.org









Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Brandon Martin

On 05/08/2015 02:53 PM, John Levine wrote:

Some people I know (yes really) are building a system that will have
several thousand little computers in some racks.  Each of the
computers runs Linux and has a gigabit ethernet interface.  It occurs
to me that it is unlikely that I can buy an ethernet switch with
thousands of ports, and even if I could, would I want a Linux system
to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA


Unless you have some dire need to get these all on the same broadcast 
domain, those kinds of numbers on a single L2 would send me running for 
the hills, for lots of reasons, some of which you've identified.


I'd find a good L3 switch, put no more than ~200-500 IPs on each L2, and 
let the switch handle gluing it together at L3.  With the proper 
hardware, this is a fully line-rate operation and should have no real 
downsides aside from splitting up the broadcast domains (if you do need 
multicast, make sure your gear can do it).  With a divide-and-conquer 
approach, you shouldn't have problems fitting the L2+L3 tables into even 
a pretty modest L3 switch.
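As a back-of-the-envelope sketch of that divide-and-conquer split (a standalone illustration, not from the thread: the 10.0.0.0 block, the ~10,000 host count, and one /24 per L2 are all assumptions):

```python
import ipaddress
import math

HOSTS = 10_000
HOSTS_PER_L2 = 254        # one /24 per L2 domain, inside the ~200-500 range

# Broadcast domains needed to hold every host:
subnets_needed = math.ceil(HOSTS / HOSTS_PER_L2)

# Round up to a power of two so one supernet covers them all:
supernet_prefix = 24 - math.ceil(math.log2(subnets_needed))
supernet = ipaddress.ip_network(f"10.0.0.0/{supernet_prefix}")
vlans = list(supernet.subnets(new_prefix=24))

# The L3 switch routes between these /24s; each L2 stays small enough
# that ARP tables and broadcast traffic are a non-issue.
```

With these numbers the plan works out to 40 populated /24s carved from a /18, with room left over for growth.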


The densest chassis switches I know of are going to get you about 96 ports 
per RU (48 ports each on a half-width blade, but you need breakout 
panels to get standard RJ45 8P8C connectors as the blades have MRJ21s), 
less rack overhead for power supplies, management, etc.  That should 
get you ~2000 ports per rack [1].  Such switches can be quite expensive. 
 The trend seems to be toward stacking pizza boxes these days, though. 
 Get the number of ports you need per rack (you're presumably not 
putting all 10,000 nodes in a single rack) and aggregate up one or two 
layers.  This gives you a pretty good candidate for your L2/L3 split.


[1] Purely as an example, you can cram 3x Brocade MLX-16 chassis into a 
42U rack (with 0RU to spare).  That gives you 48 slots for line cards. 
Leaving at least one slot in each chassis for 10Gb or 100Gb uplinks to 
something else, 45x48 = 2160 1000BASE-T ports (electrically) in a 42U 
rack, and you'll need 45 more RU somewhere for breakout patch panels!

--
Brandon Martin


Mailing list messages with attachments (was Re:)

2015-05-08 Thread Larry Sheldon
Be advised that I have made changes to my personal spam traps to bin 
mailing list messages with attachments.


--
sed quis custodiet ipsos custodes? (Juvenal)


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Rafael Possamai
- The more switches a packet has to go through, the higher the latency, so
your response times may deteriorate if you cascade too many switches.
Legend says up to 4 is a good number; any further and you risk creating a big
mess.

- The more switches you add, the higher your bandwidth utilized by
broadcasts in the same subnet.
http://en.wikipedia.org/wiki/Broadcast_radiation

- If you have only one connection between each switch, each switch is going
to be limited to that rate (1gbps in this case), possibly creating a
bottleneck depending on your application and how exactly it behaves.
Consider aggregating uplinks.

- Bundling too many Ethernet cables will cause interference (cross-talk),
so keep that in mind. I'd purchase F/S/FTP cables and the like.

Here I am going off on a tangent: if your friends want to build a
supercomputer, then there's a way to calculate the most efficient number of
nodes given your constraints (e.g. linear optimization). This could save
you time, money and headaches. An example: maximize the number of TFLOPS
while minimizing number of nodes (i.e. number of switch ports). Just a
quick thought.






On Fri, May 8, 2015 at 1:53 PM, John Levine jo...@iecc.com wrote:

 Some people I know (yes really) are building a system that will have
 several thousand little computers in some racks.  Each of the
 computers runs Linux and has a gigabit ethernet interface.  It occurs
 to me that it is unlikely that I can buy an ethernet switch with
 thousands of ports, and even if I could, would I want a Linux system
 to have 10,000 entries or more in its ARP table.

 Most of the traffic will be from one node to another, with
 considerably less to the outside.  Physical distance shouldn't be a
 problem since everything's in the same room, maybe the same rack.

 What's the rule of thumb for number of hosts per switch, cascaded
 switches vs. routers, and whatever else one needs to design a dense
 network like this?  TIA

 R's,
 John



Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Miles Fidelman

John Levine wrote:

Some people I know (yes really) are building a system that will have
several thousand little computers in some racks.  Each of the
computers runs Linux and has a gigabit ethernet interface.  It occurs
to me that it is unlikely that I can buy an ethernet switch with
thousands of ports, and even if I could, would I want a Linux system
to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA




It's become fairly commonplace to build supercomputers out of clusters 
of 100s, or 1000s of commodity PCs, see, for example:

www.rocksclusters.org
http://www.rocksclusters.org/presentations/tutorial/tutorial-1.pdf
or
http://www.dodlive.mil/files/2010/12/CondorSupercomputerbrochure_101117_kb-3.pdf 
(a cluster of 1760 playstations at AFRL Rome Labs)


Interestingly, all the documentation I can find is heavy on the software 
layers used to cluster resources - but there's little about hardware 
configuration other than pretty pictures of racks with lots of CPUs and 
lots of wires.


If the people you know are trying to do something similar - it might be 
worth some nosing around the Rocks community, or some phone calls.  I 
expect that interconnect architecture and latency might be a bit of an 
issue for this sort of application.


Miles Fidelman




--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra





Re: OSP list?

2015-05-08 Thread Ilissa Miller
I think you did!  It was online earlier - their magazine is online too: 
http://digital.ospmag.com/#pageSet=0contentItem=0

They do have a directory issue for their magazine ... 
On May 8, 2015, at 2:51 PM, chris wrote:

 I am getting site offline... Did we kill it? Lol
 
 On Fri, May 8, 2015 at 1:34 PM, Ilissa Miller ili...@imillerpr.com wrote:
 This could be a good resource - may have to dig a little:  
 http://www.ospmag.com/
 
 
 On May 8, 2015, at 1:18 PM, Josh Reynolds wrote:
 
  WISPA has a fiber list for FTTx and hybrid deployments. It's not the most 
  active thing in the world, but there can still be good stuff on there.
 
  Josh Reynolds
  CIO, SPITwSPOTS
  www.spitwspots.com
 
  On 05/08/2015 08:57 AM, Dave Allen wrote:
  Does anyone know of a mailing list or group devoted to the topic of outside
  plant fiber network design and construction?
 
 
 
 Ilissa Miller
 CEO, iMiller Public Relations
 President, Northeast DAS + Small Cell Association
 Sponsorship Sales Director, NANOG
 Tel:  914.315.6424
 Cel:  917.743.0931
 eMail:  ili...@imillerpr.com
 eMail:  imil...@nanog.org
 
 www.imillerpr.com
 www.northeastdas.com
 www.nanog.org
 
 
 
 
 
 
 
 


Ilissa Miller
CEO, iMiller Public Relations
President, Northeast DAS + Small Cell Association
Sponsorship Sales Director, NANOG
Tel:  914.315.6424
Cel:  917.743.0931
eMail:  ili...@imillerpr.com
eMail:  imil...@nanog.org

www.imillerpr.com
www.northeastdas.com
www.nanog.org









Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Christopher Morrow
On Fri, May 8, 2015 at 2:53 PM, John Levine jo...@iecc.com wrote:
 Some people I know (yes really) are building a system that will have
 several thousand little computers in some racks.  Each of the
 computers runs Linux and has a gigabit ethernet interface.  It occurs
 to me that it is unlikely that I can buy an ethernet switch with
 thousands of ports, and even if I could, would I want a Linux system
 to have 10,000 entries or more in its ARP table.

 Most of the traffic will be from one node to another, with
 considerably less to the outside.  Physical distance shouldn't be a
 problem since everything's in the same room, maybe the same rack.

 What's the rule of thumb for number of hosts per switch, cascaded
 switches vs. routers, and whatever else one needs to design a dense
 network like this?  TIA

Consider also the pain of IPv6's link-local gamery.
Look at the nvo3 WG and its predecessor (which shouldn't have really
existed anyway, but whatever, and apparently my mind helped me forget
about the pain involved with this WG).

I think: why one LAN?  Why not just small (/26 or /24 max?) subnet
sizes... or do it all in v6 on /64s with 1/rack or 1/~200 hosts.
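The one-/64-per-rack idea is easy to sketch with the stdlib ipaddress module (the 2001:db8::/56 documentation prefix and the ~200 hosts/rack figure are illustrative assumptions):

```python
import ipaddress

# One /64 per rack, carved from a /56.  2001:db8::/56 is the IPv6
# documentation prefix, standing in for whatever block is available.
site = ipaddress.ip_network("2001:db8::/56")
racks = list(site.subnets(new_prefix=64))   # 256 rack-sized /64s

# 10,000 hosts at ~200 per rack is 50 racks; a single /56 is ample,
# and each rack's neighbor table only ever sees its own ~200 hosts.
assert len(racks) >= 10_000 // 200
```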


Weekly Routing Table Report

2015-05-08 Thread Routing Analysis Role Account
This is an automated weekly mailing describing the state of the Internet
Routing Table as seen from APNIC's router in Japan.

The posting is sent to APOPS, NANOG, AfNOG, AusNOG, SANOG, PacNOG,
CaribNOG and the RIPE Routing Working Group.

Daily listings are sent to bgp-st...@lists.apnic.net

For historical data, please see http://thyme.rand.apnic.net.

If you have any comments please contact Philip Smith pfsi...@gmail.com.

Routing Table Report   04:00 +10GMT Sat 09 May, 2015

Report Website: http://thyme.rand.apnic.net
Detailed Analysis:  http://thyme.rand.apnic.net/current/

Analysis Summary


BGP routing table entries examined:  543336
Prefixes after maximum aggregation (per Origin AS):  207090
Deaggregation factor:  2.62
Unique aggregates announced (without unneeded subnets):  264470
Total ASes present in the Internet Routing Table: 50256
Prefixes per ASN: 10.81
Origin-only ASes present in the Internet Routing Table:   36631
Origin ASes announcing only one prefix:   16305
Transit ASes present in the Internet Routing Table:6324
Transit-only ASes present in the Internet Routing Table:178
Average AS path length visible in the Internet Routing Table:   4.5
Max AS path length visible:  44
Max AS path prepend of ASN ( 55944)  41
Prefixes from unregistered ASNs in the Routing Table:  1204
Unregistered ASNs in the Routing Table: 416
Number of 32-bit ASNs allocated by the RIRs:   9411
Number of 32-bit ASNs visible in the Routing Table:7301
Prefixes from 32-bit ASNs in the Routing Table:   26590
Number of bogon 32-bit ASNs visible in the Routing Table: 4
Special use prefixes present in the Routing Table:0
Prefixes being announced from unallocated address space:387
Number of addresses announced to Internet:   2741486624
Equivalent to 163 /8s, 103 /16s and 196 /24s
Percentage of available address space announced:   74.0
Percentage of allocated address space announced:   74.0
Percentage of available address space allocated:  100.0
Percentage of address space in use by end-sites:   97.3
Total number of prefixes smaller than registry allocations:  182274

APNIC Region Analysis Summary
-

Prefixes being announced by APNIC Region ASes:   134135
Total APNIC prefixes after maximum aggregation:   39069
APNIC Deaggregation factor:3.43
Prefixes being announced from the APNIC address blocks:  140281
Unique aggregates announced from the APNIC address blocks:56940
APNIC Region origin ASes present in the Internet Routing Table:5035
APNIC Prefixes per ASN:   27.86
APNIC Region origin ASes announcing only one prefix:   1207
APNIC Region transit ASes present in the Internet Routing Table:879
Average APNIC Region AS path length visible:4.4
Max APNIC Region AS path length visible: 44
Number of APNIC region 32-bit ASNs visible in the Routing Table:   1422
Number of APNIC addresses announced to Internet:  747744256
Equivalent to 44 /8s, 145 /16s and 172 /24s
Percentage of available APNIC address space announced: 87.4

APNIC AS Blocks4608-4864, 7467-7722, 9216-10239, 17408-18431
(pre-ERX allocations)  23552-24575, 37888-38911, 45056-46079, 55296-56319,
   58368-59391, 63488-64098, 131072-135580
APNIC Address Blocks 1/8,  14/8,  27/8,  36/8,  39/8,  42/8,  43/8,
49/8,  58/8,  59/8,  60/8,  61/8, 101/8, 103/8,
   106/8, 110/8, 111/8, 112/8, 113/8, 114/8, 115/8,
   116/8, 117/8, 118/8, 119/8, 120/8, 121/8, 122/8,
   123/8, 124/8, 125/8, 126/8, 133/8, 150/8, 153/8,
   163/8, 171/8, 175/8, 180/8, 182/8, 183/8, 202/8,
   203/8, 210/8, 211/8, 218/8, 219/8, 220/8, 221/8,
   222/8, 223/8,

ARIN Region Analysis Summary


Prefixes being announced by ARIN Region ASes:178748
Total ARIN prefixes after maximum aggregation:87933
ARIN Deaggregation factor: 2.03
Prefixes being announced from the ARIN address blocks:   181253
Unique aggregates announced from the ARIN address blocks: 84396
ARIN Region origin ASes present in the Internet Routing Table:16587
ARIN Prefixes per 

Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread John Levine
Some people I know (yes really) are building a system that will have
several thousand little computers in some racks.  Each of the
computers runs Linux and has a gigabit ethernet interface.  It occurs
to me that it is unlikely that I can buy an ethernet switch with
thousands of ports, and even if I could, would I want a Linux system
to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA

R's,
John


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Dave Taht
On Fri, May 8, 2015 at 11:53 AM, John Levine jo...@iecc.com wrote:
 Some people I know (yes really) are building a system that will have
 several thousand little computers in some racks.

Very cool-ly crazy.

 Each of the
 computers runs Linux and has a gigabit ethernet interface.  It occurs
 to me that it is unlikely that I can buy an ethernet switch with
 thousands of ports, and even if I could, would I want a Linux system
 to have 10,000 entries or more in its ARP table.

Agreed. :) You don't really want 10,000 entries in a routing FIB
table either, but I was seriously encouraged by the work going
on in linux 4.0 and 4.1 to improve those lookups.

https://netdev01.org/docs/duyck-fib-trie.pdf

I'd love to know the actual scalability of some modern
routing protocols (isis, babel, ospfv3, olsrv2, rpl) with that
many nodes too

 Most of the traffic will be from one node to another, with
 considerably less to the outside.  Physical distance shouldn't be a
 problem since everything's in the same room, maybe the same rack.

That is an awful lot of ports to fit in a rack (48 ports x 36 2U slots
in the rack (and is that too high?) = 1728 ports).  A thought: you could
make it meshier using multiple interfaces per tiny Linux box.  Put, say,
3-6 interfaces on each and have a very few switches interconnecting given
clusters (and multiple paths to each switch).  That would reduce your ARP
table (and FIB table) by a lot, at the cost of adding hops...

 What's the rule of thumb for number of hosts per switch, cascaded
 switches vs. routers, and whatever else one needs to design a dense
 network like this?  TIA

max per vlan 4096. Still a lot.

Another approach might be max density on a switch (48?) per cluster,
routed (not switched) 10GigE
to another 10GigE+ switch.

I'd love to know the rule of thumbs here also, I imagine some rules
must exist for those in the VM
or VXLAN worlds.

 R's,
 John



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


Re: OSP list?

2015-05-08 Thread chris
I am getting site offline... Did we kill it? Lol

On Fri, May 8, 2015 at 1:34 PM, Ilissa Miller ili...@imillerpr.com wrote:

 This could be a good resource - may have to dig a little:
 http://www.ospmag.com/


 On May 8, 2015, at 1:18 PM, Josh Reynolds wrote:

  WISPA has a fiber list for FTTx and hybrid deployments. It's not the
 most active thing in the world, but there can still be good stuff on there.
 
  Josh Reynolds
  CIO, SPITwSPOTS
  www.spitwspots.com
 
  On 05/08/2015 08:57 AM, Dave Allen wrote:
  Does anyone know of a mailing list or group devoted to the topic of
 outside
  plant fiber network design and construction?
 


 Ilissa Miller
 CEO, iMiller Public Relations
 President, Northeast DAS + Small Cell Association
 Sponsorship Sales Director, NANOG
 Tel:  914.315.6424
 Cel:  917.743.0931
 eMail:  ili...@imillerpr.com
 eMail:  imil...@nanog.org

 www.imillerpr.com
 www.northeastdas.com
 www.nanog.org










RE: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Chuck Church
Sounds interesting.  I wouldn't do more than a /23 (assuming IPv4) per subnet.  
Join them all together with a fast L3 switch.  I'm still trying to visualize 
what several thousand tiny computers in a single rack might look like.  Other 
than a cabling nightmare.  1000 RJ-45 switch ports is a good chunk of a rack 
itself.

Chuck

-Original Message-
From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of John Levine
Sent: Friday, May 08, 2015 2:53 PM
To: nanog@nanog.org
Subject: Thousands of hosts on a gigabit LAN, maybe not

Some people I know (yes really) are building a system that will have several 
thousand little computers in some racks.  Each of the computers runs Linux and 
has a gigabit ethernet interface.  It occurs to me that it is unlikely that I 
can buy an ethernet switch with thousands of ports, and even if I could, would 
I want a Linux system to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with considerably less to 
the outside.  Physical distance shouldn't be a problem since everything's in 
the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded switches vs. 
routers, and whatever else one needs to design a dense network like this?  TIA

R's,
John



RE: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Sameer Khosla
You may want to look at a Clos / leaf-spine architecture.  This design tends to 
be optimized for east-west traffic, scales easily as bandwidth needs grow, and 
keeps things simple: L2/L3 boundary on the ToR switch, L3 ECMP from leaf to 
spine.  Not a lot of complexity, and it scales fairly high on both leafs and 
spines.

Sk.
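Rough port arithmetic for such a fabric (the 48x1G + 4x10G leaf profile and the ~10,000 host count are assumptions carried over from the thread, not a vendor spec):

```python
import math

HOSTS = 10_000
LEAF_ACCESS = 48     # 1G host-facing ports per assumed ToR leaf
LEAF_UPLINKS = 4     # 10G uplinks per leaf, one to each spine

leaves = math.ceil(HOSTS / LEAF_ACCESS)             # ToR switches needed
oversub = (LEAF_ACCESS * 1) / (LEAF_UPLINKS * 10)   # down:up bandwidth ratio

# With one uplink per spine, each of the 4 spines terminates a port per
# leaf -- so at this scale the spines must be chassis boxes, or the
# design needs a third tier.
spine_ports_per_spine = leaves
```

That works out to roughly 209 leaves at a modest 1.2:1 oversubscription, which is why the L2/L3 boundary lands naturally at the ToR.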

-Original Message-
From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of John Levine
Sent: Friday, May 08, 2015 2:53 PM
To: nanog@nanog.org
Subject: Thousands of hosts on a gigabit LAN, maybe not

Some people I know (yes really) are building a system that will have several 
thousand little computers in some racks.  Each of the computers runs Linux and 
has a gigabit ethernet interface.  It occurs to me that it is unlikely that I 
can buy an ethernet switch with thousands of ports, and even if I could, would 
I want a Linux system to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with considerably less to 
the outside.  Physical distance shouldn't be a problem since everything's in 
the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded switches vs. 
routers, and whatever else one needs to design a dense network like this?  TIA

R's,
John


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Niels Bakker

* lists.na...@monmotha.net (Brandon Martin) [Fri 08 May 2015, 21:42 CEST]:
[1] Purely as an example, you can cram 3x Brocade MLX-16 chassis into 
a 42U rack (with 0RU to spare).  That gives you 48 slots for line cards.


You really can't.  Cables need to come from the top, not from the 
sides, or they'll block the path of other linecards.



-- Niels.


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread John Levine
 to have 10,000 entries or more in its ARP table.

Agreed. :) You don't really want 10,000 entries in a routing FIB
table either, but I was seriously encouraged by the work going
on in linux 4.0 and 4.1 to improve those lookups.

One obvious way to deal with that is to put some manageable number of
hosts on a subnet and route traffic between the subnets.  I think we
can assume they'll all have 10/8 addresses, and I'm not too worried
about performance to the outside world, just within the network.

R's,
John


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Blake Hudson
Linux has a (configurable) limit on the neighbor table. I know in RHEL 
variants, the default has been 1024 neighbors for a while.


net.ipv4.neigh.default.gc_thresh3
net.ipv4.neigh.default.gc_thresh2
net.ipv4.neigh.default.gc_thresh1

net.ipv6.neigh.default.gc_thresh3
net.ipv6.neigh.default.gc_thresh2
net.ipv6.neigh.default.gc_thresh1

These may be rough guidelines for performance or arbitrary limits 
someone thought would be a good idea. Either way, you'll need to 
increase these numbers if you're running IP on Linux at this scale.
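A sysctl fragment along these lines would raise those ceilings; the specific values below are illustrative for a node expected to hold ~10k neighbors, not tuned recommendations.

```
# /etc/sysctl.d/90-neigh.conf -- illustrative values for ~10k neighbors
net.ipv4.neigh.default.gc_thresh1 = 4096    # below this, entries are never GC'd
net.ipv4.neigh.default.gc_thresh2 = 8192    # soft limit; GC kicks in above this
net.ipv4.neigh.default.gc_thresh3 = 16384   # hard cap on table size
net.ipv6.neigh.default.gc_thresh1 = 4096
net.ipv6.neigh.default.gc_thresh2 = 8192
net.ipv6.neigh.default.gc_thresh3 = 16384
```

Apply with `sysctl --system` (or at boot).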


Although not explicitly stated, I would assume that these computers may 
be virtualized or inside some sort of blade chassis (which reduces the 
number of physical cables to a switch). Strictly speaking, I see no 
hardware limitation in your way, as most top-of-rack switches will 
easily do a few thousand or tens of thousands of MAC entries, and a few 
thousand hosts can fit inside a single IPv4 or IPv6 subnet. There are some 
pretty dense switches if you actually do need 1000 ports, but as others 
have stated, you'll utilize a good portion of the rack in cable and 
connectors.


--Blake


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Miles Fidelman
Forgot to mention - you might also want to check out Beowulf clusters - 
there's an email list at http://www.beowulf.org/ - probably some useful 
info in the list archives, maybe a good place to post your query.


Miles

Miles Fidelman wrote:

John Levine wrote:

Some people I know (yes really) are building a system that will have
several thousand little computers in some racks.  Each of the
computers runs Linux and has a gigabit ethernet interface.  It occurs
to me that it is unlikely that I can buy an ethernet switch with
thousands of ports, and even if I could, would I want a Linux system
to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA




It's become fairly commonplace to build supercomputers out of clusters 
of 100s, or 1000s of commodity PCs, see, for example:

www.rocksclusters.org
http://www.rocksclusters.org/presentations/tutorial/tutorial-1.pdf
or
http://www.dodlive.mil/files/2010/12/CondorSupercomputerbrochure_101117_kb-3.pdf 
(a cluster of 1760 playstations at AFRL Rome Labs)


Interestingly, all the documentation I can find is heavy on the 
software layers used to cluster resources - but there's little about 
hardware configuration other than pretty pictures of racks with lots 
of CPUs and lots of wires.


If the people you know are trying to do something similar - it might 
be worth some nosing around the Rocks community, or some phone calls.  
I expect that interconnect architecture and latency might be a bit of 
an issue for this sort of application.


Miles Fidelman







--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra



RE: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Brian R
Agree with many of the other comments.  Smaller subnets (the /23 suggestion 
sounds good) with L3 between the subnets.
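A quick sketch of how that /23 carving works out (the 10.0.0.0/16 block and the 5,000-host count are arbitrary examples, not from the thread):

```python
import ipaddress
import math

# Sketch: how many /23s a few thousand hosts would need, and how many
# a typical RFC 1918 /16 can supply.  Example numbers are assumptions.
supernet = ipaddress.ip_network("10.0.0.0/16")
subnets = list(supernet.subnets(new_prefix=23))   # 128 x /23 available
usable = subnets[0].num_addresses - 2             # 510 usable IPv4 hosts

total_hosts = 5000                                # assumed host count
needed = math.ceil(total_hosts / usable)
print(f"{needed} x /23 for {total_hosts} hosts "
      f"({usable} usable each; {len(subnets)} available in the /16)")
```

Ten /23s with L3 between them keeps each broadcast/ARP domain to ~500 hosts.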
 
<off topic>
The first thing that came to mind was "Bitcoin farm!", then "Ask
Bitmaintech", and then I'd be more worried about the number of fans and A/C
units.
</off topic>
 
Brian
 
 Date: Fri, 8 May 2015 18:53:03 +
 From: jo...@iecc.com
 To: nanog@nanog.org
 Subject: Thousands of hosts on a gigabit LAN, maybe not
 
 Some people I know (yes really) are building a system that will have
 several thousand little computers in some racks.  Each of the
 computers runs Linux and has a gigabit ethernet interface.  It occurs
 to me that it is unlikely that I can buy an ethernet switch with
 thousands of ports, and even if I could, would I want a Linux system
 to have 10,000 entries or more in its ARP table.
 
 Most of the traffic will be from one node to another, with
 considerably less to the outside.  Physical distance shouldn't be a
 problem since everything's in the same room, maybe the same rack.
 
 What's the rule of thumb for number of hosts per switch, cascaded
 switches vs. routers, and whatever else one needs to design a dense
 network like this?  TIA
 
 R's,
 John
  

Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Brandon Martin

On 05/08/2015 04:17 PM, Niels Bakker wrote:

* lists.na...@monmotha.net (Brandon Martin) [Fri 08 May 2015, 21:42 CEST]:

[1] Purely as an example, you can cram 3x Brocade MLX-16 chassis into
a 42U rack (with 0RU to spare).  That gives you 48 slots for line cards.


You really can't.  Cables need to come from the top, not from the sides,
or they'll block the path of other linecards.


Hum, good point.  Cram may not be a strong enough term :)  It'd work 
on the horizontal slot chassis types (4/8 slot), but not the vertical 
(16/32 slot).


You might be able to make it fit if you didn't care about 
maintainability, I guess.  There's some room to maneuver if you don't 
care about being able to get the power supplies out, too.  I don't 
recommend this approach...  Those MRJ21 cables are not easy to work with 
as it is.

--
Brandon Martin


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread charles

On 2015-05-08 13:53, John Levine wrote:

Some people I know (yes really) are building a system that will have
several thousand little computers in some racks.



How many racks?
How many computers per rack unit? How many computers per rack?
(How are you handling power?)
How big is each computer?

Do you want network cabling to be contained to each rack? Or do you want 
to run the cable to a central networking/switching rack?


Hmm, even a 6513 fully populated with PoE 48-port line cards (which 
could let you do power and network over the same cable; does PoE 
work on gigabit these days?) would get you 12*48 = 576 ports.


So 48U rack - 15U (I think the 6513 is 15U total) leaves you 33U. 
Can you fit 576 systems in 33U?



  Each of the

computers runs Linux and has a gigabit ethernet interface.




Copper?

  It occurs

to me that it is unlikely that I can buy an ethernet switch with
thousands of ports


6513?


, and even if I could, would I want a Linux system

to have 10,000 entries or more in its ARP table.



Add more ram. That's always the answer. LOL.



Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA



We need more data.



Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Roland Dobbins


On 9 May 2015, at 1:53, John Levine wrote:

What's the rule of thumb for number of hosts per switch, cascaded 
switches vs. routers, and whatever else one needs to design a dense 
network like this?


Most of the major switch vendors have design guides and other examples 
like this available (this one is Cisco-specific):


http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/VMDC/3-0-1/DG/VMDC_3-0-1_DG/VMDC301_DG3.html

Some organizations like Facebook have also taken the time to write up 
their approaches and make them publicly available:


https://code.facebook.com/posts/360346274145943/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/

---
Roland Dobbins rdobb...@arbor.net


[NANOG-announce] NANOG 64 Reminders

2015-05-08 Thread Betty Burke be...@nanog.org
NANOGers,

As we continue our final preparations in support of NANOG 64, June 1-3, 2015
https://www.nanog.org/meetings/nanog64/home in San Francisco, let me
share the following highlights and reminders:


   - The NANOG 64 Agenda https://www.nanog.org/meetings/nanog64/agenda is
   posted, and updates will continue to be provided.
   - Be sure to note the Registration
   https://www.nanog.org/meetings/nanog64/registration Fee Schedule
   -  Standard Registration starting April 1, 2015
  - (non-member $525, member $500, student $100)
  -  Late Registration starting May 23, 2015
  - (non-member $600, member $575, student $100)
  -  On-Site Registration starting May 31, 2015
  - (non-member $675, member $650, student $100)
   - Cancellation fee: $50.00; within 2 weeks of the meeting (on or after
   May 18, 2015) the cancellation fee is $100.00
   - No refund will be offered after May 31, 2015.
   - Also, take a moment to join NANOG,  or be sure to renew your existing
   Membership https://www.nanog.org/membership/join.


   - The hotel room blocks are becoming stressed.  Do make your hotel room
   reservation https://www.nanog.org/meetings/nanog64/hotelinformation
quickly.


   - We welcome those 503 attendees
   https://www.nanog.org/meetings/nanog64/attendees and conference
   sponsors https://www.nanog.org/meetings/nanog64/sponsors already
   planning to join us for NANOG 64.


   - You will find a new Peering Personals
   https://www.nanog.org/meetings/nanog64/peeringpersonals activity on
   Monday, and will be treated to Sponsor Socials on Sunday, Monday, and
   Tuesday evenings.  The ever famous, must attend event, BnG will be on
   Tuesday at the close of NANOG programming.

We encourage those not yet registered to do so, and join us for what will
be another large and exciting June NANOG meeting!

Should you have any questions, contact us at nanog-supp...@nanog.org.

See you soon!

Sincerely,
Betty
___
NANOG-announce mailing list
nanog-annou...@mailman.nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog-announce

BGP Update Report

2015-05-08 Thread cidr-report
BGP Update Report
Interval: 30-Apr-15 -to- 07-May-15 (7 days)
Observation Point: BGP Peering with AS131072

TOP 20 Unstable Origin AS
Rank  ASN         Upds     %  Upds/Pfx  AS-Name
 1 - AS23752  272041  5.6%    2267.0 -- NPTELECOM-NP-AS Nepal 
Telecommunications Corporation, Internet Services,NP
 2 - AS9829   266241  5.5%     190.4 -- BSNL-NIB National Internet 
Backbone,IN
 3 - AS22059  101494  2.1%   16915.7 -- APVIO-1 - Apvio, Inc.,US
 4 - AS7713    93947  1.9%   18789.4 -- TELKOMNET-AS-AP PT 
Telekomunikasi Indonesia,ID
 5 - AS36947   82403  1.7%    1248.5 -- ALGTEL-AS,DZ
 6 - AS26615   74092  1.5%      59.3 -- Tim Celular S.A.,BR
 7 - AS3709    71276  1.5%    2639.9 -- NET-CITY-SA - City of San 
Antonio,US
 8 - AS45899   69419  1.4%      96.8 -- VNPT-AS-VN VNPT Corp,VN
 9 - AS26821   56095  1.2%    8013.6 -- REVNET - Revelation Networks, 
Inc.,US
10 - AS18566   39338  0.8%  21.3 -- MEGAPATH5-US - MegaPath 
Corporation,US
11 - AS54169   35125  0.7%   11708.3 -- MGH-ION-1 - Marin General 
Hospital,US
12 - AS28573   34329  0.7%  14.6 -- NET Serviços de Comunicação 
S.A.,BR
13 - AS25563   32631  0.7%   10877.0 -- WEBLAND-AS Webland AG, 
Autonomous System,CH
14 - AS18403   32623  0.7%  45.4 -- FPT-AS-AP The Corporation for 
Financing & Promoting Technology,VN
15 - AS20248   25818  0.5%    1358.8 -- TAKE2 - Take 2 Hosting, Inc.,US
16 - AS33667   25338  0.5% 649.7 -- CMCS - Comcast Cable 
Communications, Inc.,US
17 - AS840225290  0.5% 172.0 -- CORBINA-AS OJSC Vimpelcom,RU
18 - AS33651   25254  0.5% 647.5 -- CMCS - Comcast Cable 
Communications, Inc.,US
19 - AS17451   22968  0.5% 247.0 -- BIZNET-AS-AP BIZNET NETWORKS,ID
20 - AS48309   21934  0.5% 430.1 -- AGS-AS Ariana Gostar Spadana 
(PJSC),IR


TOP 20 Unstable Origin AS (Updates per announced prefix)
Rank  ASN         Upds     %  Upds/Pfx  AS-Name
 1 - AS7713    93947  1.9%   18789.4 -- TELKOMNET-AS-AP PT 
Telekomunikasi Indonesia,ID
 2 - AS22059  101494  2.1%   16915.7 -- APVIO-1 - Apvio, Inc.,US
 3 - AS54169   35125  0.7%   11708.3 -- MGH-ION-1 - Marin General 
Hospital,US
 4 - AS25563   32631  0.7%   10877.0 -- WEBLAND-AS Webland AG, 
Autonomous System,CH
 5 - AS393588   9608  0.2%    9608.0 -- MUBEA-FLO - Mubea,US
 6 - AS18135    8109  0.2%    8109.0 -- BTV BTV Cable television,JP
 7 - AS26821   56095  1.2%    8013.6 -- REVNET - Revelation Networks, 
Inc.,US
 8 - AS202195   7426  0.1%    7426.0 -- ASTEKOSTMN West Siberian 
Regional Center of Telecommunications Tekos-Tyumen Ltd.,RU
 9 - AS46336    7426  0.1%    7426.0 -- GOODVILLE - Goodville Mutual 
Casualty Company,US
10 - AS57120    7422  0.1%    7422.0 -- TMK-AS JSC TMK,RU
11 - AS1757    11319  0.2%    5659.5 -- LEXIS-AS - LexisNexis,US
12 - AS197914   4994  0.1%    4994.0 -- STOCKHO-AS Stockho Hosting 
SARL,FR
13 - AS45726   17105  0.3%    4276.2 -- LIONAIR-AS-ID Lion Mentari 
Airlines, PT,ID
14 - AS52233   20915  0.4%    3485.8 -- Columbus Communications Curacao 
NV,CW
15 - AS13483   12603  0.3%    3150.8 -- INFOR-AS13483 - INFOR GLOBAL 
SOLUTIONS (MICHIGAN), INC.,US
16 - AS46358    6195  0.1%    3097.5 -- UAT - University of Advancing 
Technology,US
17 - AS199121   2925  0.1%    2925.0 -- FLEXOPTIX Flexoptix GmbH,DE
18 - AS2167    13748  0.3%    2749.6 -- HPES - Hewlett-Packard 
Company,US
19 - AS3709    71276  1.5%    2639.9 -- NET-CITY-SA - City of San 
Antonio,US
20 - AS55741    2593  0.1%    2593.0 -- WBSDC-NET-IN West Bengal 
Electronics Industry Development,IN


TOP 20 Unstable Prefixes
Rank Prefix Upds % Origin AS -- AS Name
 1 - 202.70.64.0/21   137183  2.7%   AS23752 -- NPTELECOM-NP-AS Nepal 
Telecommunications Corporation, Internet Services,NP
 2 - 202.70.88.0/21   133506  2.7%   AS23752 -- NPTELECOM-NP-AS Nepal 
Telecommunications Corporation, Internet Services,NP
 3 - 118.98.88.0/2494467  1.9%   AS64567 -- -Private Use AS-,ZZ
 AS7713  -- TELKOMNET-AS-AP PT 
Telekomunikasi Indonesia,ID
 4 - 105.96.0.0/22 81587  1.6%   AS36947 -- ALGTEL-AS,DZ
 5 - 64.34.125.0/2450755  1.0%   AS22059 -- APVIO-1 - Apvio, Inc.,US
 6 - 76.191.107.0/24   50731  1.0%   AS22059 -- APVIO-1 - Apvio, Inc.,US
 7 - 204.80.242.0/24   35118  0.7%   AS54169 -- MGH-ION-1 - Marin General 
Hospital,US
 8 - 107.0.152.0/2426172  0.5%   AS33651 -- CMCS - Comcast Cable 
Communications, Inc.,US
 AS33667 -- CMCS - Comcast Cable 
Communications, Inc.,US
 9 - 162.208.96.0/24   24251  0.5%   AS33651 -- CMCS - Comcast Cable 
Communications, Inc.,US
 AS33667 -- CMCS - Comcast Cable 
Communications, Inc.,US
10 - 

The Cidr Report

2015-05-08 Thread cidr-report
This report has been generated at Fri May  8 21:14:42 2015 AEST.
The report analyses the BGP Routing Table of AS2.0 router
and generates a report on aggregation potential within the table.

Check http://www.cidr-report.org/2.0 for a current version of this report.

Recent Table History
Date  PrefixesCIDR Agg
01-05-15549780  302345
02-05-15549975  302322
03-05-15549810  302602
04-05-15550100  302954
05-05-15549974  302765
06-05-15549964  302600
07-05-15550322  303393
08-05-15550752  303572


AS Summary
 50526  Number of ASes in routing system
 20158  Number of ASes announcing only one prefix
  3226  Largest number of prefixes announced by an AS
AS10620: Telmex Colombia S.A.,CO
  120959488  Largest address span announced by an AS (/32s)
AS4134 : CHINANET-BACKBONE No.31,Jin-rong Street,CN


Aggregation Summary
The algorithm used in this report proposes aggregation only
when there is a precise match using the AS path, so as 
to preserve traffic transit policies. Aggregation is also
proposed across non-advertised address space ('holes').
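As a toy illustration of that rule (the prefixes and AS paths below are invented; this is not the CIDR Report's actual implementation), adjacent prefixes collapse only when their AS paths match exactly:

```python
import ipaddress
from collections import defaultdict

# Sketch of path-preserving aggregation: collapse adjacent prefixes only
# when they share the same AS path, so transit policy is unchanged.
routes = [
    ("192.0.2.0/25",      ("64500", "64510")),
    ("192.0.2.128/25",    ("64500", "64510")),  # same path -> can merge
    ("198.51.100.0/25",   ("64500", "64520")),
    ("198.51.100.128/25", ("64500", "64521")),  # different path -> keep
]

by_path = defaultdict(list)
for prefix, path in routes:
    by_path[path].append(ipaddress.ip_network(prefix))

aggregated = []
for path, nets in by_path.items():
    aggregated += [(str(n), path) for n in ipaddress.collapse_addresses(nets)]

for prefix, path in sorted(aggregated):
    print(prefix, " ".join(path))
```

The two 192.0.2.0/25 halves merge into a /24; the 198.51.100.0/24 halves stay separate because their paths differ.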

 --- 08May15 ---
ASnum    NetsNow  NetsAggr  NetGain   % Gain   Description

Table     550224    303500   246724    44.8%   All ASes

AS22773     3033       169     2864    94.4%   ASN-CXA-ALL-CCI-22773-RDC -
                                               Cox Communications Inc.,US
AS17974     2766        81     2685    97.1%   TELKOMNET-AS2-AP PT
                                               Telekomunikasi Indonesia,ID
AS6389      2801       182     2619    93.5%   BELLSOUTH-NET-BLK -
                                               BellSouth.net Inc.,US
AS39891     2473        29     2444    98.8%   ALJAWWALSTC-AS Saudi Telecom
                                               Company JSC,SA
AS28573     2319       310     2009    86.6%   NET Serviços de Comunicação
                                               S.A.,BR
AS3356      2557       682     1875    73.3%   LEVEL3 - Level 3
                                               Communications, Inc.,US
AS4766      2927      1337     1590    54.3%   KIXS-AS-KR Korea Telecom,KR
AS9808      1574        67     1507    95.7%   CMNET-GD Guangdong Mobile
                                               Communication Co.Ltd.,CN
AS6983      1751       249     1502    85.8%   ITCDELTA - Earthlink, Inc.,US
AS7545      2624      1160     1464    55.8%   TPG-INTERNET-AP TPG Telecom
                                               Limited,AU
AS20115     1879       493     1386    73.8%   CHARTER-NET-HKY-NC - Charter
                                               Communications,US
AS10620     3226      1851     1375    42.6%   Telmex Colombia S.A.,CO
AS7303      1660       292     1368    82.4%   Telecom Argentina S.A.,AR
AS4755      2012       713     1299    64.6%   TATACOMM-AS TATA
                                               Communications formerly VSNL
                                               is Leading ISP,IN
AS9498      1332       112     1220    91.6%   BBIL-AP BHARTI Airtel Ltd.,IN
AS4323      1622       412     1210    74.6%   TWTC - tw telecom holdings,
                                               inc.,US
AS18566     2036       869     1167    57.3%   MEGAPATH5-US - MegaPath
                                               Corporation,US
AS7552      1157        58     1099    95.0%   VIETEL-AS-AP Viettel
                                               Corporation,VN
AS22561     1357       286     1071    78.9%   CENTURYLINK-LEGACY-LIGHTCORE -
                                               CenturyTel Internet Holdings,
                                               Inc.,US
AS6147      1222       202     1020    83.5%   Telefonica del Peru S.A.A.,PE
AS8402      1026        24     1002    97.7%   CORBINA-AS OJSC Vimpelcom,RU
AS6849      1210       240      970    80.2%   UKRTELNET JSC UKRTELECOM,UA
AS8151      1606       667      939    58.5%   Uninet S.A. de C.V.,MX
AS7738       999        83      916    91.7%   Telemar Norte Leste S.A.,BR
AS38285      982       119      863    87.9%   M2TELECOMMUNICATIONS-AU M2
                                               Telecommunications Group
                                               Ltd,AU
AS4538      1926      1075      851    44.2%   ERX-CERNET-BKB China Education
                                               and Research Network
                                               Center,CN
AS18881      868        39      829    95.5%   Global Village Telecom,BR
AS26615      963       166      797    82.8%   Tim Celular S.A.,BR
AS24560     1237       462      775    62.7%   AIRTELBROADBAND-AS-AP Bharti
                                               Airtel Ltd., Telemedia
                                               Services,IN
AS18101      964       195      769    79.8%   RELIANCE-COMMUNICATIONS-IN
  

RE: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Phil Bedard
The real answer to this is being able to cram them into a single chassis which 
can multiplex the network through a backplane.  Something like the HP Moonshot 
ARM system or the way others like Google build high density compute with 
integrated Ethernet switching. 

Phil

-Original Message-
From: John Levine jo...@iecc.com
Sent: ‎5/‎8/‎2015 2:59 PM
To: nanog@nanog.org nanog@nanog.org
Subject: Thousands of hosts on a gigabit LAN, maybe not

Some people I know (yes really) are building a system that will have
several thousand little computers in some racks.  Each of the
computers runs Linux and has a gigabit ethernet interface.  It occurs
to me that it is unlikely that I can buy an ethernet switch with
thousands of ports, and even if I could, would I want a Linux system
to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with
considerably less to the outside.  Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA

R's,
John


Updated prefix filtering

2015-05-08 Thread Chaim Rieger

Best example I’ve found is located at http://jonsblog.lewis.org/

I too ran out of space (Brocade, not Cisco) and am looking to filter 
prefixes. Did anybody do a more recent or updated filter list since 2008?

Offlist is fine. 

Oh and happy friday to all.

RE: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread charles

On 2015-05-08 18:20, Phil Bedard wrote:

The real answer to this is being able to cram them into a single
chassis which can multiplex the network through a backplane.
Something like the HP Moonshot ARM system or the way others like
Google build high density compute with integrated Ethernet switching.




I was going to suggest Moonshot myself (I walk by a number of Moonshot 
units daily). However, it seemed like the systems were already selected, 
and then someone was like "oh yeah, better ask netops how to hook these 
things we bought and didn't tell anyone about to the interwebz." (Not 
that that's a 100% accurate description of my $DAYJOB at all.)


In which case, the standard response is "well gee whizz buddy, ya 
should've bought Moonshot rigs. But now you have to buy pallet-loads of 
chassis switches. Hope you have some money left over in your budget."


Raspberry Pi - high density

2015-05-08 Thread charles



So I just crunched the numbers. How many Pis could I cram in a rack?

Check my numbers?

48U rack budget
6513 is 15U, so (48-15) = 33U remaining for Pis
6513 max of 576 copper ports

Pi dimensions:

3.37 in. long (5 front to back)
2.21 in. wide (6 wide)
0.83 in. high
25 per U (rounding down for Ethernet cable space etc.) = 825 Pis

Cable management and heat would probably kill this before it ever 
reached completion, but lol...
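For what it's worth, the packing math above checks out, though the switch port count rather than the cubic inches is the binding constraint (all figures are the post's own estimates):

```python
# Sketch: re-run the Pi-per-rack arithmetic from the post.
rack_u, switch_u = 48, 15
usable_u = rack_u - switch_u        # 33U left after the 6513
pis_per_u = 25                      # post's packing estimate per U
total_pis = usable_u * pis_per_u    # Pis that physically fit

switch_ports = 576                  # max copper ports in the 6513
stranded = total_pis - switch_ports
print(f"{total_pis} Pis in {usable_u}U, but only {switch_ports} ports: "
      f"{stranded} Pis with no switch port")
```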






Re: Raspberry Pi - high density

2015-05-08 Thread Tim Raphael
The problem is, I can get more processing power and RAM out of two 10RU blade 
chassis while needing only 64 10G ports...

32 x 256GB RAM per blade = 8.1TB
32 x 16 cores x 2.4GHz = 1,228GHz
(not based on current highest possible, just using reasonable specs)

That needs only 4 QFX5100s, which will cost less than a populated 6513 and 
give lower latency. Power, cooling and cost would be lower too.

RPi = 900MHz and 1GB RAM. So to equal the two chassis, you'll need:

1228 / 0.9 = 1364 Pis for compute (the main performance aspect of a supercomputer), 
meaning double the physical space required compared to the chassis option.

So yes, infeasible indeed.
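A quick sanity check of those numbers (the 2-chassis, 16-blades-each split is an assumption; per-blade specs are as stated above):

```python
# Sketch: the blade-vs-Pi comparison above.
blades           = 2 * 16    # assumed 16 blades per 10RU chassis
cores_per_blade  = 16
ghz_per_core     = 2.4
ram_per_blade_gb = 256

blade_ghz    = blades * cores_per_blade * ghz_per_core  # aggregate clock
blade_ram_tb = blades * ram_per_blade_gb / 1024

pi_ghz = 0.9                               # one 900 MHz core per Pi
pis_needed = round(blade_ghz / pi_ghz)

print(f"{blade_ghz:.0f} GHz / {blade_ram_tb:.0f} TiB of blades "
      f"~= {pis_needed} Raspberry Pis (compute only)")
```

The post's 1,364 figure comes from rounding the aggregate clock down to 1228 GHz before dividing; either way, well over a thousand Pis.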

Regards,

Tim Raphael

 On 9 May 2015, at 1:24 pm, char...@thefnf.org wrote:
 
 
 
 So I just crunched the numbers. How many pies could I cram in a rack?
 
 Check my numbers?
 
 48U rack budget
 6513 15U (48-15) = 33U remaining for pie
 6513 max of 576 copper ports
 
 Pi dimensions:
 
 3.37 l (5 front to back)
 2.21 w (6 wide)
 0.83 h
 25 per U (rounding down for Ethernet cable space etc) = 825 pi
 
 Cable management and heat would probably kill this before it ever reached 
 completion, but lol...
 
 
 


Any AWS folk on the list?

2015-05-08 Thread Mike Lyon
Trying to get a cross-connect up with you in SV5, and your customer support
folks are unable to call Equinix to troubleshoot.

If you could ping me offlist, it would be greatly appreciated.

Thank You,
Mike


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Joe Hamelin
On Fri, May 8, 2015 at 11:53 AM, John Levine jo...@iecc.com wrote:

 Some people I know (yes really) are building a system that will have
 several thousand little computers in some racks.  Each of the
 computers runs Linux and has a gigabit ethernet interface.


Though a bit off-topic, I ran into this project at the CascadeIT
conference.  I'm currently in a corp IT shop that is Notes/Windows based, so I
haven't had a good place to test it, but the concept is very interesting.
The distributed way they monitor would greatly reduce bandwidth overhead.

http://assimproj.org

The Assimilation Project is designed to discover and monitor
infrastructure, services, and dependencies on a network of potentially
unlimited size, without significant growth in centralized resources. The
work of discovery and monitoring is delegated uniformly in tiny pieces to
the various machines in a network-aware topology - minimizing network
overhead and being naturally geographically sensitive.

The main ideas are:

   - distribute discovery throughout the network, doing most discovery
   locally
   - distribute the monitoring as broadly as possible in a network-aware
   fashion.
   - use autoconfiguration and zero-network-footprint discovery techniques
   to monitor most resources automatically, during the initial installation
   and during ongoing system addition and maintenance.



--
Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Jima

On 2015-05-08 12:53, John Levine wrote:

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this?  TIA


 I won't pretend to know best practices, but my inclination would be to 
connect the devices to 48-port L2 ToR switches with 2-4 SFP+ uplink 
ports (a number of vendors have options for this), with the 10gbit ports 
aggregated to a 10gbit core L2/L3 switch stack (ditto).  I'm not sure 
I'd attempt this without 10gbit to the edge switches, due to Rafael's 
aforementioned point of the bottleneck/loss of multiple ports for trunking.


 Not knowing the architectural constraints, I'd probably go with 
others' advice of limiting L2 zones to 200-500 hosts, which would 
probably amount to 4-10 edge switches per VLAN.
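Those switch counts fall out of simple division (assuming 48-port edge switches running essentially full):

```python
# Sketch: how "4-10 edge switches per VLAN" falls out of 48-port edge
# switches and 200-500 host L2 zones (switches assumed ~fully populated).
ports_per_switch = 48
switches_per_zone = {zone: zone // ports_per_switch for zone in (200, 500)}
for zone, n in switches_per_zone.items():
    print(f"{zone}-host L2 zone -> about {n} edge switches")
```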


 Dang.  The more I think about this project, the more expensive it sounds.

 Jima


Re: Updated prefix filtering

2015-05-08 Thread Faisal Imtiaz
Not sure if you missed it, but there was a discussion on this topic in the
recent past. I am taking the liberty of re-posting below; you may find it
useful.

--
Hi Freddy,

As Paul has mentioned, you could check the David's project - SIR, look
at his presentation:
https://www.youtube.com/watch?v=o1njanXhQqM

We've also developed a platform for the BGP monitoring and routing
optimization which could solve your problem. It would inject to the
border routers only TOP X prefixes with which you exchange most of the
traffic. The added value would be that route orders point to best
performing transit (low latency, 0 packet loss) per distant prefix.

If you are interested to know more about our software please contact me
off-list.


-- 
Regards,
Pawel Rybczyk
Regional Manager
BORDER 6 sp. z o.o.
pawel.rybc...@border6.com
office: +48 22 242 89 51 (ext.103)
mobile: +48 664 300 375
==

Faisal Imtiaz
Snappy Internet & Telecom
7266 SW 48 Street
Miami, FL 33155
Tel: 305 663 5518 x 232

Help-desk: (305)663-5518 Option 2 or Email: supp...@snappytelecom.net 

- Original Message -
 From: Chaim Rieger chaim.rie...@gmail.com
 To: NANOG list nanog@nanog.org
 Sent: Friday, May 8, 2015 6:41:34 PM
 Subject: Updated prefix filtering
 
 
 Best example I’ve found is located at http://jonsblog.lewis.org/
 
 I too ran out of space, Brocade, not Cisco though, and am looking to filter
 prefixes. did anybody do a more recent or updated filter list  since 2008 ?
 
 Offlist is fine.
 
 Oh and happy friday to all.


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Joe Hamelin
On Fri, May 8, 2015 at 5:19 PM, Jima na...@jima.us wrote:
   Dang.  The more I think about this project, the more expensive it sounds.

Naw, just use WiFi.  ;)

--
Joe Hamelin, W7COM, Tulalip, WA, 360-474-7474


RE: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread John R. Levine

<off topic>
The first thing that came to mind was "Bitcoin farm!", then "Ask Bitmaintech", 
and then I'd be more worried about the number of fans and A/C units.
</off topic>


I promise, no bitcoins involved.

R's,
John


Re: Thousands of hosts on a gigabit LAN, maybe not

2015-05-08 Thread Benson Schliesser
Morrow's comment about the ARMD WG notwithstanding, there might be some 
useful context in https://tools.ietf.org/html/draft-karir-armd-statistics-01


Cheers,
-Benson


Christopher Morrow mailto:morrowc.li...@gmail.com
May 8, 2015 at 12:19 PM

Also consider the pain of IPv6's link-local gamery.
Look at the nvo3 WG and its predecessor (which shouldn't have really
existed anyway, but whatever, and apparently my mind helped me forget
about the pain involved with this WG).

I think: why one LAN? Why not just small (/26 or /24 max?) subnet
sizes... or do it all in v6 on /64s with one per rack or one per ~200 hosts.
John Levine mailto:jo...@iecc.com
May 8, 2015 at 11:53 AM
Some people I know (yes really) are building a system that will have
several thousand little computers in some racks. Each of the
computers runs Linux and has a gigabit ethernet interface. It occurs
to me that it is unlikely that I can buy an ethernet switch with
thousands of ports, and even if I could, would I want a Linux system
to have 10,000 entries or more in its ARP table.

Most of the traffic will be from one node to another, with
considerably less to the outside. Physical distance shouldn't be a
problem since everything's in the same room, maybe the same rack.

What's the rule of thumb for number of hosts per switch, cascaded
switches vs. routers, and whatever else one needs to design a dense
network like this? TIA

R's,
John