[NANOG-announce] 2016 NANOG Election Results

2016-10-20 Thread Dave Temkin
Greetings NANOG Colleagues,

The 2016 NANOG Board and Bylaw election process is now complete.

The results were shared during NANOG 68, are posted on the NANOG website,
and are summarized here.

In 2016, there were two regular open positions on the Board of Directors.
The appointments are:

   - Will Charnock - 3 years
   - Patrick Gilmore - 3 years


The officers elected and committee liaisons are:

   - Chair - Dave Temkin
   - Vice Chair - Ryan Donnelly
   - Treasurer - Will Charnock
   - Secretary - Betty Burke
   - Communications Committee Liaison - Jezzibell Gilmore
   - Program Committee Liaison - Patrick Gilmore


The proposed amendments to the NANOG Bylaws were accepted. The updated
Bylaws document will be posted to the NANOG website.


Best Regards,

-Dave Temkin

Chair, NANOG Board of Directors
___
NANOG-announce mailing list
nanog-annou...@mailman.nanog.org
http://mailman.nanog.org/mailman/listinfo/nanog-announce

Re: A perl script to convert Cisco IOS/Nexus/ASA configurations to HTML for easier comprehension

2016-10-20 Thread Ken Chase
Re: more general 'network utilities' and scripts:

 http://sizone.org/m/hacks/cidrmath.pl

It adds and removes subnets from networks, giving a list of the
remaining/aggregated (sub)nets.

I couldn't find an online calculator that does this; most just 'translate'
between subnet masks, CIDR notation, Cisco inverse masks, etc.

Wrote it years ago because I had an itch. The included Perl module populates a
hash entry per IP, and I didn't want to write my own, so it uses lots of RAM and
CPU on big operations (/8 - /9, for example). But it's great for earthly
operations like /23 - /27 + /28.
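
For comparison, the same subtraction can be done on prefixes instead of
per-IP hash entries, which keeps even /8-scale operations cheap. A minimal
sketch using Python's standard ipaddress module (illustrative only; the
example prefixes are made up):

    import ipaddress

    # Punch a /27 hole in a /23 and list the remaining subnets.
    net = ipaddress.ip_network("192.0.2.0/23")
    hole = ipaddress.ip_network("192.0.2.64/27")

    remaining = sorted(net.address_exclude(hole))
    for subnet in remaining:
        print(subnet)

    # Re-aggregate the pieces into the fewest covering prefixes.
    print(list(ipaddress.collapse_addresses(remaining + [hole])))
    # -> [IPv4Network('192.0.2.0/23')]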

Yes, I should start my own git repo, but I've been lazy.

No warranties provided.

If anyone has a faster/better one, that'd be handy.

/kc
--
Ken Chase - k...@sizone.org Toronto & Guelph Canada


Re: MPLS in the campus Network?

2016-10-20 Thread Nick Hilliard
Mark Tinka wrote:
> Not sure what gear you're using now, but you'll get full routing and
> MPLS features on platforms such as the Cisco ASR920. I'd have
> recommended the Cisco ME3600X, but they just announced EoS/EoL last
> night, which means that while you can still order it until October 2017,
> the ASR920 will be cheaper and is the future of Metro-E from Cisco in
> this segment.

The ME3600X is mostly fine for this sort of thing, with the exception of
its control-plane policing mechanism, which is badly limited in a number
of important ways, not least in that its traffic categorisation buckets
are non-configurable and not what you might necessarily want for
customer edge service.

Nick



Re: MPLS in the campus Network?

2016-10-20 Thread Mark Tinka


On 20/Oct/16 18:45, Roland Dobbins wrote:

>
> Sure - but it's probably worth revisiting the origins of those
> requirements, and whether there are better alternatives.

Indeed.

What we've seen is customers who prefer to manage their own IP layer,
and just need transport. These types of customers tend to be split
between EoDWDM and EoMPLS preferences. Whatever the case, their primary
requirement is control of their IP domain.

What we're no longer seeing is L3VPN requirements, particularly on the
back of on-premises IT infrastructure moving into the cloud. We see this
driving a lot of regular IP growth.

Mark.


Re: MPLS in the campus Network?

2016-10-20 Thread Roland Dobbins


On 20 Oct 2016, at 23:32, Mark Tinka wrote:


> Some requirements call for Ethernet transport as opposed to IP.


Sure - but it's probably worth revisiting the origins of those 
requirements, and whether there are better alternatives.


---
Roland Dobbins 


Re: MPLS in the campus Network?

2016-10-20 Thread Mark Tinka


On 20/Oct/16 18:29, Jason Lixfeld wrote:

> Likely not at the PE, true, but he did say Internet access, so I erred on the 
> side of assuming DFZ somewhere.  If that assumption is true, FIB resources 
> for the SP interconnect nodes and filtering towards the PEs, absolutely.

I assumed 0/0 + ::/0 to upstreams, but if it had to be the DFZ, you can
still hold a full table in RAM but install just the default and some select
entries into the FIB.

This would only be necessary if there are downstreams that require the
full table. However, if it's for internal use, and a certain amount of
DFZ-related traffic engineering is required, the OP could make the case
for just having the border routers as the boxes that can handle a full
feed. This could be just one or two routers.

That would leave the majority of the network running cheaper routers
with smaller FIBs.
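
On Cisco-style gear, one way to implement that split is BGP selective route
download: keep the full table in BGP, but only install a filtered subset
into the RIB/FIB via a table-map. A rough sketch (the names, prefixes, and
ASN are made up, and exact syntax varies by platform and release):

    ip prefix-list FIB-ONLY seq 5 permit 0.0.0.0/0
    ip prefix-list FIB-ONLY seq 10 permit 192.0.2.0/24
    !
    route-map FIB-INSTALL permit 10
     match ip address prefix-list FIB-ONLY
    !
    router bgp 64500
     address-family ipv4
      table-map FIB-INSTALL filter

Routes denied by the route-map stay in the BGP table (and can still be
advertised onwards) but never consume FIB space.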

But without really knowing the OP's Internet routing design needs, it
could go either way.

Mark.


Re: MPLS in the campus Network?

2016-10-20 Thread Mark Tinka


On 20/Oct/16 18:24, Roland Dobbins wrote:

>
> And I'd definitely recommend figuring out why that's being done so
> broadly today, and working to reduce its prevalence and scope, moving
> forward.

Some requirements call for Ethernet transport as opposed to IP.

I don't know the details of the OP's user requirements, but from our
side, we have as many customers asking for IP services as they do Ethernet.

Mark.


Re: MPLS in the campus Network?

2016-10-20 Thread Jason Lixfeld

> On Oct 20, 2016, at 12:23 PM, Mark Tinka  wrote:
> 
> 
> 
> On 20/Oct/16 17:12, Jason Lixfeld wrote:
> 
>> 
>> It’s only more expensive the more big vendor products you use.  Sometimes 
>> you need to (e.g., boxes with big RIBs/FIBs for the DFZ, or deep buffers), but 
>> more and more, people are looking to OCP/white-box switches [1][2].
> 
> It doesn't sound like the OP needs massive FIB space, so he could implement 
> FIB filtering and run the smaller boxes that have all the features but lack 
> the FIB real estate of the larger routers/switches.
> 
> This is what we do for our Metro-E Access networks.
> 
> Mark.

Likely not at the PE, true, but he did say Internet access, so I erred on the 
side of assuming DFZ somewhere.  If that assumption is true, FIB resources for 
the SP interconnect nodes and filtering towards the PEs, absolutely.

Re: MPLS in the campus Network?

2016-10-20 Thread Roland Dobbins

On 20 Oct 2016, at 23:17, Mark Tinka wrote:


> especially given how much Layer 2 traffic you're hauling around.


And I'd definitely recommend figuring out why that's being done so 
broadly today, and working to reduce its prevalence and scope, moving 
forward.


---
Roland Dobbins 


Re: MPLS in the campus Network?

2016-10-20 Thread Mark Tinka


On 20/Oct/16 17:12, Jason Lixfeld wrote:

>
> It’s only more expensive the more big vendor products you use.  Sometimes you 
> need to (e.g., boxes with big RIBs/FIBs for the DFZ, or deep buffers), but more 
> and more, people are looking to OCP/white-box switches [1][2].

It doesn't sound like the OP needs massive FIB space, so he could
implement FIB filtering and run the smaller boxes that have all the
features but lack the FIB real estate of the larger routers/switches.

This is what we do for our Metro-E Access networks.

Mark.


Re: MPLS in the campus Network?

2016-10-20 Thread Mark Tinka


On 20/Oct/16 17:05, Leo Bicknell wrote:

> I would challenge your port cost assumption for "routers".  For
> instance, the Arista 7280 can be had with 48 10GE SFP+
> ports with full Internet routing capabilities.  If you're used
> to Cisco or Juniper, it is worth looking further afield these days.

We're currently looking at Arista's development in the IP/MPLS space.
While we are not yet ready to spend money on them for that, we think the
future is bright. So we are working very closely with them to get their
IP and MPLS code to where it would make sense for us. I'm hopeful we
shall be investing in Arista for routing in the very near future.

But we love them for switching, and will be activating some of their
platforms for that use-case very soon.

Mark.




Re: MPLS in the campus Network?

2016-10-20 Thread Mark Tinka


On 20/Oct/16 15:43, steven brock wrote:

>
> If you had to make such a choice recently, did you choose an MPLS design
> even at lower speed?
> How would you convince your management that MPLS is the best solution for
> your campus network? How would you justify the cost or speed difference?

IP/MPLS would be my recommendation.

From an operational perspective, running it in your Core and Access
backbone removes any need for STP (and all the associated headache).

Not sure what gear you're using now, but you'll get full routing and
MPLS features on platforms such as the Cisco ASR920. I'd have
recommended the Cisco ME3600X, but they just announced EoS/EoL last
night, which means that while you can still order it until October 2017,
the ASR920 will be cheaper and is the future of Metro-E from Cisco in
this segment.

Brocade's CES 2000 platform would also be a good choice here.

The Juniper ACX5000 presents some challenges re: the use of that Broadcom
chip, although I know a number of operators that have had the courage to
deploy it for this use-case.

Ultimately, whatever vendor you choose, the guaranteed way to not get
that 3AM call is to run IP/MPLS for your Core and Access backbones,
especially given how much Layer 2 traffic you're hauling around.

Mark.


Incapsula | 19551

2016-10-20 Thread stuart clark via NANOG
Can someone from Incapsula (AS 19551) please ping me off-list?

Thanks!


Re: MPLS in the campus Network?

2016-10-20 Thread Jason Lixfeld
Hi,

> On Oct 20, 2016, at 9:43 AM, steven brock  wrote:
> 
> Compared to MPLS, an L2 solution with 100 Gb/s interfaces between
> core switches and a 10G connection for each building looks so much
> cheaper. But we worry about future trouble using TRILL, SPB, or other
> technologies, not only the "open" ones, but specifically the proprietary
> ones based on a central controller and lots of magic (some colleagues feel
> a debugging nightmare is guaranteed).

From my perspective, in this day and age, no service provider or campus should 
really be using any sort of layer 2 protection mechanism in their backbone, if 
they can help it.

> If you had to make such a choice recently, did you choose an MPLS design
> even at lower speed?

Yup.  5 or so years ago, and never looked back.  Actually, this was in 
conjunction with upgrading our 1G backbone to a 10G backbone, so it was an 
upgrade for us in all senses of the word.

> How would you convince your management that MPLS is the best solution for
> your campus network?

You already did:


> We are not satisfied with the current backbone design; we had our share
> of problems in the past:
> - high CPU load on the core switches due to multiple instances of spanning
> tree slowly converging when a topology change happens (somewhat fixed
> with a few instances of MSTP)
> - spanning tree interoperability problems and spurious port blocking
> (fixed by BPDU filtering)
> - loops at the edge and broadcast/multicast storms (fixed with traffic
> limits and port blocking based on thresholds)
> - some small switches at the edge are overloaded with large numbers of
> MAC addresses (fixed by reducing broadcast domain size and subnetting)
> 
> This architecture doesn't feel very solid.
> Even if service provisioning seems easy from an operational point
> of view (create a VLAN and it is immediately available at any point of the
> L2 backbone), we feel the configuration is not always consistent.
> We have to rely on scripts pushing configuration elements and human
> discipline (and lots of duct tape, especially for QoS and VRFs).



> How would you justify the cost or speed difference?

It’s only more expensive the more big vendor products you use.  Sometimes you 
need to (e.g., boxes with big RIBs/FIBs for the DFZ, or deep buffers), but more and 
more, people are looking to OCP/white-box switches [1][2].

For example, assuming a BCM Trident II-based board with 48 SFP+ cages and 6 
QSFP+ cages, you get a line-rate, MPLS-capable 10G port for $65.  Or, if you’re 
like me and hate the idea of breakout cables, you’re at about $100/SFP+ cage, 
at which point the QSFP+ cages are pretty much free.
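
To spell out that arithmetic (assuming each QSFP+ cage breaks out to 4x10G):
48 + 6 x 4 = 72 ten-gig ports, so the board runs about 72 x $65 = $4,680.
Spread that same board cost over only the 48 SFP+ cages and you get
$4,680 / 48 = $97.50, i.e. roughly $100 per cage, with the QSFP+ cages
thrown in for free.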

Software-wise, there are lots of vendors.  One that I like is IPInfusion’s 
OcNOS[3] codebase.  They are putting a lot of resources into building a service 
provider feature set (full-blown MPLS/VPLS/EVPN, etc.) for OCP switches.  There 
are others, but last time I looked a couple of years ago, they were less 
focused on MPLS and more focused on SDN: Cumulus Networks[4], PICA8[5], Big 
Switch Networks[6].

> Thanks for your insights!

[1] 
https://www.linx.net/communications/press-releases/lon2-revolutionary-development
[2] 
http://www.ipinfusion.com/about/press/london-internet-exchange-use-ip-infusion’s-ocnos™-network-operating-system-new-london-in
[3] http://www.ipinfusion.com/products/ocnos
[4] https://cumulusnetworks.com
[5] http://www.pica8.com
[6] http://www.bigswitch.com

Re: MPLS in the campus Network?

2016-10-20 Thread Leo Bicknell

From what you describe I do think you have many options, including
more than just the ones you laid out.  When you're under 10 km and
own your own fiber, the possibilities are virtually limitless.

First off, you don't want to be running spanning tree across a
campus.  While I don't think you need to eliminate it completely, as
some in the industry are pressing, doing it at the scale you describe
is probably a world of hurt.

I would challenge your port cost assumption for "routers".  For
instance, the Arista 7280 can be had with 48 10GE SFP+
ports with full Internet routing capabilities.  If you're used
to Cisco or Juniper, it is worth looking further afield these days.

I would also challenge that there is one way to do the job.  It may
be easier to build a couple of networks.  Perhaps a router-based one
to deliver IP services, and a separate "Metro Ethernet" network to
deliver L2 VLAN transport.  It may sound crazy that buying two
boxes is cheaper than one, but it can be, depending on the exact
scale and port count.  Heck, depending on your port count, doing
passive DWDM to interconnect switches in each office may be cheaper
than encapsulating in MPLS.  A lot of it also depends on your 
monitoring requirements, or lack thereof.

In a message written on Thu, Oct 20, 2016 at 03:43:26PM +0200, steven brock 
wrote:
> How would you convince your management that MPLS is the best solution for
> your campus network? How would you justify the cost or speed difference?

Well, cost and speed are two prime considerations, but there are other
important considerations.

Vendors support platforms and features based on the customer base.
If you buy a box everyone does MPLS on, and then use it for TRILL,
you'll be in a world of hurt.  Particularly if you want a long, stable
life: ride with the crowd.  Use a platform many others are using for
the same job.

-- 
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/




MPLS in the campus Network?

2016-10-20 Thread steven brock
Dear NANOG members,

We operate a campus network reaching more than 100 buildings on 5 campuses.
We also operate a regional backbone and the interconnection to our NREN.
The current architecture is made of an L2 backbone and a few routers.
Most of the buildings are connected with a 1 Gb/s link over our own
optical fiber (only a few buildings are connected at 10 Gb/s).
In a smaller number of buildings (a few dozen), we also operate the
internal network, made of Ethernet switches (in a multi-vendor environment).
In each building, we provide at least an edge switch, marking the boundary
between us and the customer, where we deliver the different services on
Ethernet ports.

The services we currently offer:
- L2 interconnections (400 VLANs are present in 2 buildings or more;
only a few VLANs are present in more than 30 buildings)
- IPv4 and IPv6 routing (hundreds of subnets) and Internet access
- specific interconnections (e.g., terminating a VPN to the customer,
say a national private infrastructure delivered by the NREN through
MPLS L2/L3VPN and stitched to the customer network using a specific VLAN)
- routing isolation using routing instances (~ VRF Lite): only 5
instances, but we could have more
- routing and filtering using open-source firewalls running on servers
in our DCs (fewer than 15 platforms, as most customers operate their
own firewalls)
- user authentication
- a shared VPN platform allowing direct access for an identified user into
the customer network (based on RADIUS attributes); this platform uses
VLANs to interconnect to the rest of the network
- wireless LAN, also allowing direct access for an identified user into
the customer network; the platform is a centralized controller, and
it uses VLANs to interconnect to the rest of the network.
(Those last two services could use just a VLAN or a dedicated subnet
delivered on a port of the edge switch, which is then connected to the
customer firewall.)

We are not satisfied with the current backbone design; we had our share
of problems in the past:
- high CPU load on the core switches due to multiple instances of spanning
tree slowly converging when a topology change happens (somewhat fixed
with a few instances of MSTP)
- spanning tree interoperability problems and spurious port blocking
(fixed by BPDU filtering)
- loops at the edge and broadcast/multicast storms (fixed with traffic
limits and port blocking based on thresholds)
- some small switches at the edge are overloaded with large numbers of
MAC addresses (fixed by reducing broadcast domain size and subnetting)
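
As an illustration of the threshold-based fix above, on Cisco-style switches
this is typically per-port storm control (sketch only; the levels and action
are made-up examples, and syntax differs between vendors):

    interface GigabitEthernet1/0/1
     storm-control broadcast level 1.00
     storm-control multicast level 2.00
     storm-control action shutdown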

This architecture doesn't feel very solid.
Even if service provisioning seems easy from an operational point
of view (create a VLAN and it is immediately available at any point of the
L2 backbone), we feel the configuration is not always consistent.
We have to rely on scripts pushing configuration elements and human
discipline (and lots of duct tape, especially for QoS and VRFs).

We are re-designing our network architecture.
We have enough fiber to imagine many ways to link the core network devices.
We find MPLS has its merits as a platform to deliver all the network services
we currently provide (L2 and L3 VPN, VPLS, and soon EVPN).
However, we also want to upgrade the infrastructure to allow future growth
of the traffic. Some labs, especially in physics, could need more than
10 Gb/s in the coming years. Our cycles of evolution are long (we keep a
backbone technology for 8 years). MPLS is definitely not cheap considering
the price of a 10G or 100G router interface.

Compared to MPLS, an L2 solution with 100 Gb/s interfaces between
core switches and a 10G connection for each building looks so much
cheaper. But we worry about future trouble using TRILL, SPB, or other
technologies, not only the "open" ones, but specifically the proprietary
ones based on a central controller and lots of magic (some colleagues feel
a debugging nightmare is guaranteed).

If you had to make such a choice recently, did you choose an MPLS design
even at lower speed?
How would you convince your management that MPLS is the best solution for
your campus network? How would you justify the cost or speed difference?

Thanks for your insights!