Re: Vyatta as a BRAS

2010-07-18 Thread Tim Durack
On Sun, Jul 18, 2010 at 8:01 PM, Brett Frankenberger
rbf+na...@panix.com wrote:
 On Mon, Jul 19, 2010 at 07:13:46AM +0930, Mark Smith wrote:

 This document supports that. If the definition of a software router is
 one that doesn't have a fixed at the factory forwarding function, then
 the ASR1K is one.

 The code running in the ASICs on line cards in 6500-series
 chassis isn't fixed at the factory.  Same with the code running on the
 PFCs in those boxes.  There's not a tremendous amount of flexibility to
 make changes after the fact, because the code is so tightly integrated
 with the hardware, but there is some.

 (Not saying the 6500 is a software-based platform.  It's pretty clearly
 a hardware-based platform under most peoples' definition.  But:  the
 line is blurry.)

     -- Brett



Surely the important point for most forwarding engines is that there
is isolation between control, management and forwarding planes?

If I'm looking for a box, I want line rate forwarding on all
interfaces. I want stateless ACLs and policing functions on the
forwarding plane. I want to use those functions to protect the control
and management planes. I want the control plane to cope with the
required amount of forwarding state and churn. I want the management
plane to be somewhat as capable as the Linux tools I run to maintain
the network.

I don't honestly care whether it is a single CPU, multi-core,
multi-CPU, ASIC or NPU.

That being said, for the networks I help maintain, the C6K meets most
of those requirements. I think the N7K is movement in the right
direction. I consider both to be L2/L3 switches :-)

-- 
Tim:



Re: IPv6: numbering of point-to-point-links

2011-01-25 Thread Tim Durack
On Tue, Jan 25, 2011 at 9:44 AM, Lasse Jarlskov l...@telenor.dk wrote:
 Thank you all for your comments - it appears that there is no consensus
 on how this should be done.

The best piece of advice I received when asking similar questions in
the past is to allocate a /64 for every network regardless of its
potential size. Loopbacks, point-to-point, hosting VLANs etc. Then
assign whatever size you are currently comfortable with.

We've used /128s for loopbacks, safe in the knowledge that we can
expand them all to /64s without renumbering (in case someone comes up
with a good idea why /64s on loopbacks are necessary.)
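For illustration only (documentation prefixes, not our real plan), the
reservation looks something like this:

2001:db8:0:1::/64     reserved for router-1 loopback
2001:db8:0:1::1/128   router-1 Loopback0 (assigned today)
2001:db8:0:2::/64     reserved for router-2 loopback
2001:db8:0:2::1/128   router-2 Loopback0 (assigned today)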

We've gone unnumbered on point-to-points, as a way of deferring that
particular decision. Admittedly this reduces useful diagnostics
available from traceroutes, although I quite like seeing loopbacks in
traceroutes anyway. Unnumbered does reduce the control plane's exposed
address surface, which might be seen as a useful benefit (I'm sure someone
will tell me why that's a bad idea.)

My point is, if you do your number plan right, you should have some
flexibility to make changes in the future without pain.

-- 
Tim:



Re: Internet Edge Router replacement - IPv6 route tablesizeconsiderations

2011-03-11 Thread Tim Durack
On Fri, Mar 11, 2011 at 1:55 PM, James Stahr st...@mailbag.com wrote:

 Is anyone else considering only using link local for their PtoP links?  I
 realized while deploying our IPv6 infrastructure that OSPFv3 uses the
 link-local address in the routing table rather than the global address, so if I
 want to have a routing table which makes sense, I need to statically assign
 a global address AND the link-local address.  Then I realized, why even
 assign a global in the first place?  Traceroute replies end up using the
 loopback. BGP will use loopbacks.  So is there any obvious harm in this
 approach that I'm missing?


For now I have allocated /64s per p-t-p, but I'm doing ipv6 unnumbered
loopback0

I quite like how the core route table looks. It also lets me avoid The
Point to Point Wars :-)

Maybe there will be a good reason to go back and slap globals on there, but
I've not been convinced yet.

-- 
Tim:


Re: Simple Low Cost WAN Link Simulator Recommendations

2011-03-20 Thread Tim Durack
On Sun, Mar 20, 2011 at 6:20 PM, Matthew Petach mpet...@netflight.com wrote:

 On Thu, Mar 17, 2011 at 6:57 AM, Loopback loopb...@digi-muse.com wrote:
  Need the ability to test Network Management and Provisioning applications
  over a variety of WAN link speeds from T1 equivalent up to 1GB speeds.
   Seems to be quite a few offerings but I am looking for recommendations
 from
  actual users.   Thanks in advance.


Linux tc netem:

http://www.linuxfoundation.org/collaborate/workgroups/networking/netem

Has worked well for us.
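In case it's useful, a minimal sketch along the lines of the netem docs
(interface name and numbers are illustrative; adjust to taste):

# add WAN-like delay/jitter/loss, then rate-limit to roughly T1 with a child tbf
tc qdisc add dev eth0 root handle 1:0 netem delay 40ms 5ms loss 0.1%
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 1544kbit buffer 1600 limit 3000
# check counters, then clean up when done
tc -s qdisc show dev eth0
tc qdisc del dev eth0 root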

-- 
Tim:


Re: IP tunnel MTU

2012-10-29 Thread Tim Durack
On Mon, Oct 29, 2012 at 4:01 PM, Jared Mauch ja...@puck.nether.net wrote:

 On Oct 29, 2012, at 3:46 PM, Joe Maimon jmai...@ttec.com wrote:



 Templin, Fred L wrote:

 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.


 Essentially, it's time the network matured to the point where
 inter-networking actually works (again), seamlessly.

 I agree.


 Certainly fixing all the buggy host stacks, firewall and compliance devices 
 to realize that ICMP isn't bad won't be hard.

 - Jared

Wait till you get started on fixing the security consultants.

-- 
Tim:



Re: 1310nm optics over Corning LEAF G.655?

2012-11-28 Thread Tim Durack
On Fri, Jun 1, 2012 at 2:18 PM, Tim Durack tdur...@gmail.com wrote:

 Anyone run 1000BASE-LX/10GBASE-LR 1310nm optics over a ~10km Corning
 LEAF G.655 span?

 I understand this fiber is not optimized for such usage, but what is
 the real-world behaviour? I'm having a hard time finding hard data.

 (Normal optics will be 1550nm and DWDM over ~40-100km spans.)

 --
 Tim:


For the record, 1000BASE-LX 1310nm 10km and 20km rated optics did not work
across this span. Based on loss measurements, this was not due to optical
budget. This may be due to the cutoff wavelength of this fiber being 1360nm.

1000BASE-LX 1310nm 40km rated optics worked (with a 10dB attenuator.)

1000BASE-ZX 1550nm 80km rated optics worked (with a 10dB attenuator.)
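For anyone repeating this, a rough budget check with illustrative numbers
(assumptions, not measurements): Rx ~= Tx - span loss - pad = +2dBm - 4dB -
10dB = -12dBm, which sits comfortably inside a typical receive window of
roughly -3 to -22dBm.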

Hope that helps someone in the future. If anyone has any comments, I would
be interested to hear (either pm or public.)

Thanks,

-- 
Tim:


Re: The 100 Gbit/s problem in your network

2013-02-11 Thread Tim Durack
On Mon, Feb 11, 2013 at 8:11 PM, Joe Greco jgr...@ns.sol.net wrote:

   Multicast _is_ useful for filling the millions of DVRs out there with
   broadcast programs and for live events (eg. sports).  A smart VOD system
   would have my DVR download the entire program from a local cache--and
   then play it locally as with anything else I watch.  Those caches could
   be populated by multicast as well, at least for popular content.  The
   long tail would still require some level of unicast distribution, but
   that is _by definition_ a tiny fraction of total demand.

  One of us has a different dictionary than everyone else.

  Assume I have 10 million movies in my library, and 10 million active
  users.  Further assume there are 10 movies being watched by 100K users
  each, and 9,999,990 movies which are being watched by 1 user each.

  Which has more total demand, the 10 popular movies or the long tail?

  This doesn't mean Netflix or Hulu or iTunes or whatever has the
  aforementioned demand curve.  But it does mean my definition & yours do
  not match.

  Either way, I challenge you to prove the long tail on one of the serious
  streaming services is a tiny fraction of total demand.

 Think I have to agree with Patrick here, even if the facts were not to
 support him at this time.

 The real question is: how will video evolve?


Good question. I suspect it's going to look a lot like the evolution of
audio: Pandora, Grooveshark, Spotify etc. All unicast. CDN.

Live sports: how was the Olympics coverage handled? Unicast. CDN.

Multicast is dead. Feel free to disagree. :-)

Tim:


Re: ISP customer assignments

2009-10-05 Thread Tim Durack
 So now Verizon is in open revolt against ARIN. They positively refuse
 to carry /48's from legitimately multihomed users. Eff 'em. Perhaps
 Verizon would sooner see IPv6 go down in flames than see their TCAMs
 fill up again. Who knows their reasoning?

 Agree or disagree, it is indeed food for thought. One thing I can say
 with confidence: as a community we truly haven't grasped the major
 implications of an address space that isn't scarce coupled with a
 routing table that is.

Thing is, I'm an end user site. I need more than a /48, but probably
less than a /32. Seeing as how we have an AS and PI, PA isn't going to
cut it. What am I supposed to do? ARIN suggested creative subnetting.
We pushed back and got a /41. If IPv6 doesn't scratch an itch, why
bother?

There are plenty of high-profile end user sites in 2620::/23. Some
government (CIA), some popular (Facebook.) I don't think Verizon's
stand is going to last.

Tim:



Mexico City IP/Ethernet/Wave/Fiber/Colo/IXP etc...

2011-12-06 Thread Tim Durack
I'm looking for connectivity options in the Mexico City area. Initial
impressions suggest Mexico has a fairly closed market. That being
said:
Who offers good IP/BGP connectivity in and around Mexico City?
Who offers good Ethernet connectivity in and around Mexico City?
Who offers wave/fiber services in and around Mexico City?
Where are the colo/IXPs located?
Feedback on or off-list appreciated. I'm not looking for a salesman.
Thanks,
-- 
Tim:



Re: LX sfp minimum range

2012-01-25 Thread Tim Durack
On Wed, Jan 25, 2012 at 2:26 PM, jon Heise j...@smugmug.com wrote:
 we are moving a router between 2 data centers and we only have LX sfp's for 
 connection, is there any issue using LX sfp's in a short range deployment ?

A Cisco 1000BASE-LX optic has the following spec:

http://www.cisco.com/en/US/prod/collateral/modules/ps5455/ps6577/product_data_sheet0900aecd8033f885.html

-3dBm maximum transmit power, -3dBm maximum receive (overload). That means you
can run it over any length without an attenuator. (We use LX for everything.)
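Spelling out the arithmetic: worst case is a zero-loss patch, so received
power = maximum transmit - link loss = -3dBm - 0dB = -3dBm, which is right at
the receiver's maximum. No attenuator needed, even back-to-back.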

-- 
Tim:



NYC to DEU packet loss

2012-05-04 Thread Tim Durack
Trying to troubleshoot packet loss from NYC to DEU. Traceroute shows:

tdurack@2ua82715mg:~$ traceroute -I 194.25.250.73
traceroute to 194.25.250.73 (194.25.250.73), 30 hops max, 60 byte packets

 snip
 4  216.55.2.85 (216.55.2.85)  1.694 ms  1.698 ms  1.698 ms
 5  vb1010.rar3.nyc-ny.us.xo.net (216.156.0.17)  4.788 ms  4.792 ms  4.791 ms
 6  207.88.14.178.ptr.us.xo.net (207.88.14.178)  1.684 ms  1.461 ms  1.452 ms
 7  62.157.250.245 (62.157.250.245)  40.457 ms  42.980 ms  42.982 ms
 8  hh-eb3-i.HH.DE.NET.DTAG.DE (62.154.32.134)  129.417 ms *  129.422 ms
 9  194.25.250.73 (194.25.250.73)  139.501 ms  136.192 ms  139.236 ms

Packet loss of approx. 20% affects hops 7 and 8, along with end host
9. Loss appears to be data-plane, not control-plane rate limiting.
Affected customer confirms this too :-)

62.157.250.245 is in Deutsche Telekom address space, so I'm guessing
this is either a DTAG problem or an issue between XO and DTAG.

I have a ticket open with XO, but I'm having a hard time figuring out
what is ~40ms away from NYC on a path to DEU. Any idea what the
physical path is?

-- 
Tim:



Equinix Direct

2012-05-24 Thread Tim Durack
Anyone got experience with Equinix Direct?

Looks like an interesting product from the glossy, but rather light on
details. I'm interested in the technical specs and real-life
experience.

(Not looking for sales. I've got a purchasing d00d for that.)

Thanks,

--
Tim:



Re: Equinix Direct

2012-05-25 Thread Tim Durack
Thanks for the response to my question.

What I have received confirms this is basically a metered IXP with
route servers and a mix of paid transit/peering options. Will be
interesting to see what the participant mix is.

It does concern me that the only connectivity options are FE/GE, no
10GE at this time. Makes me wonder about how serious the service is,
and whether I will end up with a more congested service than simply
getting a mix of transit providers myself.

Anyway, thanks again to all who responded.

-- 
Tim:



1310nm optics over Corning LEAF G.655?

2012-06-01 Thread Tim Durack
Anyone run 1000BASE-LX/10GBASE-LR 1310nm optics over a ~10km Corning
LEAF G.655 span?

I understand this fiber is not optimized for such usage, but what is
the real-world behaviour? I'm having a hard time finding hard data.

(Normal optics will be 1550nm and DWDM over ~40-100km spans.)

-- 
Tim:



Re: 1310nm optics over Corning LEAF G.655?

2012-06-01 Thread Tim Durack
On Fri, Jun 1, 2012 at 2:30 PM, Kevin L. Karch kevinka...@vackinc.com wrote:
 Tim

 Yes we have built several links with the 1310nm devices on Corning LEAF. One
 span distance was 14 KM.

 Can we offer you a quote on optics and installation support?

That's a kind offer, but we are quite well setup :-) Just looking for
field experience.

14km with standard 1310nm optics? No issues with the cutoff wavelength
being 1310nm? GigE or 10GigE?

-- 
Tim:



XO/DTAG Contact?

2012-06-13 Thread Tim Durack
Looking for a technical contact within XO and/or DTAG, preferably one
who can interpret a traceroute accurately :-)

Please hit me up offline.

Thanks,

-- 
Tim:



Re: NYC to DEU packet loss

2012-06-23 Thread Tim Durack
As suspected, this ended up being an XO/DTAG peering issue. Took a
long time to get sorted out, but thanks to any and all who assisted!

Tim:

On Fri, May 4, 2012 at 12:01 PM, Tim Durack tdur...@gmail.com wrote:
 Trying to troubleshoot packet loss from NYC to DEU. Traceroute shows:

 tdurack@2ua82715mg:~$ traceroute -I 194.25.250.73
 traceroute to 194.25.250.73 (194.25.250.73), 30 hops max, 60 byte packets

-- 
Tim:



Re: transcievers/amplifiers for 150 km fiber run

2008-10-10 Thread Tim Durack
These guys claim up to 180km:

http://www.bookham.com/datasheets/transceivers/IGP-28111.cfm

Tim:

On Fri, Oct 10, 2008 at 4:23 PM, Fletcher Kittredge
[EMAIL PROTECTED] wrote:

 Thanks to all that replied.   A bit more background:   By regulation, the
 local ILEC is required to supply us with dark fiber where available.   They
 have taken the regulatory stance that it is not technically possible to use
 dark fiber runs of more than 60 miles (prior, their regulatory stance was
 runs of more than twenty miles were not technically feasible.)  Our
 counter-argument has been that we have existing fiber runs of 63 miles and
 59 miles that work well without special equipment.   We are now arguing
 about a particular fiber run in rural Maine of about 91 miles.   Our
 position is it is technically feasible, depending on fiber characteristics,
 to light 91 miles of fiber.  Their position is that runs of more than 60
 miles are not feasible.   I was hoping to bolster our argument by pointing
 to data sheets of optical transcievers rated up to 150 km.  Then, after we
 get the fiber, I was hoping to buy said equipment.

 regards,
 Fletcher

 On Fri, Oct 10, 2008 at 2:50 PM, Fletcher Kittredge
 [EMAIL PROTECTED] wrote:

  We are looking to light a two strand fiber link of about 95 miles (or
  150km).   It would be worth a lot to us not to have repeaters.   We are
  hoping for Gigabit Ethernet.   Sonet is possible but a less attractive
  solution.  Are there options for this sort of distance?   The longest
  current link we have is about 65 miles.  I understand the transmission
  characteristics of the fiber will affect distance of transmission.
 
  regards,
  Fletcher
 
  --
  Fletcher Kittredge
  GWI
  8 Pomerleau Street
  Biddeford, ME 04005-9457
  207-602-1134
 



 --
 Fletcher Kittredge
 GWI
 8 Pomerleau Street
 Biddeford, ME 04005-9457
 207-602-1134



Re: NAT66 and the subscriber prefix length

2008-11-18 Thread Tim Durack
On Fri, Nov 14, 2008 at 2:28 PM, Mikael Abrahamsson [EMAIL PROTECTED] wrote:

 On Fri, 14 Nov 2008, [EMAIL PROTECTED] wrote:

  Not long ago, ARIN changed the IPv6 policy so that
 residential subscribers could be issued with a /56
 instead of the normal /48 assignment. This was done
 so that ISPs with large numbers of subscriber sites
 would not exhaust their /32 (or larger) allocations
 too soon. Since these ISPs are allowed to assign
 a /56 to residential subscriber sites, their initial
 IPv6 allocation will last a lot longer and they won't
 have to apply for an additional allocation while
 everyone is getting up to speed with an IPv6 Internet.


 We returned our /32 for a /25 (with /22 being reserved) and current plan is
 to hand out /48s to everybody (unless they need even more space, then
 they'll have to apply).

 So, doing /56 to end users just because you happen to have a /32 right now
 sounds like a bad plan, it doesn't take that many hours to get a larger
 space if you can justify it (which wasn't that hard for us).

 We received our /32 (as a /35 I think) back in 2000 or so, policy has
 changed since then, with RIPE it's not that hard to get a much larger space
 with a long term growth plan. My hope is that we'll make do with this /22
 space for at least 5-10 years (67 million customer /48s is quite a lot),
 unless something really big happens, and then we'll just have to get an even
 larger space.

 So message should be that /48 to end users is the way to go, and this
 should suit residential and SME market without any additional administrative
 overhead depending on customer size.

 --
 Mikael Abrahamssonemail: [EMAIL PROTECTED]


This raises questions for me: we are a mixed enterprise/campus environment.
Recently got a /45 assigned, so we have a /48 per site (it was some work to
convince ARIN that fancy subnetting to make a /46 stretch a little further
made no sense.)

We have also started offering residential Internet to those living on
campus, which has been very popular (no suprise.) If I'm expected to assign
a /48 per residential user, I'm already out of address space. Should I be
requesting a /32? Is it acceptable to carve the /32 up a little for IPv4
style multi-homing?

I'd rather come to terms with this now before I do any meaningful
deployment.

Tim:


Re: NAT66 and the subscriber prefix length

2008-11-18 Thread Tim Durack
On Tue, Nov 18, 2008 at 2:33 PM, Crist Clark [EMAIL PROTECTED] wrote:

  On 11/18/2008 at 11:03 AM, Tim Durack [EMAIL PROTECTED] wrote:
  On Fri, Nov 14, 2008 at 2:28 PM, Mikael Abrahamsson [EMAIL PROTECTED]
 wrote:
 
  On Fri, 14 Nov 2008, [EMAIL PROTECTED] wrote:
 
   Not long ago, ARIN changed the IPv6 policy so that
  residential subscribers could be issued with a /56
  instead of the normal /48 assignment. This was done
  so that ISPs with large numbers of subscriber sites
  would not exhaust their /32 (or larger) allocations
  too soon. Since these ISPs are allowed to assign
  a /56 to residential subscriber sites, their initial
  IPv6 allocation will last a lot longer and they won't
  have to apply for an additional allocation while
  everyone is getting up to speed with an IPv6 Internet.
 
 
  We returned our /32 for a /25 (with /22 being reserved) and current plan
 is
  to hand out /48s to everybody (unless they need even more space, then
  they'll have to apply).
 
  So, doing /56 to end users just because you happen to have a /32 right
 now
  sounds like a bad plan, it doesn't take that many hours to get a larger
  space if you can justify it (which wasn't that hard for us).
 
  We received our /32 (as a /35 I think) back in 2000 or so, policy has
  changed since then, with RIPE it's not that hard to get a much larger
 space
  with a long term growth plan. My hope is that we'll make do with this
 /22
  space for at least 5-10 years (67 million customer /48s is quite a lot),
  unless something really big happens, and then we'll just have to get an
 even
  larger space.
 
  So message should be that /48 to end users is the way to go, and this
  should suit residential and SME market without any additional
 administrative
  overhead depending on customer size.
 
  --
  Mikael Abrahamssonemail: [EMAIL PROTECTED]
 
 
  This raises questions for me: we are a mixed enterprise/campus
 environment.
  Recently got a /45 assigned, so we have a /48 per site (it was some work
 to
  convince ARIN that fancy subnetting to make a /46 stretch a little
 further
  made no sense.)

 A /45? I thought all allocations were on nibble borders for
 IP6.ARPA considerations.


2620::/23 is PI. ARIN makes assignments to end user sites of anywhere
between /48 and /40, depending on your justification.

ARIN wanted to give us a /46 for 5 sites, which made no sense. We pushed
back for a /45, and that's what they gave us. Still not convinced it's going
to be enough.

Tim:


Re: v6 DSL / Cable modems

2009-02-06 Thread Tim Durack
On Fri, Feb 6, 2009 at 8:51 AM, Jack Bates jba...@brightok.net wrote:

 Joe Loiacono wrote:


 Indeed it does. And don't forget that the most basic data object in the
 routing table, the address itself, is 4 times as big.


 Let's also not forget, that many organizations went from multiple
 allocations to a single allocation. If we all filter anything longer than
 /32, we'll rearrange the flow of traffic that many over the years have
 altered through longer prefixes. Even I suspect I may occasionally have to
 let a /40 out now and then to alter its traffic from the rest of the
 aggregate. Traffic comes to you as it wants to come to you. The only pseudo
 remedy that currently exists is to move some prefixes over to a different
 path. If you only have a /32, that'll be a bit hard.

 This, more than anything, is what will affect this list and the people on
 it where IPv6 is concerned. Filtering longer than /33, 35, 40? Dare we go to
 /48 and treat them as the new /24? I know for myself, traffic manipulation
 can't begin until /40 (unless I split them further apart).


Given that ARIN at least is assigning end-user /48s out of 2620::/23, it
would be useful to accept these announcements. If not, end-user PI is dead in
the water. Some providers might like that. End-users probably won't.

Tim:


Re: Nipper and Cisco configuration results

2009-04-04 Thread Tim Durack
 The problem I have with both RAT and Nipper is they're geared towards
 security and I'm more interested in verifying that the routers are
 configured correctly.  What kind of tools are people using for that?
 For an example of the type of thing I'm interested in, see
 filter_audit in the presentation at
 http://www.nanog.org/mtg-0210/abley.html

Homebrew: pull configs on a regular basis. Decompose monolithic
configs into a file tree of configlets. Diff the configlet tree against
peer and template devices. Invert the device-specific configlet tree into
an element-specific tree. This helps diffs stand out for config elements
that should be consistent.

Put it all into a git repository for revision control. Run gitweb for
the user interface.

Catches most of the obvious stuff, and gives a nice history of
changes. The configlet tree also gets used for grep | xargs style
pipelines for automation scripts.

Would like to improve the diff process to mask out common information
(ip address, hsrp priority etc.) This would help reduce the amount of
diff noise for interfaces.
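A minimal sketch of the idea, with hypothetical host and path names (the
real system has more moving parts):

# pull a config, explode it into configlets at the '!' separators, commit
scp router1:running-config configs/router1.conf
csplit -z -f configlets/router1. configs/router1.conf '/^!/' '{*}'
git add configs configlets
git commit -m 'nightly pull: router1'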

We looked at free (RANCID, Ziptie) and expen$ive (Opsware) but none of
them really did what we wanted.

Tim:



Re: options for full routing table in 1 year?

2009-04-08 Thread Tim Durack
 Cisco 6500/7600 you replace SUP32 or SUP720 with SUP720-3BXL
        ...if I understand it, no other cards need replaced?
        (note that this disagrees with my understanding of how their FIB/CEF
 works so I'm curious about this)

If you have linecard DFCs they would need to be XLs also.

Tim:



Re: .255 addresses still not usable after all these years?

2008-06-13 Thread Tim Durack
Funny this discussion surfaced now - I got bitten by this recently.
Was using .255 for NAT on a secondary firewall. When the primary
failed over, parts of the Internet became unreachable...

Tim:

On Fri, Jun 13, 2008 at 9:51 PM, Mark Smith
[EMAIL PROTECTED] wrote:
 On Fri, 13 Jun 2008 13:43:36 -0700
 Kameron Gasso [EMAIL PROTECTED] wrote:

 Christopher Morrow wrote:
  go-go-actiontec (vol sends those out, god do they suck...)

 Crappy CPE's are exactly why we don't hand out .0 and .255 addresses in
 our DHCP pools. :(
 --
 Kameron Gasso | Senior Systems Administrator | visp.net
 Direct: 541-955-6903 | Fax: 541-471-0821


 We avoid them because in the interest of security, customers who
 would be assigned .0 and .255 have trouble accessing their online
 banking and other financial websites. With IPv4 address space running
 out, we'll probably inevitably have to start handing them out and then
 get our customers to complain to their bank etc.


 Regards,
 Mark.

 --

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear





Re: I don't need no stinking firewall!

2010-01-13 Thread Tim Durack
Lots of interesting technical information in this thread. Mixed with a
healthy dose of religion/politics :-)

I suspect that most people are going to keep doing what they are doing.

In our environment, at the transport level, we have moved from
stateful towards stateless, as it has proved to be operationally
simpler and more resilient. At the same time some of our application
people have seen the need to put their servers behind stateful Layer
7 firewalls (I say why stop at Layer 7?)

Here is a thought experiment:

Replace all the routers on the Internet with stateful firewalls. What happens?

Replace all the stateful firewalls on the Internet with stateless
packet filters. What is the result?

-- 
Tim:
Sent from Brooklyn, NY, United States



Re: Using /126 for IPv6 router links

2010-01-25 Thread Tim Durack
On Mon, Jan 25, 2010 at 1:01 PM, TJ trej...@gmail.com wrote:
 -Original Message-
 From: Richard A Steenbergen [mailto:r...@e-gerbil.net]
 Sent: Monday, January 25, 2010 12:08
 To: TJ
 Cc: nanog@nanog.org
 Subject: Re: Using /126 for IPv6 router links

 On Mon, Jan 25, 2010 at 09:10:11AM -0500, TJ wrote:
  While I agree with parts of what you are saying - that using the simple
  2^128 math can be misleading, let's be clear on a few things:
  *) 2^61 is still very, very big.  That is the number of IPv6 network
  segments available within 2000::/3.
  *) An end-user should get something between a /48 and a /56, _maybe_ as
 low
  as a /60 ... hopefully never a /64.  Really.
  **) Let's call the /48s enterprise assignments, and the /56s home
  assignments ... ?
  **) And your /56 to /64 is NOT 1-256 IPs, it is 1-256 segments.

 It is if we are to follow the "always use a /64 as a single IP"
 guidelines. Not that I'm encouraging this, I'm just saying this is what
 we're told to do with the space. I for one have this little protocol
 called DHCP that does IP assignments along with a bunch of other things
 that I need anyways, so I'm more than happy to take a single /64 for
 house as a single lan segment (well, never minding the fact that my
 house has a /48).

 Interesting.  I have never seen anyone say always use a /64 as a single IP
 ... perhaps you mean as an IP segment or link?
 You are assigned a /64 if it is known that you only need one segment,
 which yields as many IPs as you want (18BillionBillion or so) - and the
 reality is that a home user should get a /56 and an enterprise should get a
 /48, at the very least - some would say a /48 per site.


  **) And, using the expected /48-/56, the numbers are really 256-64k
 subnets.
 ...
  Note: All we've really done is buy ourselves an 8 to 16 bit improvement
 at
  every level of allocation space
  *) And you don't think 8-16 bits _AT EVERY LEVEL_ is a big deal??

 I'm not saying that 8-16 bits isn't an improvement, but it's a far cry
 from the bazillions of numbers everyone makes IPv6 out to be. By the
 time you figure in the overhead of autoconfiguration, restrictive
 initial deployments, and the "now that the space is much bigger, we
 should be reallocating bigger blocks" logic at every layer of
 redistribution, that is what you're left with. So far all we've really
 done with v6 is created a flashback to the days when every end user
 could get a /24 just by asking, every enterprise could get a /16 just by
 asking, and every big network could get a /8 just by asking, just bit
 shifted a little bit. That's all well and good, but it isn't a
 bazillion. :)

 There are some similarities between IPv6 and old classful addressing, but
 the bit-boundaries chosen were intentionally made big and specifically
 factoring in the then-ongoing scarcity (Ye olde Class B exhaustion).  The
 scale of the difference *is* the difference.  I am not quite sure what a
 bazillion is, but when we get into the Billion Billion range I think that is
 close enough! :)


 /TJ




2^128 is a very big number. However, from a network engineering
perspective, IPv6 is really only 64 bits of network address space. 2^64
is still a very big number.

An end-user assignment /48 is really only 2^16 networks. That's not
very big once you start planning a human-friendly repeatable number
plan.

An ISP allocation is /32, which is only 2^16 /48s. Again, not that big.
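Spelled out:

/48 in /64 subnets: 2^(64-48) = 2^16 = 65,536
/32 in /48s:        2^(48-32) = 2^16 = 65,536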

Once you start planning a practical address plan, IPv6 isn't as big as
everybody keeps saying...
-- 
Tim:
Sent from Brooklyn, NY, United States



Re: Using /126 for IPv6 router links

2010-01-25 Thread Tim Durack
On Mon, Jan 25, 2010 at 2:23 PM, Ryan Harden harde...@uiuc.edu wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Our numbering plan is this:

 1) Autoconfigured hosts possible? /64
 2) Autoconfigured hosts not-possible, we control both sides? /126
 3) Autoconfigured hosts not-possible, we DON'T control both sides? /64
 4) Loopback? /128

 Within our /48 we've carved it into (4) /50s.
 * First, Infrastructure. This makes ACLs cake.
 ** Within this /50 are smaller allocations for /126s and /128s and /64s.
 * Second, User Subnets (16k /64s available)
 ** All non-infrastructure subnets are assigned from this pool.
 * Third, Reserved.
 * Fourth, Reserved.

 We believe this plan gives us the most flexibility in the future. We
 made these choices based upon what works the best for us and our tools
 and not to conserve addresses. Using a single /64 ACL to permit/deny
 traffic to all ptp at the border was extremely attractive, etc.

 - --

This is what we have planned:

2620::xx00::/41       AS-NETx-2620-0-xx00

2620::xx00::/44       Infrastructure

2620::xx01::/48       Pop1 Infrastructure
2620::xx01:0000::/64  Router Loopback           (2^64 x /128)
2620::xx01:0001::/64  Transit net               (2^48 x /112)
2620::xx01:0002::/64  Server Switch management
2620::xx01:0003::/64  Access Switch management
2620::xx0f::/48       Pop16 Infrastructure

2620::xx10::/44       Sparse Reservation
2620::xx20::/44       Sparse Reservation

2620::xx30::/44       Pop1 Services
2620::xx30::/48       Cust1 Services
2620::xx30:0001::/64  VLAN_1
2620::xx30:4094::/64  VLAN_4094
2620::xx31::/48       Cust2 Services
2620::xx31:0001::/64  VLAN_1
2620::xx31:4094::/64  VLAN_4094
2620::xx32::/48       Cust3 Services
2620::xx31:0001::/64  VLAN_1
2620::xx31:4094::/64  VLAN_4094
2620::xx32::/48       Cust4 Services
2620::xx31:0001::/64  VLAN_1
2620::xx31:4094::/64  VLAN_4094
2620::xx32::/48       RES-PD-32                 (4096 x /60)
2620::xx3f::/48       RES-PD-3f                 (4096 x /60)

2620::xx40::/44       Pop2 Services
2620::xx50::/44       Pop3 Services
2620::xx60::/44       Pop4 Services
2620::xx70::/44       Pop5 Services


This is a multi-campus network; customers are all internal. I have
had to squeeze Residential PDs down to /60s to make it fit. One PoP is
really three sites in one, which has had to be massaged into a single
PoP as well. To be safe, I'm thinking of adjusting loopbacks and
point-to-points to be /64s.

I'm reasonably happy with the plan, but it doesn't seem to have that
much room to grow.

-- 
Tim:
Sent from Brooklyn, NY, United States



Re: Using /126 for IPv6 router links

2010-01-25 Thread Tim Durack
On Mon, Jan 25, 2010 at 8:01 PM, Owen DeLong o...@delong.com wrote:

 2^128 is a very big number. However, from a network engineering
 perspective, IPv6 is really only 64bits of network address space. 2^64
 is still a very big number.

 An end-user assignment /48 is really only 2^16 networks. That's not
 very big once you start planning a human-friendly repeatable number
 plan.

 An end-user MINIMUM assignment (assignment for a single site) is
 a /48.  (with the possible exception of /56s for residential customers
 that don't ask for a /48).
 I have worked in lots of different enterprises and have yet to see one that
 had more than 65,536 networks in a single site.  I'm not saying they don't
 exist, but, I will say that they are extremely rare.  Multiple sites are a
 different
 issue.  There are still enough /48s to issue one per site.

Networks per site isn't the issue. /48s per organization is my
concern. Guidelines on assignment size for end-user sites aren't
clear. It comes down to the discretion of ARIN. That's why I like policy
proposal 106. It takes some of the guess-work/fudge-factor out of assignments.

 An ISP allocation is /32, which is only 2^16 /48s. Again, not that big.

 That's just the starting minimum.  Many ISPs have already gotten much larger
 IPv6 allocations.

Understood. Again, the problem for me is medium/large end-user sites
that have to justify an assignment to a RIR that doesn't have clear
guidelines on multiple /48s.

 Once you start planning a practical address plan, IPv6 isn't as big as
 everybody keeps saying...

 It's more than big enough for any deployment I've seen so far with plenty
 of room to spare.
 Owen


-- 
Tim:
Sent from Brooklyn, NY, United States



Re: Using /126 for IPv6 router links

2010-01-26 Thread Tim Durack
On Mon, Jan 25, 2010 at 6:20 PM, Nathan Ward na...@daork.net wrote:
 Why do you force POP infrastructure to be a /48? That allows you only 16 POPs 
 which is pretty restrictive IMO.
 Why not simply take say 4 /48s and sparsely allocate /56s to each POP and 
 then grow the /56s if you require more networks at each POP.

 You only have a need for 4 /64s at each POP right now, so the 256 that a /56 
 gives you sounds like more than enough, and up to 1024 POPs (assuming you 
 don't outgrow any of the /56s).

NRPM says:

6.5.4.3. Assignment to operator's infrastructure
An organization (ISP/LIR) may assign a /48 per PoP as the service
infrastructure of an IPv6 service operator. Each assignment to a PoP
is regarded as one assignment regardless of the number of users using
the PoP. A separate assignment can be obtained for the in-house
operations of the operator.

Currently living with mixed infrastructure/customer address space, so
I'm quite happy to separate this out. We will also have a /48 per-pop
for service we provide, such as DHCP/DNS/Web etc. Essentially we will
be a customer of our own infrastructure. I believe the above wording
allows for that.

 Also I'd strongly recommend not stuffing decimal numbers in to a hexadecimal 
 field. It might seem like a good idea right now to make the learning curve 
 easier, but it's going to make stuff annoying long term. You don't have 
 anything in IPv4 that's big enough to indicate the VLAN number and you've 
 lived just fine for years, so forcing it to be decimal like that isn't really 
 needed.
 You're much better off giving your staff the tools to translate between the 
 two, rather than burn networks in order to fudge some kind of human 
 readability out of it and sacrificing your address space to get it.

 % printf "%04x\n" 4095
 0fff
 % printf "%d\n" 0x0fff
 4095

 --
 Nathan Ward

Maybe so. Right now we convert VLAN IDs to the IPv4 3rd octet. Every
access switch gets a dedicated set of VLANs along these lines:

48, 348, 648, 1048 etc.

That leaves space for 128 access switches per POP, without having to
think about anything. The "not having to think" part is significant, as
it trades address space for less human engineering. That is also one of
our goals for IPv6 deployment.
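To make the trade-off concrete (made-up prefix; the %d form mimics the
keep-it-decimal habit, the %x form is the translation Nathan suggests):

 % printf "2001:db8:30:%d::/64\n" 1048
 2001:db8:30:1048::/64
 % printf "2001:db8:30:%x::/64\n" 1048
 2001:db8:30:418::/64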

-- 
Tim:



Re: Using /126 for IPv6 router links

2010-01-26 Thread Tim Durack
On Mon, Jan 25, 2010 at 10:55 PM, Christopher Morrow
morrowc.li...@gmail.com wrote:
 some of what you're saying (tim) here is that you could: (one of these)

 1) go to all your remote-office ISP's and get a /48 from each
 2) go to *RIR's and get /something to cover the number of remote
 sites you have in their region(s)
 3) keep on keepin' on until something better comes along?

This isn't really for remote offices, just our large campus sites.

 2)
  o justification in light of 'unclear' policies for an address block
 of the right size. NOTE: I don't think the policies are unclear, but
 that could be my misreading of the policies.

For me, this seems unclear:

6.5.4.2. Assignment of multiple /48s to a single end site
When a single end site requires an additional /48 address block, it
must request the assignment with documentation or materials that
justify the request. Requests for multiple or additional /48s will be
processed and reviewed (i.e., evaluation of justification) at the RIR
level.
Note: There is no experience at the present time with the assignment
of multiple /48s to the same end site. Having the RIR review all such
assignments is intended to be a temporary measure until some
experience has been gained and some common policies can be developed.
In addition, additional work at defining policies in this space will
likely be carried out in the near future.

  o will your remote-office's ISP's accept the /48's per site? (vz/vzb
 is a standout example here)

Not too worried about VZ. Given that large content providers are
getting end-site address space, I think they will have to adjust their
stance.

  o will your remote-office's have full reachability to the parts of
 the network they need access to? (remote ISP's filtering at/above the
 /48 boundary)

Remote offices aren't included in this plan.

 For the Enterprise still used to v4-land ipv6 isn't a win yet... for
 an ISP it's relatively[0] simple.

 -Chris

 0: address interfaces, turn up protocols, add 'security' assign
 customers /48's...(yes fight bugs/problems/'why is there a colon in my
 ip address?

 (what if you do have 200 offices in the US which aren't connected on a
 private network today?)


-- 
Tim:
Sent from Brooklyn, NY, United States



Re: Comcast IPv6 Trials

2010-01-28 Thread Tim Durack
On Thu, Jan 28, 2010 at 8:44 AM, Joakim Aronius joa...@aronius.com wrote:
 Excuse the newbie question: Why use public IP space for local CPE management 
 and VoIP? Doesn't DOCSIS support traffic separation?

 /J



Probably because RFC1918 is only 2^24+2^20+2^16 = 17,891,328 addresses
(assuming I got them all and my math is right.)

That makes it tough to manage unique devices across a large deployment.

-- 
Tim:
Sent from Brooklyn, NY, United States



Re: Comcast IPv6 Trials

2010-01-28 Thread Tim Durack
On Thu, Jan 28, 2010 at 4:42 PM, Chris Gotstein ch...@uplogon.com wrote:
 Typically the CPE address is private, not sure why they would use a
 public IP.  The MTA (VoIP) part of the modem would need a public IP if
 it was talking to a SIP server that was not on the same network.  Most
 smaller cable system outsource their VoIP to a reseller with a softswitch.

It's not necessarily public, just globally unique. Some companies
have more than 17,891,328 devices they want to manage in a centralized
fashion.

-- 
Tim:



Re: Datacenter for DR in northwestern NJ/NY

2010-02-03 Thread Tim Durack
On Tue, Feb 2, 2010 at 6:19 PM, Steven Bellovin s...@cs.columbia.edu wrote:

 On Feb 2, 2010, at 5:52 PM, Cerniglia, Brandon wrote:

 Cervalis has facilities in wappingers ny
 1.5 hours from NYC


 Hmm -- where do the fibers run from a facility like that?  Are they all homed
 to NYC, or are there runs to, say, Albany or Boston?

Cervalis (Wappinger Falls) is a decent facility.

There is at least one regional provider (Lightower) in there with a
good fiber foot-print. They can get you to mid-hudson valley, nyc, nj,
Long Island and Mass. I believe they cover PoPs in the Boston and
Albany area.

-- 
Tim:
Sent from Brooklyn, NY, United States



Re: BFD over p2p transport links

2010-02-05 Thread Tim Durack
On Fri, Feb 5, 2010 at 9:45 AM, Serge Vautour sergevaut...@yahoo.ca wrote:
 Hello,

 I'm being asked to look into using BFD over our P2P transport links. Is 
 anyone else doing this? Our transport links are all 10G Ethernet (LAN-PHY). 
 There's no alarming inside of LAN-PHY like there is in SONET. The transport 
 side should propagate a fiber break by stopping to send light on both ends. 
 This is enough to cause the router interfaces to drop and for protocols to 
 converge.

We only use BFD on L2 circuits.

Ethernet auto-neg includes limited link-fault signalling. It's not as
good as SONET, but it will detect link-failure/one-way link.

 Since LAN-PHY doesn't have any built end-end alarming, some folks believe 
 that we may encounter situations where a fiber break doesn't cause interfaces 
 do go down. Convergence would then have to wait for IGP hellos to detect the 
 problem.

Have not found GigE/10GigE to be a problem over fiber.

Have found BFD to be problematic depending on the implementation and
CPU load of your equipment.

The closer you can stay to the physical layer, the better off you are.

 Is anybody else running BFD over 10G LAN-PHY transport links? Any comments 
 around BFD for this application in general?

 Thanks,
 Serge



      __
 Looking for the perfect gift? Give the gift of Flickr!

 http://www.flickr.com/gift/





-- 
Tim:
Sent from Brooklyn, NY, United States



Re: IP4 Space

2010-03-05 Thread Tim Durack
On Fri, Mar 5, 2010 at 7:22 AM, Andy Davidson a...@nosignal.org wrote:
 On 04/03/2010 19:30, William Herrin wrote:
 On Thu, Mar 4, 2010 at 2:12 PM, Joel Jaeggli joe...@bogus.com wrote:
 handling the v6 table is not currently hard (~2600 prefixes) while long
 term the temptation to do TE is roughly that same in v6 as in v4, the
 prospect of having a bunch of non-aggregatable direct assignments should
 be much lower...
 Because we expect far fewer end users to multihome tomorrow than do today?

 The opposite, but a clean slate means multihomed networks with many v4
 prefixes may be able to be a multihomed network with just one v6 prefix.

Assuming RIR policy allows multi-homers to be allocated/assigned
enough v6 to grow appreciably without having to go back to the RIR. As
a multi-homed end-user, I don't currently find that to be the case.

-- 
Tim:



Re: FreeAxez raised flooring?

2010-03-05 Thread Tim Durack
On Fri, Mar 5, 2010 at 2:15 PM, Wayne E. Bouchard w...@typo.org wrote:
 Actually, my experience has been that most of the newer installations
 (last 5-7 years) that I have been able to see where raised floor is
 employed are also doing hot/cold rows.

We have/are building new datacenters with a raised floor plenum. Air
is directed into the racks from below, and ducted out of the top. No
hot/cold aisle, just lots of cold air to cool the equipment. It's an
AFCO rack design. Seems to be efficient so far.

-- 
Tim:



Re: FreeAxez raised flooring?

2010-03-05 Thread Tim Durack
 We have/are building new datacenters with a raised floor plenum. Air
 is directed into the racks from below, and ducted out of the top. No
 hot/cold aisle, just lots of cold air to cool the equipment. It's an
 AFCO rack design. Seems to be efficient so far.

 How do you measure efficiency?  How do you blow air on all the
 computers in the rack and not just the bottom one?

 Hot/cold aisles are going to be way more efficient, or at least more
 uniform.  Your systems are probably like most rack mount gear and
 designed to take air in the front, route it over the internals in an
 ideal way (possibly using baffles) and spitting it out through the
 back.  Hot/cold aisles work with this system.  Your way hits the bottom
 box and then spits the air out of the rack missing the top systems.

Same way you cool the top of a rack in a cold/hot aisle system. Blow
cold air up the front of the rack. We measure temperature at select
points in the rack. Keep the hottest spot below set point, and
everything is fine. The physics aren't that much different.

We are cooling 20kW per rack with this setup: 4 HP c-Class chassis per
rack. Works great.

-- 
Tim:



Re: CRS-3

2010-03-09 Thread Tim Durack
On Tue, Mar 9, 2010 at 2:51 PM, Brian Feeny bfe...@mac.com wrote:

 So who is going to be the first to deploy these?

 http://newsroom.cisco.com/dlls/2010/prod_030910.html


 - Download the entire Library of Congress in just over 1 second
 - Stream every motion picture ever created in less than four minutes

 If nothing else you gotta love the Cisco Marketing machine!

"This intelligence also includes carrier-grade IPv6 (CGv6)"

Can't wait to find out what this is.

-- 
Tim:



Re: IP4 Space

2010-03-23 Thread Tim Durack
On Tue, Mar 23, 2010 at 8:17 AM, William Herrin b...@herrin.us wrote:
 On Tue, Mar 23, 2010 at 3:40 AM, Owen DeLong o...@delong.com wrote:
 On Mar 22, 2010, at 10:27 PM, Mark Newton wrote:
 On 23/03/2010, at 3:43 PM, Owen DeLong wrote:
 With the smaller routing table afforded by IPv6, this will be less 
 expensive. As a result, I suspect there will be more IPv6 small 
 multihomers.
 That's generally a good thing.

 Puzzled:  How does the IPv6 routing table get smaller?

 Compared to IPv4?  Because we don't do slow start, so, major providers won't 
 be
 advertising 50-5,000 prefixes for a single autonomous system.

 On the other hand, smaller ASes still announce the same number, the
 hardware resource consumption for an IPv6 route is at least double
 that of an IPv4 entry, RIR policy implies more bits for TE
 disaggregation than is often possible in IPv4 and dual-stack means
 that the IPv6 routing table is strictly additive to the IPv4 routing
 table for the foreseeable future. Your thesis has some weaknesses.

Plus the RIRs are currently applying pressure to assign only the bare
minimum IPv6 address space to PI multi-homers (at least, the RIR I
deal with.) I can see this quickly leading to non-contiguous
assignments in the not too distant future.

Today I have enough address space to easily allocate /48s per site,
assuming a /64 per VLAN. But I can see the need to assign /56s per
switch port for dhcp-pd. If I were to assign a /48 per switch stack
(seems like a reasonable engineering decision), I'm quickly going to
burn through lots of /48s. I'm sure I could come up with clever ways
to save address space, but I'm wondering why when one of the promises
of IPv6 is to avoid having to think too hard about individual
assignments.
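The arithmetic behind the concern (port and stack counts will obviously
vary):

/56 per port out of a /48 per stack:  2^(56-48) = 256 ports per stack
/48 per stack out of a /41 like ours: 2^(48-41) = 128 stacks in total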

-- 
Tim:



Re: OECD Reports on State of IPv6 Deployment for Policy Makers

2010-04-10 Thread Tim Durack
On Sat, Apr 10, 2010 at 1:44 PM, Owen DeLong o...@delong.com wrote:

 On Apr 10, 2010, at 9:40 AM, William Herrin wrote:

 On Sat, Apr 10, 2010 at 12:31 AM, Randy Bush ra...@psg.com wrote:
 karine perset's work is, as usual, good enough that it should be seen in
 its original, not some circle-je^h^hid hack of a small part of it.

 http://www.oecd.org/dataoecd/48/8/44961688.pdf

 John,

 I'd like to call your attention to slide 8, the chart showing growth
 in fully working IPv6 deployments. Should that growth trend be allowed
 to continue, IPv4-only deployments can be expected to fall into the
 minority after another few hundred years.


 The upcoming conversion of IPv4 addressing into a zero-sum game (as a
 result of free pool depletion) is likely to increase this growth
 trend, but it's anybody's guess whether the new growth trend improves
 to something with a faster-than-linear feedback loop. And of course
 once free pool depletion hits, the cost to deploy additional IPv4
 systems starts to grow immediately, independent of pre-majority IPv6
 growth.

 In fact, IPv6 is already showing greater than linear acceleration in
 deployment, so, even though IPv4 hasn't run out yet, people are
 beginning to catch on.

 We might want to consider additional public policy incentives to kick
 the IPv6 growth rate into a higher gear.

 Such as?

 Owen




Notify all holders of a currently active AS they have been
allocated/assigned a /32. No fees. No questions.

To accept the allocation/assignment, it must be advertised within a 24
month period.

There is no shortage of available /32s in 2000::/3. There is a serious
shortage of meaningful deployment.
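The numbers behind "no shortage": 2000::/3 contains 2^(32-3) = 2^29 =
536,870,912 /32s, against a routing system with on the order of tens of
thousands of active ASes. The giveaway costs essentially nothing in
address space.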

-- 
Tim:



Re: OECD Reports on State of IPv6 Deployment for Policy Makers

2010-04-10 Thread Tim Durack
On Sat, Apr 10, 2010 at 5:11 PM, Nick Hilliard n...@foobar.org wrote:

 I'm puzzled as to why you might think that this would incentivise
 meaningful deployment of ipv6.

 Nick



It removes the hurdle of working with the RIR and/or getting
management buy-in to go negotiate for number resources.

(Our personal experience as a community/end-user network is that ARIN
wants justification for the minimum address space one can live with.
At this early stage of deployment, that raises concerns over whether
we have a workable address plan in place. We worked with ARIN to
eventually get a /41 assigned. With the prospect of assigning /56s to
every customer port we have on an edge switch, that's not going to
last long. You can probably argue we got the initial request wrong,
but it still means we have to go back and negotiate again, which we
haven't found to be much fun. That's holding us back.)

-- 
Tim:



Re: How do you put a TV station on the Mbone?

2011-04-29 Thread Tim Durack
On Fri, Apr 29, 2011 at 3:11 PM, Joel Jaeggli joe...@bogus.com wrote:
 On 4/29/11 10:12 AM, Jay Ashworth wrote:

 It turns out that as a content provider you can unicast video delivery
 without coordinating the admission of your content onto every edge
 eyeball network on the planet. It's cheap enough that it makes money on
 fairly straight-forward internet business models and it apparently scales
 to meet the needs of Justin Bieber fans.



Imagine: multicast internet radio! Awesome!

I have a feeling streaming is going to stay unicast.

Multicast is a great technical solution in search of a good business problem.

-- 
Tim:



Re: How do you put a TV station on the Mbone?

2011-05-11 Thread Tim Durack
On Wed, May 4, 2011 at 7:19 PM, Tim Durack tdur...@gmail.com wrote:
 On Wed, May 4, 2011 at 6:20 PM, Jay Ashworth j...@baylink.com wrote:

 No business is entitled to protection of its business model.

 Unless it has a market monopoly, deep pockets, and lobbyist friends.


http://arstechnica.com/tech-policy/news/2011/05/after-approving-comcastnbc-deal-fcc-commish-becomes-comcast-lobbyist.ars

I rest my case.

-- 
Tim:



GTT/Inteliquent/nLayer

2013-07-31 Thread Tim Durack
Any experience/comments on the GTT Global eXpress service? Looks
interesting but odd. Why would I use a virtual IXP? Who participates?

Comments on-list or off-list are fine.

-- 
Tim:


Re: How big is the Internet?

2013-08-14 Thread Tim Durack
Not as big as the one that got away... (IPv6)


On Wed, Aug 14, 2013 at 10:32 AM, Sean Donelan s...@donelan.com wrote:


 Researchers have complained for years about the lack of good
 statistics about the internet for a couple of decades, since the
 end of NSFNET statistics.

 What are the current estimates about the size of the Internet, all IP
 networks including managed IP and private IP, and all telecommunications
 including analog voice, video, sensor data, etc?

 CAIDA, ITU, Telegeography and some vendors like Cisco have released
 forecasts and estimates.  There are occasional pieces of information
 stated by companies in their investor documents (SEC 10-K, etc).





-- 
Tim:


Re: Typical warranty for generic DWDM transceivers

2013-08-20 Thread Tim Durack
The vendor-locked optics issue causes more trouble than it is worth. There
really needs to be some kind of aftermarket ruling on network equipment,
something along the lines of:

http://en.wikipedia.org/wiki/Aftermarket_(automotive)

Tim:


On Tue, Aug 20, 2013 at 9:49 AM, Clayton Zekelman clay...@mnsi.net wrote:



 FWIW, I've never had an issue with third party optics.

 Particularly if they're OEM Finisar, JDSU, etc...


 At 09:35 AM 20/08/2013, Jay Ashworth wrote:

 - Original Message -
  From: Brandon Ross br...@pobox.com


  She has sold many thousands of optics, all with lifetime warranties.
  Many of them to very large and clueful organizations, many of whom are
  represented here on NANOG. Of those thousands sold, I can count less
  than 20 that have been returned.
 
  I've also worked for VARs in the past, and work with several of
  them today, selling new OEM branded optics. I've found a MUCH higher
  percentage of OEM optics having to be returned to the manufacturer.
 
  Of course, take my report with a grain of salt.

 Not at all: No one is gonna stop buying Cisco cause a Cisco optic died.

 Third-party manufacturers don't have that built in cushion, so it's not
 unreasonable that they might pay the higher degree of attention to
 reliability that your off-the-cuff statistics imply.

 You do want to go third-party, though, not fourth-party or below. :-)

 Cheers,
 -- jra
 --
 Jay R. Ashworth                  Baylink                   j...@baylink.com
 Designer                   The Things I Think                       RFC 2100
 Ashworth & Associates     http://baylink.pitas.com        2000 Land Rover DII
 St Petersburg FL USA             #natog                      +1 727 647 1274


 ---

 Clayton Zekelman
 Managed Network Systems Inc. (MNSi)
 3363 Tecumseh Rd. E
 Windsor, Ontario
 N8W 1H4

 tel. 519-985-8410
 fax. 519-985-8409




-- 
Tim:


Cogent multi-hop BGP

2013-08-28 Thread Tim Durack
I was under the impression Cogent no longer did the multi-hop BGP thing,
but then I got a copy of their NA user guide, and saw the peer-a/peer-b
configuration. Not a fan.

Anyone know if this is still required for Cogent IP transit service?
(on/off list is fine.)

-- 
Tim:


Re: Cogent multi-hop BGP

2013-08-28 Thread Tim Durack
As an update to interested parties: I have been informed that Cogent no
longer does the A/B peer config. This is apparently a documentation bug.


On Wed, Aug 28, 2013 at 10:20 AM, Tim Durack tdur...@gmail.com wrote:

 I was under the impression Cogent no longer did the multi-hop BGP thing,
 but then I got a copy of their NA user guide, and saw the peer-a/peer-b
 configuration. Not a fan.

 Anyone know if this is still required for Cogent IP transit service?
 (on/off list is fine.)

 --
 Tim:




-- 
Tim:


Re: subrate SFP?

2013-08-30 Thread Tim Durack
I think this is a great idea. Maybe not a huge market, but I would buy
them, instead of having to use dumb transceivers.

It would be interesting to have some other smart SFP options too, like
macsec for example...

Tim:


On Fri, Aug 30, 2013 at 5:00 AM, Saku Ytti s...@ytti.fi wrote:

 I actually emailed RAD, MethodE and Avago yesterday and pitched the idea.

 MiTOP is my exact justification why it should technically be feasible.

 I guess it would be easier to pitch, if there would be commitment to buy,
 but I don't personally need many units, just 1-2 here and there.



 On 30 August 2013 11:56, Brandon Butterworth bran...@rd.bbc.co.uk wrote:

   There is absolutely no reason that you couldn't deliver 'media
 converter'
   or '2 port switch' in a SFP casing
 
  Yes, similar devices exist
 
  http://www.rad.com/10/SFP-Format-TDM-Pseudowire-Gateway/10267/
 
  so it probably just needs more demand
 
  brandon
 



 --
   ++ytti




-- 
Tim:


TWC / MTC broadband

2013-10-01 Thread Tim Durack
Anyone alive at TWC and/or MTC broadband?

Looks like AS36100 (MTC Broadband) is incorrectly announcing 72.43.125.0/24.
This is causing problems for TWC users who are in 72.43.125.0/24

-- 
Tim:


Tail-F NCS? (Or similar network configuration management.)

2014-02-13 Thread Tim Durack
Looking for real-world experience with Tail-f NCS (or similar network
configuration management.)

Not looking for RANCID; we have a homebrew config collection that works
well. Looking for something significantly better than I can write myself.

Not looking for sales either, I have people for that :-)

On/off list is fine.

-- 
Tim:


Re: Managing IOS Configuration Snippets

2014-02-27 Thread Tim Durack
On Thu, Feb 27, 2014 at 8:58 AM, Saku Ytti s...@ytti.fi wrote:

 On (2014-02-26 17:37 -0500), Robert Drake wrote:

  Consider looking at Tail-F's NCS, which according to marketing
  presentations appears to do everything I want right now.  I'd like
  to believe them but I don't have any money so I can't test it out.
  :)

 Tail-F is probably least bad option out there.

 In configuration management, this is super easy:

 DB => Template => Network

 This is super hard:

 Network => DB


 The first one keeps all platform specific logic in flat ascii files filled
 with variables from template.
 When you introduce new product, feature, vendor to network, you only add
 new
 ascii templates, extremely easy, no platform-specific logic in DB.

 The second one every little change in network, requires parser changes
 trying
 to model it back to DB. This is not sustainable. We can kid ourselves that
 NetCONF/YANG will solve this, but they won't. SNMP is old technology, when
 new
 feature comes to vendor, it may take _years_ before MIB comes. There is no
 reason to suspect you will be able to get feature out via NetCONF just
 because
 it is there. And if you can't do it 100% then you have to write parser
 which
 can understand it.

 You only need the second one, in case 100% is not from DB. But it is
 actually
 trivial to produce 100% from DB. You don't want DB to model base
 configuration, that's lot of work for no gain, that'll come from template
 or
 at most DB vendor-specific-blob.
 Then after you push configuration from DB to network, you immediately
 collect
 configuration and create relation of DB-config 2 network-config, now you
 can
 keep ensuring network has correct config. If it does not have, you don't
 know
 why not, you can't fix the error itself, but you can reprovision the whole box,
 so
 you do get configuration conformance check, it's just very crude.

 But the alternative, trying to understand network config, is just never
 ending
 path to pain. If someone is going to do it, model it to python or ruby
 ORM
 and put it in github so others can contribute and we don't need to do it
 alone.

 --
   ++ytti


Agree with this.

We started out with rancid, then quickly moved to a homebrew scp- and
git-backed system with webgit/cgit as the user interface. If you are lucky,
your network equipment supports advanced features like ssh keys. If not, you
might be stuck using sshpass to ease config collection.

Built a config parsing system that decomposes monolithic configs into
configlet files. MD5sum each configlet and use the hash as part of the
filename; you can then see version information for each part of the config
tree. Quickly realized that maintaining this system is a full-time job, due
to the advanced state of network equipment software...
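
For the curious, the configlet idea is roughly this - a minimal sketch in
Python, assuming simple IOS-style indentation and a hypothetical on-disk
layout (not our actual tooling):

import hashlib
import pathlib

def split_configlets(config_text):
    # A new configlet starts at every non-indented, non-comment line;
    # indented lines belong to the preceding section. (Assumes plain
    # one-space IOS indentation, no banners/heredocs.)
    configlets, current = [], []
    for line in config_text.splitlines():
        if line and not line.startswith((" ", "!")) and current:
            configlets.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        configlets.append("\n".join(current))
    return configlets

def store_configlets(config_text, outdir="configlets"):
    # Write each configlet to <name>.<md5>.cfg so version drift is visible
    # in the filename and diffs reduce to comparing directory listings.
    out = pathlib.Path(outdir)
    out.mkdir(exist_ok=True)
    for snippet in split_configlets(config_text):
        digest = hashlib.md5(snippet.encode()).hexdigest()[:12]
        first = (snippet.strip().splitlines() or ["section"])[0]
        name = first.replace(" ", "_").replace("/", "-")[:40]
        (out / (name + "." + digest + ".cfg")).write_text(snippet + "\n")

store_configlets(pathlib.Path("router1.cfg").read_text())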

Now looking at Tail-F NCS. Demo is impressive. I'm hopeful.

Stating the obvious: the software running on most network equipment is of
poor quality. The tools to manage this are a combination of high quality
engineers and homebrew tools. Vendor tools are of a similar quality to the
equipment software. I'd like to think SDN is an attempt to improve this,
but I have my doubts.

-- 
Tim:


Re: Managing IOS Configuration Snippets

2014-02-27 Thread Tim Durack
On Thu, Feb 27, 2014 at 9:50 AM, Ryan Shea ryans...@google.com wrote:

 A couple more thoughts, regarding

 Network => DB

 I completely agree that trying to use the network config itself as the
 authority for what we intend to be on a device is not the right long-term
 approach. There is still a problem with Network => DB that I see. Assuming
 you have *many* devices, that may or may not be up at a given time, or may
 be in various stages of turn-up / burn-in / decom it is expected that a
 config change will not successfully make it to all devices. There are other
 timing issues, like a config built for a device being turned up, followed
 by a push of an update to all devices that succeeds, followed by the
 final turn-up of this device. Even if you have a fancy config pushing
 engine, let's just take as a given that you'll need to scrub through your
 rancid-git backups to determine what needs to be updated.

 Regarding the MD5 approach, let's also think that configlets could have
 no commands in them. In the NTP example I had before, if we wanted to
 remove an NTP server the configlet would need the no version, but the
 rancid backup obviously would not have this. I'm not trying to work a unit
 test assertion framework here either. Some vendors have more robust
 commenting, and this can be quite convenient for explicitly stating what
 was pushed to the device. What are you using in your network... banner,
 snmp-location, hope, prayer?


We don't do this, but the only flexible commenting in IOS style configs is
ACLs.

You could have an ACL that contains remarks only, and include version
information:

ip access-list CFG-VER
 remark CFG-VER-NTP 1.0.3
 remark CFG-VER-VTY 4.3.2
end

You could break this into individual ACLs if you prefer:

ip access-list CFG-VER-NTP
 remark CFG-VER-NTP 1.0.3
end

ip access-list CFG-VER-VTY
 remark CFG-VER-VTY 4.3.2
end

Seems ridiculous, but that is the sorry state of the network OS.

-- 
Tim:


Why IPv6 isn't ready for prime time :-)

2014-03-27 Thread Tim Durack
NANOG arguments on IPv6 SMTP spam filtering.

Deutsche Telekom discusses IPv4-IPv6 migration:

https://ripe67.ripe.net/presentations/131-ripe2-2.pdf

Facebook goes public with their IPv4-IPv6 migration:

http://www.internetsociety.org/deploy360/blog/2014/03/facebooks-extremely-impressive-internal-use-of-ipv6/

If you haven't started, you've got some work to do. Y2K/IPv6 consulting
gigs? Nice little earner!

-- 
Tim:


Re: Anternet

2014-04-05 Thread Tim Durack
Large Scale aNt will be good enough. Plus this has security advantages.

On Saturday, April 5, 2014, Jeff Kell jeff-k...@utc.edu wrote:

 On 4/5/2014 2:32 AM, Andrew D Kirch wrote:
  So, if there's more than 4 billion ants... what are they going to do?

 Who knows, but they'll definitely need IPv6 :)

 Jeff




-- 
Tim:


Pluggable Coherent DWDM 10Gig

2014-04-21 Thread Tim Durack
Anyone know if pluggable coherent DWDM 10Gig optics exist? (I'm finding no
such thing.)

How about narrow-band/filtered receive 10Gig optics? (Inline FBG filter
receive side might be doable?)

-- 
Tim:

p.s. Before you ask, DTAG Terastream has got me thinking...


Re: Pluggable Coherent DWDM 10Gig

2014-04-21 Thread Tim Durack
As a follow up, I did not miss a zero. TenGig. If you want to know why:
https://ripe67.ripe.net/presentations/131-ripe2-2.pdf

(I'll take 100Gig once I can get the optics for less than the cost of a
v.nice sports car...)


On Mon, Apr 21, 2014 at 2:42 PM, Tim Durack tdur...@gmail.com wrote:

 Anyone know if pluggable coherent DWDM 10Gig optics exist? (I'm finding no
 such thing.)

 How about narrow-band/filtered receive 10Gig optics? (Inline FBG filter
 receive side might be doable?)

 --
 Tim:

 p.s. Before you ask, DTAG Terastream has got me thinking...




-- 
Tim:


Re: Pluggable Coherent DWDM 10Gig

2014-04-21 Thread Tim Durack
On Mon, Apr 21, 2014 at 2:57 PM, Tim Durack tdur...@gmail.com wrote:

 On Mon, Apr 21, 2014 at 2:42 PM, Tim Durack tdur...@gmail.com wrote:

 Anyone know if pluggable coherent DWDM 10Gig optics exist? (I'm finding
 no such thing.)

 How about narrow-band/filtered receive 10Gig optics? (Inline FBG filter
 receive side might be doable?)

 --
 Tim:

 p.s. Before you ask, DTAG Terastream has got me thinking...


 As a follow up, I did not miss a zero. TenGig. If you want to know why:
 https://ripe67.ripe.net/presentations/131-ripe2-2.pdf

 (I'll take 100Gig once I can get the optics for less than the cost of a
 v.nice sports car...)


As another follow-up, coherent 'cos I want tuned receive as well as
transmit, so the WDM system can be truly colorless and directionless,
plus the nice high CD limit would be great.

Maybe I should ask for a pony too? :-)

-- 
Tim:


Re: Pluggable Coherent DWDM 10Gig

2014-04-25 Thread Tim Durack
On Mon, Apr 21, 2014 at 2:42 PM, Tim Durack tdur...@gmail.com wrote:

 Anyone know if pluggable coherent DWDM 10Gig optics exist? (I'm finding no
 such thing.)

 How about narrow-band/filtered receive 10Gig optics? (Inline FBG filter
 receive side might be doable?)

 --
 Tim:

 p.s. Before you ask, DTAG Terastream has got me thinking...


As a follow up, there are lots of people willing to sell various flavours
of DWDM optics, but as I suspected, there is no such thing as a
coherent/tuned/filtered receive 10GigE DWDM optic. All 10GigE optics are
wide-band receive. However, you can get inline optical filters, Santec
OFM-15 for example. Investigating...

-- 
Tim:


Re: Pluggable Coherent DWDM 10Gig

2014-04-25 Thread Tim Durack
I'm trying to build colorless directionless with passive power
couplers/splitters plus EDFA. DTAG are doing it with 100G. I think it's
doable with 10G. Will see. Interesting experiment either way, right?

(I'm betting DTAG would use integrated pluggables if they could. They don't
appear to be fans of traditional systems.)

On Friday, April 25, 2014, Phil Bedard bedard.p...@gmail.com wrote:

 What are you trying to do?  Why do you need the receive side to be tuned
 to a specific narrowband wavelength?  Coherent doesn't really make sense
 in 10G because 10G long-haul is still on/off keyed and doesn't care about
 phase. Coherent detectors are needed where phase of the signal is
 important like long-haul 100G where multiple analog photonic signals are
 mixed on the transmit side.  It also requires DSPs to process the received
 information. You aren't going to put a DSP inside a SFP+ cage.  With
 CFP2/CFP4/QSFP28 the optics vendors would like people to start building
 the DSP onto line cards, whether it be a router or transport shelf,
 because there just isn't the packaging room to make it happen.

 Terastream today doesn't use integrated router optics, they use Cisco's
 nV-Optical solution. The connection between the router and transport shelf
 is still gray optics, but the system is managed as a single logical
 entity, with a 1:1 correlation between router port and transponder.  You
 tune the wavelength on the router because of the 1:1 correlation.
 Terastream just uses passive DWDM muxes/demuxes, also part of the same
 Cisco transport solution, and Cisco VOAs/amps.

 -Phil



 On 4/25/14, 2:59 PM, Tim Durack tdur...@gmail.com javascript:;
 wrote:

 On Mon, Apr 21, 2014 at 2:42 PM, Tim Durack tdur...@gmail.comjavascript:;
 wrote:
 
  Anyone know if pluggable coherent DWDM 10Gig optics exist? (I'm finding
 no
  such thing.)
 
  How about narrow-band/filtered receive 10Gig optics? (Inline FBG filter
  receive side might be doable?)
 
  --
  Tim:
 
  p.s. Before you ask, DTAG Terastream has got me thinking...
 
 
 As a follow up, there are lots of people willing to sell various flavours
 of DWDM optics, but as I suspected, there is no such thing as a
 coherent/tuned/filtered receive 10GigE DWDM optic. All 10GigE optics are
 wide-band receive. However, you can get inline optical filters, Santec
 OFM-15 for example. Investigating...
 
 --
 Tim:




-- 
Tim:


Re: Pluggable Coherent DWDM 10Gig

2014-04-26 Thread Tim Durack
Will need amplification anyway for almost any realistic topology.

For those who don't understand what or why, please read the Terastream PDF
and watch the video several times, then tell me it's not a great idea :-)

On Saturday, April 26, 2014, Julien Goodwin na...@studio442.com.au wrote:

 On 26/04/14 16:02, Mikael Abrahamsson wrote:
  On Sat, 26 Apr 2014, Julien Goodwin wrote:
 
  But you'd never send it all the waves anyway, that's far too much loss
  across the band.
 
  Please elaborate.

 At 3dB loss per split you'd very quickly need additional amplification,
 at which point the ROADM is cheaper. A static split can do the 80 waves
 in much less than the ~20dB a power split would need, and

 
  ROADMs already solve this problem, and are available at the module level
  (how practically available and usable I've no idea, never needed to
 try).
 
  Compare the price of a ROADM and a 50%/50% light splitter. Which one do
  you think is the cheapest and also operationally most reliable?

 Not disagreeing, I'd go with dumb static optics, nearly all the
 reconfigurable optic selling points don't seem to translate into
 actual operational benefits.



-- 
Tim:


Re: MACsec SFP

2014-06-25 Thread Tim Durack
On Wed, Jun 25, 2014 at 8:40 AM, Saku Ytti s...@ytti.fi wrote:

 On (2014-06-25 05:09 -0700), Eric Flanery (eric) wrote:

  That said, I do think the separately tunable transmitters and
  receivers could be huge, especially if they came at only a reasonably
 small

 I don't think this technology exists. The receivers are always wideband and
 there is some filter in optical mux or in BX optic to avoid receiving
 reflections of your own TX.
 Not sure if tunable filter exists.



Tunable rx exists in pluggable format, but it is called 100G coherent :-)

I would find tunable rx useful for 1/10G (eliminate DCM, power-splitter
based WDM etc), but not sure there is enough market for the product to
exist. Closest I got was inline FBG fiber patch. There are manufacturers
for these.

Tim:


Re: MACsec SFP

2014-06-25 Thread Tim Durack
On Cisco equipment supporting MACsec, EAP and MKA are of course configured
through the normal CLI.

On Wednesday, June 25, 2014, Pieter Hulshoff phuls...@aimvalley.nl wrote:

 On 25-06-14 22:45, Christopher Morrow wrote:

 today you program the key (on switches that do macsec, not in an SFP
 that does it for you, cause those don't exist, yet) in your router
 config and as near as I have seen there isn't a key distribution
 protocol aside from that which you write/manage yourself and which is
 likely using ssh/snmp(ick)/telnet(ick).


 I'm not familiar with the MACsec key distribution available in current
 routers/switches. Are you saying Cisco doesn't support EAP and/or MKA for
 this purpose or just that the command protocol for configuring EAP/MKA is
 run via SSH/SNMP/telnet?

 Kind regards,

 Pieter Hulshoff



-- 
Tim:


Public DNS64

2014-08-15 Thread Tim Durack
Anyone know of a reliable public DNS64 service?

Would be cool if Google added a Public DNS64 service, then I could point
the NAT64 prefix at appropriately placed boxes in my network.

Why? Other people are better than me at running DNS resolvers :-)
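
(For anyone who hasn't looked at DNS64 internals: the synthesis step itself
is trivial. A sketch, assuming the well-known prefix 64:ff9b::/96 from RFC
6052 rather than a network-specific prefix:)

import ipaddress

# Well-known NAT64 prefix (RFC 6052); a deployment could use its own /96.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal):
    # What DNS64 does when a name has an A record but no AAAA record:
    # embed the 32-bit IPv4 address in the low bits of the /96 prefix.
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

print(synthesize_aaaa("192.0.2.33"))  # 64:ff9b::c000:221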

-- 
Tim:


Re: Public DNS64

2014-08-15 Thread Tim Durack
Yeah, sort of agree, except I'm allergic to running services that aren't
straight bit shoveling. NAT64 is pushing it, but at least that is just
announcing a prefix.


On Fri, Aug 15, 2014 at 2:33 PM, Rubens Kuhl rube...@gmail.com wrote:




 On Fri, Aug 15, 2014 at 3:29 PM, Tim Durack tdur...@gmail.com wrote:

 Anyone know of a reliable public DNS64 service?

 Would be cool if Google added a Public DNS64 service, then I could point
 the NAT64 prefix at appropriately placed boxes in my network.

 Why? Other people are better than me at running DNS resolvers :-)


 No one is better than you at running DNS resolvers with low latency from
 your network. Even if they can run DNS resolvers with magical capabilities,
 they will still suffer from transit time.


 Rubens





-- 
Tim:


Re: Low cost WDM gear

2015-02-07 Thread Tim Durack
You can do ~500km without inline amplifier sites using EDFA+Raman+ROPA, but
you are going to need some serious optical engineering to make that work.
The more standard way to do it is amplifier sites every 80-100km for EDFA.
If you are doing 10GigE you will need to allow for DCM also.
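
(Back-of-envelope for why the amp sites appear - a rough sketch; the loss
and span-budget numbers below are typical assumptions, not a design:)

FIBER_LOSS_DB_PER_KM = 0.22   # typical SMF at 1550 nm, splices ignored
MAX_SPAN_LOSS_DB = 22.0       # roughly what a standard EDFA span budget allows

def inline_amp_sites(route_km):
    # Ceiling-divide total loss into spans; amps sit between spans,
    # not at the terminals. Margin and DCM insertion loss make it worse.
    total_loss = route_km * FIBER_LOSS_DB_PER_KM
    spans = int(-(-total_loss // MAX_SPAN_LOSS_DB))
    return max(spans - 1, 0)

print(inline_amp_sites(300))  # ~185-mile route: a couple of inline sites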

On Sat, Feb 7, 2015 at 1:04 PM, Mike Hammett na...@ics-il.net wrote:

 One particular route I'm looking at is 185 miles, so of the options
 presented 300 km is closest. ;-)




 -
 Mike Hammett
 Intelligent Computing Solutions
 http://www.ics-il.com

 - Original Message -

 From: Christopher Morrow morrowc.li...@gmail.com
 To: Kenneth McRae kenneth.mc...@me.com
 Cc: NANOG nanog@nanog.org
 Sent: Saturday, February 7, 2015 12:02:11 PM
 Subject: Re: Low cost WDM gear

 would be good for mike to define 'long distances' here, is it:
 2km
 30km
 300km
 3000km

 Probably the 30-60k range is what you mean by 'long distances' but...
 clarity might help.

 On Sat, Feb 7, 2015 at 12:55 PM, Kenneth McRae kenneth.mc...@me.com
 wrote:
  Mike,
 
  I just replaced a bunch of FiberStore WDM passive muxes with OSI Hardware
  equipment. The FiberStore gear was a huge disappointment (excessive loss,
  poor technical support, refusal to issue refund without threatening legal
  action, etc.). I have had good results from the OSI equipment so far. I
  run passive muxes for CWDM (8 - 16 channels).
 
  On Feb 07, 2015, at 09:51 AM, Manuel Marín m...@transtelco.net wrote:
 
  Hi Mike
 
  I can recommend a couple of vendors that provide cost effective
 solutions.
  Ekinops  Packetlight.
 
  On Saturday, February 7, 2015, Mike Hammett na...@ics-il.net wrote:
 
  I know there are various Asian vendors for low cost (less than $500)
 muxes
  to throw 16 or however many colors onto a strand. However, they don't
 work
  so well when you don't control the optics used on both sides (therefore
  must use standard wavelengths), obviously only do a handful of channels
 and
  have a distance limitation.
  What solutions are out there that don't cost an arm and a leg?
  -
  Mike Hammett
  Intelligent Computing Solutions
  http://www.ics-il.com
 
 
  --
  TRANSTELCO| Manuel Marin | VP Engineering | US: *+1 915-217-2232* | MX:
 *+52
  656-257-1109*
 
  CONFIDENTIALITY NOTICE: This communication is intended only for the use
  of the individual or entity to which it is addressed and may contain
  information that is privileged, confidential, and exempt from disclosure
  under applicable law. If you are not the intended recipient of this
  information, you are notified that any use, dissemination, distribution,
 or
  copying of the communication is strictly prohibited.
 
  AVISO DE CONFIDENCIALIDAD: Esta comunicación es sólo para el uso de la
  persona o entidad a la que se dirige y puede contener información
  privilegiada, confidencial y exenta de divulgación bajo la legislación
  aplicable. Si no es el destinatario de esta información, se le notifica
 que
  cualquier uso, difusión, distribución o copia de la comunicación está
  estrictamente prohibido.




-- 
Tim:


RAD MiNID

2015-01-20 Thread Tim Durack
Anyone got experience with RAD MiNID? I need to do some L2 protocol
tunneling (L2PT), and this looks like it might scratch that itch.

-- 
Tim:


Re: draft-ietf-mpls-ldp-ipv6-16

2015-02-20 Thread Tim Durack
On Fri, Feb 20, 2015 at 6:39 AM, Saku Ytti s...@ytti.fi wrote:

 On (2015-02-19 11:06 -0500), Tim Durack wrote:

  What is the chance of getting working code this decade? I would quite
 like
  to play with this new fangled IPv6 widget...
 
  (Okay, I'd like to stop using IPv4 for infrastructure. LDP is the last
  piece for me.)

 Is there 4PE implementation to drive IPv4 edges, shouldn't be hard to
 accept
 IPv6 next-hop in BGP LU, but probably does not work out-of-the-box?
 Isn't Segment Routing implementation day1 IPV4+IPV6 in XR?

 --
   ++ytti


I would gladly take OSPFv2/OSPFv3/ISIS+SR over LDP, but I'm seeing that is
not all that is needed.

I also need some flavor of L2VPN (eVPN) and L3VPN (VPNv4/VPNv6) working
over IPv6.

IPv6 control plane this decade may yet be optimistic.

-- 
Tim:


draft-ietf-mpls-ldp-ipv6-16

2015-02-19 Thread Tim Durack
I notice draft-ietf-mpls-ldp-ipv6-16 was posted February 11, 2015.

What is the chance of getting working code this decade? I would quite like
to play with this new fangled IPv6 widget...

(Okay, I'd like to stop using IPv4 for infrastructure. LDP is the last
piece for me.)

-- 
Tim:


Re: Searching for a quote

2015-03-12 Thread Tim Durack
http://en.wikipedia.org/wiki/Jon_Postel

Postel's Law
Perhaps his most famous legacy is from RFC 760, which includes a Robustness
Principle which is often labeled Postel's Law: an implementation should be
conservative in its sending behavior, and liberal in its receiving
behavior (reworded in RFC 1122 as Be liberal in what you accept, and
conservative in what you send).

On Thu, Mar 12, 2015 at 8:20 PM, Jason Iannone jason.iann...@gmail.com
wrote:

 There was once a fairly common saying attributed to an early
 networking pioneer that went something like, be generous in what you
 accept, and send only the stuff that should be sent.  Does anyone
 know what I'm talking about or who said it?




-- 
Tim:


Re: [j-nsp] draft-ietf-mpls-ldp-ipv6-16

2015-02-20 Thread Tim Durack
On Fri, Feb 20, 2015 at 6:02 PM, Adam Vitkovsky adam.vitkov...@gamma.co.uk
wrote:

  Alright so would you mind sharing the business drivers that would make
 you migrate your current production infrastructure to this new unproven
 possibly buggy LDPv6 and 4PE/4VPE setup please?



 adam


Businesses bigger than me think there is a business driver for IPv6:

http://meetings.ripe.net/ripe-54/presentations/IPv6_management.pdf
http://www.internetsociety.org/deploy360/wp-content/uploads/2014/04/WorldIPv6Congress-IPv6_LH-v2.pdf

IPv6 management of equipment is relatively easy. Once you've started down
that path, you start looking at the protocol stuff, and wondering what to
do about that.

Maybe I should leave it alone until the business people figure it out for
me :-)

Tim:


Re: [j-nsp] draft-ietf-mpls-ldp-ipv6-16

2015-02-20 Thread Tim Durack
On Fri, Feb 20, 2015 at 11:33 AM, Adam Vitkovsky adam.vitkov...@gamma.co.uk
 wrote:

  Of Tim Durack
  Sent: 20 February 2015 14:00
  IPv6 control plane this decade may yet be optimistic.
 

 And most importantly it's not actually needed it's just a whim of network
 operators.

 adam


 --
 This email has been scanned for email related threats and delivered safely
 by Mimecast.
 For more information please visit http://www.mimecast.com
 --


I don't consider unique addressing of infrastructure a whim.

-- 
Tim:


Peering + Transit Circuits

2015-08-18 Thread Tim Durack
Question: What is the preferred practice for separating peering and transit
circuits?

1. Terminate peering and transit on separate routers.
2. Terminate peering and transit circuits in separate VRFs.
3. QoS/QPPB (
https://www.nanog.org/meetings/nanog42/presentations/DavidSmith-PeeringPolicyEnforcement.pdf
)
4. Don't worry about peers stealing transit.
5. What is peering?

Your comments are appreciated.

-- 
Tim:


Re: Peering + Transit Circuits

2015-08-18 Thread Tim Durack
On Tue, Aug 18, 2015 at 1:29 PM, Patrick W. Gilmore patr...@ianai.net
wrote:

 On Aug 18, 2015, at 1:24 PM, William Herrin b...@herrin.us wrote:
  On Tue, Aug 18, 2015 at 8:29 AM, Tim Durack tdur...@gmail.com wrote:

  Question: What is the preferred practice for separating peering and
 transit
  circuits?
 
  1. Terminate peering and transit on separate routers.
  2. Terminate peering and transit circuits in separate VRFs.
  3. QoS/QPPB (
 
 https://www.nanog.org/meetings/nanog42/presentations/DavidSmith-PeeringPolicyEnforcement.pdf
  )
  4. Don't worry about peers stealing transit.
  5. What is peering?
 
  Your comments are appreciated.
 
 
  If you have a small number of peers, a separate router carrying a
  partial table works really well.

 To expand on this, and answer Tim’s question one post up in the thread:

 Putting all peer routes on a dedicated router with a partial table avoids
 the “steal transit” question. The Peering router can only speak to peers
 and your own network. Anyone dumping traffic on it will get !N (unless they
 are going to a peer, which is a pretty minimal risk).

 It has lots of other useful features such as network management and
 monitoring. It lets you do maintenance much easier. Etc., etc.

 But mostly, it lets you avoid joining an IX and having people use you as a
 backup transit provider.


This has always been my understanding - thanks for confirming. I'm weighing
cost-benefit, and looking to see if there are any other smart ideas. As
usual, it looks like simplest is best.

-- 
Tim:

p.s. Perhaps I should be relieved no one tried to sell me an SDN peering
transit theft controller...


Fwd: [c-nsp] Peering + Transit Circuits

2015-08-18 Thread Tim Durack
-- Forwarded message --
From: Tim Durack tdur...@gmail.com
Date: Tue, Aug 18, 2015 at 9:53 AM
Subject: Re: [c-nsp] Peering + Transit Circuits
To: Rolf Hanßen n...@rhanssen.de
Cc: cisco-...@puck.nether.net cisco-...@puck.nether.net


On Tue, Aug 18, 2015 at 9:45 AM, Rolf Hanßen n...@rhanssen.de wrote:

 Hi,

 you forgot to do some interface-ACL-magic that drops peer-traffic that does
 not have a destination IP in my cool-networks-whitelist.


Yup, valid option. I am trying to avoid anything that involves maintaining
lists.

-- 
Tim:


Re: [c-nsp] Peering + Transit Circuits

2015-08-18 Thread Tim Durack
On Tue, Aug 18, 2015 at 8:47 AM, Gert Doering g...@greenie.muc.de wrote:

 Hi,

 On Tue, Aug 18, 2015 at 08:29:31AM -0400, Tim Durack wrote:
  4. Don't worry about peers stealing transit.
  5. What is peering?

 I'm afraid that the majority of answers will be 4./5., mixed with
 6. what? how can peers stell my transit?!

 We're somewhat into the we'll notice if there is surprisingly high
 inbound traffic on peering, and then we'll find the peer, and apply
 appropriate measures camp... (since we're a hosting shop, we have mostly
 outgoing traffic, so significant amounts of incomnig traffic stick
 out).

 But yeah, something more strict might be in order.


Thanks for the response. This is what I was guessing.

We currently do "2. Terminate peering and transit circuits in separate
VRFs," which works well when everything is a VRF but comes at the cost of
higher resource usage (RIB & FIB).

I was thinking a creative solution might be:

7. DSCP mark packets on peering ingress, police on transit egress.

Not sure if I really want to get into using DSCP bits for basic IP service
though.


 (It would be cool if Cisco would understand that hardware forwarding
 platforms need useful netflow with MAC-addresses in there...  ASR9k at
 least got working MAC-accounting, but more fine grained telemetry would
 certainly be appreciated.  Software IOS can do it, Sup720 cannot do it
 due to hardware constraints, Sup2T exports MAC addresses taken from random
 caches in the system but not the inbound packets, XR doesn't do it at all,
 hrmph)


 gert

 --
 USENET is *not* the non-clickable part of WWW!
//
 www.muc.de/~gert/
 Gert Doering - Munich, Germany
 g...@greenie.muc.de
 fax: +49-89-35655025
 g...@net.informatik.tu-muenchen.de




-- 
Tim:


Re: [c-nsp] Peering + Transit Circuits

2015-08-18 Thread Tim Durack
On Tue, Aug 18, 2015 at 8:47 AM, Gert Doering g...@greenie.muc.de wrote:

 Hi,

 (It would be cool if Cisco would understand that hardware forwarding
 platforms need useful netflow with MAC-addresses in there...  ASR9k at
 least got working MAC-accounting, but more fine grained telemetry would
 certainly be appreciated.  Software IOS can do it, Sup720 cannot do it
 due to hardware constraints, Sup2T exports MAC addresses taken from random
 caches in the system but not the inbound packets, XR doesn't do it at all,
 hrmph)


At the risk of introducing religion, I will mention sFlow...

-- 
Tim:


Re: [c-nsp] Peering + Transit Circuits

2015-08-18 Thread Tim Durack
On Tue, Aug 18, 2015 at 9:38 AM, Gert Doering g...@greenie.muc.de wrote:

 Hi,

 On Tue, Aug 18, 2015 at 09:32:53AM -0400, Tim Durack wrote:
   (It would be cool if Cisco would understand that hardware forwarding
   platforms need useful netflow with MAC-addresses in there...  ASR9k at
 [..]
  At the risk of introducing religion, I will mention sFlow...

 Yes... and this is helping exactly why...?  Given the overwhelming
 support for sFlow in (Cisco-) hardware routers used as peering edge?  :-)


I ask Cisco for sFlow support on a regular basis. Cisco typically respond
with some variation of NIH syndrome. Anyway, back to my question :-)

-- 
Tim:


Re: [c-nsp] Peering + Transit Circuits

2015-08-18 Thread Tim Durack
On Tue, Aug 18, 2015 at 11:25 AM, Scott Granados sc...@granados-llc.net
wrote:

 So in our case we terminate peering and transit on different routers.
 Peering routers have well flow enabled (the one that starts with a J that’s
 inline).  With NFSEN / NFDUMP we’re able to collect that flow data and look
 for anomalous flows or other issues. We pretty much detect and then deal
 with peering issues rather than prevent them with whitelists and so forth
 but then again we’ve been lucky and not experienced to many issues other
 than the occasional leakage of prefixes and such which maxprefix handles
 nicely.


Can I ask why you terminate peering and transit on different routers? (Not
suggesting that is bad, just trying to understand the reason.)

Tim:


Re: Public DNS64

2016-05-29 Thread Tim Durack
For the record:

Tim,

I'm not on the NANOG lists and I don't see how I can respond to this thread:

https://mailman.nanog.org/pipermail/nanog/2014-August/069267.html

but I figured I'd let you know that:

https://developers.google.com/speed/public-dns/docs/dns64

is now available for testing.  Perhaps it will be some use.

Regards,
-Erik

On Fri, Aug 15, 2014 at 2:29 PM, Tim Durack <tdur...@gmail.com> wrote:

> Anyone know of a reliable public DNS64 service?
>
> Would be cool if Google added a Public DNS64 service, then I could point
> the NAT64 prefix at appropriately placed boxes in my network.
>
> Why? Other people are better than me at running DNS resolvers :-)
>
> --
> Tim:>
>



-- 
Tim:>


Re: Coherent CWDM 40G QSFP

2016-10-18 Thread Tim Durack
Not aware of ACO/DCO in QSFP form factor. Inphi is doing 100G QSFP28 PAM4
DWDM for MS. Probably the best you will see for a while.
On Tue, Oct 18, 2016 at 4:50 PM Mike Hammett  wrote:

> Does anyone make a coherent CWDM 40G QSFP? I thought so, but the first
> couple places I checked, I struck out at. This would be for a passive
> mux\MROADM.
>
>
>
>
> -
> Mike Hammett
> Intelligent Computing Solutions
>
> Midwest Internet Exchange
>
> The Brothers WISP
>
>


Re: [c-nsp] SFP DOM SNMP Polling?

2016-11-22 Thread Tim Durack
A typical SFP spec sheet leads me to conclude that reading optic values
repeatedly is expected. For example:

https://www.finisar.com/sites/default/files/resources/AN_2030_DDMI_for_SFP_Rev_E2.pdf

(I selected Finisar as they have complete spec sheets publicly available.)

I question the vendor statement...

Tim:>


SFP DOM SNMP Polling?

2016-11-22 Thread Tim Durack
I have a vendor that does not support SFP DOM SNMP polling. They state this
is due to EEPROM read life cycle. Constant reads will damage the SFP.

We SNMP poll SFP DOM from Cisco equipment without issue.

Not heard this one before. Trying to see if there is some validity to the
statement. Thoughts?

Tim:>


AS15133 (EdgeCast)?

2018-01-12 Thread Tim Durack
Anybody here from AS15133 (EdgeCast) engineering?

We peer with you at DE-CIX NY. Seeing a packet loss issue which is
impacting O365 & MS CDN assets.

Trying to resolve but email to peering@ generated a Verizon ticket which
isn't inspiring confidence...

Thanks,

Tim:>


Re: AS15133 (EdgeCast)?

2018-01-12 Thread Tim Durack
Problem seems resolved. If somebody somewhere did something - thanks! :)

On Fri, Jan 12, 2018 at 2:43 PM Tim Durack <tdur...@gmail.com> wrote:

> Anybody here from AS15133 (EdgeCast) engineering?
>
> We peer with you at DE-CIX NY. Seeing a packet loss issue which is
> impacting O365 & MS CDN assets.
>
> Trying to resolve but email to peering@ generated a Verizon ticket which
> isn't inspiring confidence...
>
> Thanks,
>
> Tim:>
>


Re: Ingress filtering on transits, peers, and IX ports

2020-10-15 Thread Tim Durack
We deploy urpf strict on all customer end-host and broadband circuits. In
this scenario urpf = ingress acl I don't have to think about.

We deploy urpf loose on all customer multihomed DIA circuits. I don't think
this makes sense - an ingress packet acl would be more sane.

Any flavour of urpf on upstream transit or peering would be challenging.
Ingress packet acl dropping source = own+customer prefix might make sense
depending on your AS topology.

You might argue that ingress packet acl would be operationally simpler on
customer and upstream, as you could cover all scenarios.

On Thu, Oct 15, 2020 at 10:05 AM Saku Ytti  wrote:

> On Thu, 15 Oct 2020 at 15:14,  wrote:
>
>
> > Yes one should absolutely do that, but...
> > But considering to become a good netizen what is more work?
> > a) Testing and then enabling uRPF on every customer facing box or setting
> up precise ACLs on every customer facing port, and then maintaining all
> that?
> > b) Gathering  all your PAs (potentially PIs) (hint: show bgp nei x.x.x.x
> advertised routes) crafting an ACL and apply it on several peering/transit
> links?
> > One of them is couple of weeks work and one is an afternoon job.
>
> I am not fan of uRPF, expensive for what it does. But I don't view it
> as an alternative here, I view it as either adding an ACE on all
> egresses on egress direction or adding ACE on the ingress where
> customer is on ingress direction.
>
> To me these options seem equally complex but the latter one seems superior.
>
> --
>   ++ytti
>


-- 
Tim:>


Re: Ingress filtering on transits, peers, and IX ports

2020-10-15 Thread Tim Durack
On Thu, Oct 15, 2020 at 10:30 AM Saku Ytti  wrote:

> On Thu, 15 Oct 2020 at 17:22, Tim Durack  wrote:
>
>
> > We deploy urpf strict on all customer end-host and broadband circuits.
> In this scenario urpf = ingress acl I don't have to think about.
>
> But you have to think about what prefixes a customer has. If BGP you
> need to generate prefix-list, if static you need to generate a static
> route. As you already have to know and manage this information, what
> is the incremental cost to also emit an ACL?
>
> --
>   ++ytti
>

"You might argue that ingress packet acl would be operationally simpler on
customer and upstream, as you could cover all scenarios."

Although for a static customer urpf is hard to beat...
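
To put a number on the "incremental cost": if the per-customer prefixes are
already in a prefix-list generator, emitting the matching ingress ACL is a
few more lines of templating. A rough sketch with hypothetical customer data
(not our production generator):

import ipaddress

# Hypothetical source data - the same records that feed the BGP prefix-list.
CUSTOMERS = {
    "CUST-EXAMPLE": ["192.0.2.0/24", "198.51.100.0/25"],
}

def ingress_acl(name, prefixes):
    # IOS-style ingress ACL permitting only the customer's own sources.
    lines = ["ip access-list extended " + name + "_IN"]
    for p in prefixes:
        net = ipaddress.IPv4Network(p)
        lines.append(" permit ip %s %s any" % (net.network_address, net.hostmask))
    lines.append(" deny ip any any")
    return "\n".join(lines)

print(ingress_acl("CUST-EXAMPLE", CUSTOMERS["CUST-EXAMPLE"]))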

-- 
Tim:>


Re: Ingress filtering on transits, peers, and IX ports

2020-10-20 Thread Tim Durack
I took a slightly different approach for my mental exercise, expressed in
IOS pidgin:


object-group ip address AS65001
 192.0.2.0 255.255.255.0
end

object-group v6-network AS65001
 2001:DB8::/32
end

object-group ip address TwentyFiveGigE1/0/1
 192.0.2.0 255.255.255.254
end

object-group v6-network TwentyFiveGigE1/0/1
 FE80::/10
 2001:DB8::/127
end

ip access-list extended TwentyFiveGigE1/0/1_IPV4_IN
 permit ip addrgroup TwentyFiveGigE1/0/1 any
 deny   ip addrgroup AS65001 any
 permit ip any any
end

ipv6 access-list TwentyFiveGigE1/0/1_IPV6_IN
 permit ipv6 object-group TwentyFiveGigE1/0/1 any
 deny ipv6 object-group AS65001 any
 permit ipv6 any any
end

interface TwentyFiveGigE1/0/1
 ip access-group TwentyFiveGigE1/0/1_IPV4_IN in
 ipv6 traffic-filter TwentyFiveGigE1/0/1_IPV6_IN in
end


I believe this is the minimum necessary to protect your AS from your
netblock(s) ingress. Note: there may be technical and business reasons why
you need to permit your netblock(s) ingress to your AS.


I believe this concept could be used on any EBGP Inter-AS link, including
peering addressed out of your own netblock.


I remain unconvinced, but shrewd is the one who sees the calamity...


Tim:>

On Mon, Oct 19, 2020 at 8:40 PM Brian Knight via NANOG 
wrote:

> Thanks to the folks who responded to my messages on and off-list.  A
> couple of folks have asked me to summarize the responses that I
> received.
>
> * Static ACL is currently the best way to protect a multi-homed network.
>   Loose RPF may be used if bogon filtering is more important, but it does
> not provide anti-spoofing security.
>
> * Protect your infrastructure subnets with the ingress ACL [BCP 84 sec
> 3.2].  Loopbacks and point-to-point circuits can benefit from this.  In
> the draft ACL, for example, I permit ICMP and traceroute over UDP, and
> block all else.
>
> * Do an egress ACL also, to prevent clutter from reaching the rest of
> the 'Net.  Permit only your aggregate and customer prefixes going
> outbound.
>
> * As I worked through putting the ACLs together, I found that if one
> implements an egress ACL, then customer prefixes must be enumerated
> anyway.  Once those are in an object group, it's easy to add an entry to
> the ingress ACL permitting traffic destined to customer PI space and
> aggregate space.  Seems better than just permitting all traffic in.
>
> Our ACLs, both v4 and v6, now look like the following:
>
> Ingress
>
> * Deny to and from bogon networks, where bogon is either source or dest
> * Permit to and from WAN PtP subnets
> * For IPv6, also permit link-local IPs (fe80::/10)
> * Deny to and from multicast ranges 224.0.0.0/4 and ff00::/8
> * Permit ICMP / traceroute over UDP to infrastructure
> * Deny all other traffic to infrastructure
> * Permit from customer PI / PA space
> * Deny from originated aggregate space
> * Permit all traffic to customer PI / PA space
> * Permit all traffic to aggregate space
> * Deny any any
>
> Egress
>
> * Deny to and from bogon networks
> * Permit to and from WAN PtP subnets
> * For IPv6, also permit link-local IPs
> * Deny to and from multicast range
> * Permit all traffic from customer PI / PA space
> * Permit all traffic from aggregate space
> * Deny any any
>
> We have started implementing the ACLs by blocking the bogon traffic
> only.  The other deny rules are set up as permit rules for now with
> logging turned on.  I'll review matching traffic before I switch the
> rules to deny.
>
> Future work also includes automating the updates to the object groups
> via IRR.
>
> BTW, Team Cymru didn't have any guidance around IPv6 bogons, so I put
> together the below object group based on the IANA IPv6 allocation list:
>
> https://www.iana.org/assignments/ipv6-unicast-address-assignments/ipv6-unicast-address-assignments.xhtml.
>
>   Obviously this is only for space not yet allocated to RIRs.
>
> object-group network ipv6 IPV6-BOGON
>description Invalid IPV6 networks
>::/3
>4000::/3
>6000::/3
>8000::/3
>a000::/3
>c000::/3
>e000::/4
>f000::/5
>f800::/6
>fc00::/7
>fe00::/9
>fec0::/10
> exit
>
> Thanks,
>
> -Brian
>
>
>
> On 2020-10-14 17:43, Brian Knight wrote:
> > So I have put together what I think is a reasonable and complete ACL.
> > From my time in the enterprise world, I know that a good ingress ACL
> > filters out traffic sourcing from:
> >
> > * Bogon blocks, like 0.0.0.0/8, 127.0.0.0/8, RFC1918 space, etc
> > (well-documented in
> >
> https://team-cymru.com/community-services/bogon-reference/bogon-reference-http/
> )
> > * RIR-assigned blocks I am announcing to the rest of the world
> >
> > However, I recognized a SP-specific case where we could receive
> > legitimate traffic sourcing from our own IP blocks: customers running
> > multi-homed BGP where we have assigned PA space to them.  So I added
> > "permit" statements for traffic sourcing from these blocks.
> >
> > Also, we have direct peering links that are numbered 

Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Tim Durack
If y'all can deal with the BU, the Cat9k family is looking half-decent:
MPLS PE/P, BGP L3VPN, BGP EVPN (VXLAN dataplane not MPLS) etc.
UADP programmable pipeline ASIC, FIB ~200k, E-LLW, mandatory DNA license
now covers software support...

Of course you do have to deal with a BU that lives in a parallel universe
(SDA, LISP, NEAT etc) - but the hardware is the right price-perf, and
IOS-XE is tolerable.

No large FIB today, but Cisco appears to be headed towards "Silicon One"
for all of their platforms: RTC ASIC strapped over some HBM. The strategy
is interesting: sell it as a chip, sell it whitebox, sell it fully packaged.

YMMV

On Fri, Jun 19, 2020 at 7:40 AM Mark Tinka  wrote:

> I think it's less about just the forwarding chips and more about an entire
> solution that someone can go and buy without having to fiddle with it.
>
> You remember the saying, "Gone are the days when men were men and wrote
> their own drivers"? Well, running a network is a full-time job, without
> having to learn how to code for hardware and protocols.
>
> There are many start-ups that are working off of commodity chips and
> commodity face plates. Building software for those disparate hardware
> systems, and then developing the software so that it can be used in
> commercial deployments is non-trivial. That is the leverage Cisco, Juniper,
> Nokia... even Huawei, have, and they won't let us forget it.
>
> Then again, if one's vision is bold enough, they could play the long game,
> start now, patiently build, and then come at us in 8 or so years. Because
> the market, surely, can't continue at the rate we are currently going.
> Everything else around us is dropping in price and revenue, and yet
> traditional routing and switching equipment continues to stay the same, if
> not increase. That's broken!
>
> Mark.
>
> On 19/Jun/20 13:25, Robert Raszuk wrote:
>
> But talking about commodity isn't this mainly Broadcom ? And is there
> single chip there which does not support line rate IP ? Or is there any
> chip which supports MPLS and cost less then IP/MPLS one ?
>
> On Fri, Jun 19, 2020 at 1:22 PM Benny Lyne Amorsen via cisco-nsp 
>  wrote:
>
>
> -- Forwarded message --
> From: Benny Lyne Amorsen  
> To: cisco-...@puck.nether.net
> Cc:
> Bcc:
> Date: Fri, 19 Jun 2020 13:12:06 +0200
> Subject: Re: [c-nsp] Devil's Advocate - Segment Routing, Why?
> Saku Ytti   writes:
>
>
> This is simply not fundamentally true, it may be true due to market
> perversion. But give student homework to design label switching chip
> and IPv6 switching chip, and you'll use less silicon for the label
> switching chip. And of course you spend less overhead on the tunnel.
>
> What you say is obviously true.
>
> However, no one AFAIK makes an MPLS switch at prices comparable to basic
> layer 3 IPv6 switches. You can argue that it is a market failure as much
> as you want, but I can only buy what is on the market. According to the
> market, MPLS is strictly Service Provider, with the accompanying Service
> Provider markup (and then ridiculous discounts to make the prices seem
> reasonable). Enterprise and datacenter are not generally using MPLS, and
> I can only look on in envy at the prices of their equipment.
>
> There is room for a startup to rethink the service provider market by
> using commodity enterprise equipment. Right now that means dumping MPLS,
> since that is only available (if at all) at the most expensive license
> level. Meanwhile you can get get low-scale BGPv6 and line-speed GRE with
> commodity hardware without extra licenses.
>
> I am not saying that it will be easy to manage such a network, the
> tooling for MPLS is vastly superior. I am merely saying that with just a
> simple IPv6-to-the-edge network you can deliver similar services to an
> MPLS-to-the-edge network at lower cost, if you can figure out how to
> build the tooling.
>
> Per-packet overhead is hefty. Is that a problem today?
>
>
>
>
>
>
>

-- 
Tim:>


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Tim Durack
On Fri, Jun 19, 2020 at 10:34 AM Mark Tinka  wrote:

>
>
> On 19/Jun/20 16:09, Tim Durack wrote:
>
> >
> > It could be worse: Nexus ;-(
> >
> > There is another version of the future:
> >
> > 1. SP "Silicon One" IOS-XR
> > 2. Enterprise "Silicon One" IOS-XE
> >
> > Same hardware, different software, features, licensing model etc.
>
> All this forking weakens a vendor's position in some respects, because
> when BU's are presenting one company as 6,000 ones, it's difficult for
> buying consistency.
>
> Options are great, but when options have options, it starts to get ugly,
> quick.
>
> Ah well...
>
>
> >
> > Silicon One looks like an interesting strategy: single ASIC for fixed,
> > modular, fabric. Replace multiple internal and external ASIC family,
> > compete with merchant, whitebox, MSDC etc.
>
> That is the hope. We've been to the cinema for this one before, though.
> Quite a few times. So I'm not holding my breath.
>
>
> >
> > The Cisco 8000/8200 is not branded as NCS, which is BCM.
>
> Not all of it - remember the big pink elephant in the room, the NCS
> 6000? That is/was nPower. Again, sending customers in all sorts of
> directions with that box, where now ASR9000 and the new 8000 seem to be
> the go-to box. Someone can't make up their mind over there.
>
>
> > I asked the NCS5/55k guys why they didn't use UADP. No good answer,
> > although I suspect some big customer(s) were demanding BCM for their
> > own programming needs. Maybe there were some memory bandwidth issues
> > with UADP, which is what Q100 HBM is the answer for.
>
> When you're building boxes for one or two customers, things like this
> tend to happen.
>
> But like I've been saying for some time, the big brands competing with
> the small brands over merchant silicon doesn't make sense. If you want
> merchant silicon to reduce cost, you're better off playing with the new
> brands that will charge less and be more flexible. While I do like IOS
> XR and Junos, paying a premium for them for a chip that will struggle
> the same way across all vendor implementations just doesn't track.
>
> Mark.
>
>
Yes, for sure NCS6K was a completely different beast, much as NCS1K, NCS2K.
Not sure why the NCS naming was adopted vs. ASR, and then dropped for
8000/8200. Probably lots of battles within the Cisco conglomerate.

Not defending, just observing. Either way, networks got to get built,
debugged, maintained, debugged, upgraded, debugged. All while improving
performance, managing CAPEX, reducing OPEX.

-- 
Tim:>


Re: [c-nsp] Devil's Advocate - Segment Routing, Why?

2020-06-19 Thread Tim Durack
On Fri, Jun 19, 2020 at 9:05 AM Mark Tinka  wrote:

>
>
> On 19/Jun/20 14:50, Tim Durack wrote:
>
> > If y'all can deal with the BU, the Cat9k family is looking
> > half-decent: MPLS PE/P, BGP L3VPN, BGP EVPN (VXLAN dataplane not MPLS)
> > etc.
> > UADP programmable pipeline ASIC, FIB ~200k, E-LLW, mandatory DNA
> > license now covers software support...
> >
> > Of course you do have to deal with a BU that lives in a parallel
> > universe (SDA, LISP, NEAT etc) - but the hardware is the right
> > price-perf, and IOS-XE is tolerable.
> >
> > No large FIB today, but Cisco appears to be headed towards "Silicon
> > One" for all of their platforms: RTC ASIC strapped over some HBM. The
> > strategy is interesting: sell it as a chip, sell it whitebox, sell it
> > fully packaged.
> >
> > YMMV
>
> I'd like to hear what Gert thinks, though. I'm sure he has a special
> place for the word "Catalyst" :-).
>
> Oddly, if Silicon One is Cisco's future, that means IOS XE may be headed
> for the guillotine, in which case investing any further into an IOS XE
> platform could be dicey at best, egg-face at worst.
>
> I could be wrong...
>
> Mark.
>
>
It could be worse: Nexus ;-(

There is another version of the future:

1. SP "Silicon One" IOS-XR
2. Enterprise "Silicon One" IOS-XE

Same hardware, different software, features, licensing model etc.

Silicon One looks like an interesting strategy: single ASIC for fixed,
modular, fabric. Replace multiple internal and external ASIC family,
compete with merchant, whitebox, MSDC etc.

The Cisco 8000/8200 is not branded as NCS, which is BCM. I asked the
NCS5/55k guys why they didn't use UADP. No good answer, although I suspect
some big customer(s) were demanding BCM for their own programming needs.
Maybe there were some memory bandwidth issues with UADP, which is what Q100
HBM is the answer for.

-- 
Tim:>


Re: 60 ms cross-continent

2020-06-20 Thread Tim Durack
Speed of light in glass ~200,000 km/s (~200 km per ms)

100 km of fiber ~= 1 ms rtt

Coast-to-coast ~6000 km ~60 ms rtt

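Back-of-envelope, in Python for the avoidance of hand-waving (assumes ~2/3 c
in fiber and zero queueing/serialization/OEO delay):

C_FIBER_KM_PER_MS = 200.0  # approx. speed of light in glass, km per millisecond

def fiber_rtt_ms(path_km):
    # Round-trip propagation delay over a fiber path of path_km.
    return 2 * path_km / C_FIBER_KM_PER_MS

print(fiber_rtt_ms(4412))  # ~44 ms for 2742 miles of glass, no detours
print(fiber_rtt_ms(6000))  # ~60 ms for a more realistic coast-to-coast path
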
Tim:>

On Sat, Jun 20, 2020 at 12:27 PM William Herrin  wrote:

> Howdy,
>
> Why is latency between the east and west coasts so bad? Speed of light
> accounts for about 15ms each direction for a 30ms round trip. Where
> does the other 30ms come from and why haven't we gotten rid of it?
>
> c = 186,282 miles/second
> 2742 miles from Seattle to Washington DC mainly driving I-90
>
> 2742/186282 ~= 0.015 seconds
>
> Thanks,
> Bill Herrin
>
> --
> William Herrin
> b...@herrin.us
> https://bill.herrin.us/
>


-- 
Tim:>


Re: 60 ms cross-continent

2020-06-20 Thread Tim Durack
And of course in your more realistic example:

2742 miles = 4412 km ~ 44 ms optical rtt with no OEO in the path

On Sat, Jun 20, 2020 at 12:36 PM Tim Durack  wrote:

> Speed of light in glass ~200,000 km/s (~200 km per ms)
>
> 100 km rtt = 1ms
>
> Coast-to-coast ~6000 km ~60ms
>
> Tim:>
>
> On Sat, Jun 20, 2020 at 12:27 PM William Herrin  wrote:
>
>> Howdy,
>>
>> Why is latency between the east and west coasts so bad? Speed of light
>> accounts for about 15ms each direction for a 30ms round trip. Where
>> does the other 30ms come from and why haven't we gotten rid of it?
>>
>> c = 186,282 miles/second
>> 2742 miles from Seattle to Washington DC mainly driving I-90
>>
>> 2742/186282 ~= 0.015 seconds
>>
>> Thanks,
>> Bill Herrin
>>
>> --
>> William Herrin
>> b...@herrin.us
>> https://bill.herrin.us/
>>
>
>
> --
> Tim:>
>


-- 
Tim:>


Re: LDPv6 Census Check

2020-06-10 Thread Tim Durack
Ah yes, I would say LDPv6 and/or SR/MPLS IPv6. SRv6 reads like a science
project.

Either way, I would like to achieve a full IPv6 control plane.


On Wed, Jun 10, 2020 at 2:46 PM Saku Ytti  wrote:

> I'm pretty sure that one or more of Mark, Gert or Tim are thinking
> SR/MPLS IPv6 when they say SRv6?
>
> No one in their right minds thinks SRv6 is a good idea, terrible snake
> oil and waste of NRE. SR/MPLS IPv6 of course is terrific.
>
> LDPv6 and SRv6 seem like an odd couple, LDPv6 SR/MPLS IPv6 seem far
> more reasonable couple to choose from. I have my favorite.
>
>
> On Wed, 10 Jun 2020 at 21:32, Tim Durack  wrote:
> >
> > I would take either LDPv6 or SRv6 - but also need L3VPN (and now EVPN)
> re-wired to use IPv6 NH.
> >
> > I have requested LDPv6 and SRv6 many times from Cisco to migrate the
> routing control plane from IPv4 to IPv6
> >
> > I have lots of IPv6 address space. I don't have a lot of IPv4 address
> space. RFC1918 is not as big as it seems. Apparently this is hard to
> grasp...
> >
> > (This is primarily IOS-XE - can't afford the IOS-XR supercars)
> >
> > On Wed, Jun 10, 2020 at 1:20 PM Mark Tinka  wrote:
> >>
> >> Hi all.
> >>
> >> Just want to sample the room and find out if anyone here - especially
> those running an LDP-based BGPv4-free core (or something close to it) -
> would be interested in LDPv6, in order to achieve the same for BGPv6?
> >>
> >> A discussion I've been having with Cisco on the matter is that they do
> not "see any demand" for LDPv6, and thus, won't develop it (on IOS XE).
> Meanwhile, it is actively developed, supported and maintained on IOS XR
> since 5.3.0, with new features being added to it as currently as 7.1.1.
> >>
> >> Needless to say, a bunch of other vendors have been supporting it for a
> while now - Juniper, Nokia/ALU, Huawei, even HP.
> >>
> >> IOS XR supporting LDPv6 notwithstanding, Cisco's argument is that "the
> world" is heavily focused on deploying SRv6 (Segment Routing). While I know
> of one or two questionable deployments, I'm not entirely sure much of the
> world is clamouring to deploy SR, based on all the polls we've done at
> various NOG meetings and within the general list-based operator community
> >>
> >> So I just wanted to hear from this operator community on whether you
> would be interested in having LDPv6 support to go alongside your LDPv4
> deployments, especially if you run native dual-stack backbones. Or if your
> focus is totally on SRv6. Or if you don't care either way :-). Thanks.
> >>
> >> Mark.
> >
> >
> >
> > --
> > Tim:>
>
>
>
> --
>   ++ytti
>


-- 
Tim:>


Re: LDPv6 Census Check

2020-06-10 Thread Tim Durack
I would take either LDPv6 or SRv6 - but also need L3VPN (and now EVPN)
re-wired to use IPv6 NH.

I have requested LDPv6 and SRv6 many times from Cisco to migrate the
routing control plane from IPv4 to IPv6.

I have lots of IPv6 address space. I don't have a lot of IPv4
address space. RFC1918 is not as big as it seems. Apparently this is hard
to grasp...

(This is primarily IOS-XE - can't afford the IOS-XR supercars)

On Wed, Jun 10, 2020 at 1:20 PM Mark Tinka  wrote:

> Hi all.
>
> Just want to sample the room and find out if anyone here - especially
> those running an LDP-based BGPv4-free core (or something close to it) -
> would be interested in LDPv6, in order to achieve the same for BGPv6?
>
> A discussion I've been having with Cisco on the matter is that they do not
> "see any demand" for LDPv6, and thus, won't develop it (on IOS XE).
> Meanwhile, it is actively developed, supported and maintained on IOS XR
> since 5.3.0, with new features being added to it as currently as 7.1.1.
>
> Needless to say, a bunch of other vendors have been supporting it for a
> while now - Juniper, Nokia/ALU, Huawei, even HP.
>
> IOS XR supporting LDPv6 notwithstanding, Cisco's argument is that "the
> world" is heavily focused on deploying SRv6 (Segment Routing). While I know
> of one or two questionable deployments, I'm not entirely sure much of the
> world is clamouring to deploy SR, based on all the polls we've done at
> various NOG meetings and within the general list-based operator community
>
> So I just wanted to hear from this operator community on whether you would
> be interested in having LDPv6 support to go alongside your LDPv4
> deployments, especially if you run native dual-stack backbones. Or if your
> focus is totally on SRv6. Or if you don't care either way :-). Thanks.
>
> Mark.
>


-- 
Tim:>