Re: Impacts of Encryption Everywhere (any solution?)
In a message written on Mon, May 28, 2018 at 09:23:09AM -0500, Mike Hammett wrote:
> However, this could be wildly improved with caching ala squid or something
> similar. The problem is that encrypted content is difficult to impossible for
> your average Joe to cache. The rewards for implementing caching are greatly
> mitigated and people like this must suffer a worse Internet experience
> because of some ideological high horse in a far-off land.
>
> Some things certainly do need to be encrypted, but encrypting everything
> means people with limited Internet access get worse performance OR mechanisms
> have to be put in place to break ALL encryption, thus compromising security
> and privacy when it's really needed.

I'm going to take this question head on, as opposed to the many tangents in this thread.

The Internet lived in the world you described, and a lot of people learned a lot of things along the way. Perhaps the most important lessons:

- Users cannot be trusted to check for a "secure" indicator before sending sensitive information.
- Users cannot tell the difference between two "secure" sites, one of which is a phishing site that just happens to have a certificate.
- There is no algorithmic way to determine if mixed-mode content is "safe".
- Web site operators seem incapable of maintaining whitelists of safe mixed-mode content.
- Mixed-mode content is not safe due to browser bugs.
- Once users have been trained that it's ok to send content over some insecure channels, it's nearly impossible to untrain them later.

Basically, while you presented the "pro" side of unencrypted content (being able to cache it), you didn't present any of the negative side. I have to wonder: if the villagers were given a choice between faster Internet where 5% of them had their bank accounts cleaned out and 5% had their identities stolen, and slower, secure Internet, which would they choose?

Want a technological solution? It exists! Signed content.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
signature.asc Description: PGP signature
I've always been baffled why there isn't a way to serve up HTTP signed (but not encrypted) content. I'd imagine it would work like this:

1) The initial connection has to be HTTPS, to create a fully encrypted channel.

2) Additional assets could then be downloaded as HTTPS, or as HTTP + signature. The signature must be from the same certificate as the HTTPS data.

The HTTP+signature data could then be cached just fine, and stored in the clear. The web site could determine what to serve up that way to maintain security. All POST commands would have to be HTTPS (data from client to server), and of course sensitive information would be returned HTTPS only.

Why doesn't that exist?

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
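A toy sketch of what that cache flow could look like. To stay self-contained it uses an HMAC with a made-up shared key as a stand-in for a real public-key signature tied to the site's certificate; the `/logo.png` path and `SITE_KEY` are purely illustrative.

```python
import hashlib
import hmac

SITE_KEY = b"stand-in for the site's certificate key"  # assumption: toy key

def sign(content: bytes) -> str:
    """Origin signs the asset; a real deployment would use the cert's key pair."""
    return hmac.new(SITE_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Client verifies before trusting any cached copy."""
    return hmac.compare_digest(sign(content), signature)

# A shared cache (squid-style) can store the cleartext asset plus signature...
cache = {"/logo.png": (b"PNG bytes", sign(b"PNG bytes"))}

# ...and the client still detects tampering by the cache or anyone on-path:
body, sig = cache["/logo.png"]
assert verify(body, sig)
assert not verify(b"PNG bytes, modified in transit", sig)
```

The point of the sketch is the split of properties: the intermediary gets cacheable cleartext, while integrity (though not confidentiality) is preserved end to end.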
Re: Companies using public IP space owned by others for internal routing
In a message written on Mon, Dec 18, 2017 at 08:58:37AM -0500, Jason Iannone wrote: > My previous employer used 198.18/15 for CE links on IPVPN services. This one is mostly legit: https://tools.ietf.org/html/rfc5735 -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: PGP signature
Re: Novice sysadmins
In a message written on Wed, Dec 06, 2017 at 10:51:32AM -0800, Stephen Satchell wrote:
> What professional engineers you mentioned do can kill people. I have
> yet to hear of anyone dying from a sysadmin or netadmin screwing up.
> (Other than dropping something heavy onto someone, using a fork lift
> incompetently, or building an unsafe raised floor.)

Some of the folks on this list run networks that carry 911 phone calls. A call not going through may well result in fatalities.

I'm personally torn: I think the "Professional Engineer" thing is 75% racket and 25% good, but I also think the 'net continues to miss out on that 25% good and could seriously use some of it.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Novice sysadmins (was: Suggestions for a more privacy conscious email provider)
In a message written on Tue, Dec 05, 2017 at 06:49:43AM -0800, Stephen Satchell wrote:
> The NSF in particular ran the 'Net like bouncers do in a strip club:
> you break the rules, you go. No argument.

I'm not sure I've ever seen a more inaccurate description of the NSF. What in the world are you talking about?

> The original trust model for the Internet was based on this unrelenting
> oversight. You didn't expect Bad Things(tm) because the consequences of
> doing them was so severe: banishment and exile. Also, the technical
> ability required to do Bad Things(tm) wasn't easily won. Accessing the
> 'Net was a PRIVILEGE, not a right. Abuse at your own peril.

Oh wait, you took the BS to a new level. There was no banishment and exile. This was before we knew of buffer overflows, spoofing, and so on. I remember the weekly sendmail buffer overrun bugs, the finger back bombs, the rlogin spoofing attacks. Turns out bored college students were very good at creating mischief. There was no banishment. There were plenty of bad things.

> Ok, I'll shut up now.

Good plan.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Broadcast television in an IP world
In a message written on Sat, Nov 18, 2017 at 01:48:08AM +0100, Baldur Norddahl wrote:
> Does multicast have any future? Netflix, YouTube, et al does not use it.
> People want instant replay and a catalogue to select from. Except for sport
> events, live TV has no advantage so why even try to optimize for it?

Yes, but not the way you're asking. Multicast to end-user workstations and between ISPs is probably dead and will never return. Multicast used in private networks, including to distribute programming to set-top boxes, is alive and well, often hidden from view but in use by millions.

It's not just live TV in the sense of sports. Many businesses leave their favorite news channel on 24x7x365, people still tune into topical shows (the evening news, the late show) on schedules, etc. And some of them also do things like push software and guide data using multicast.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: What's the point of prepend communities?
In a message written on Mon, Oct 30, 2017 at 07:56:43PM +0100, Michael Hallgren wrote: > But keep in mind that 'prepend communities' are fragile: I decide by local > preference whereto I send my traffic. Absolutely, but they are still very useful in many situations. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: PGP signature
Re: What's the point of prepend communities?
In a message written on Thu, Oct 26, 2017 at 01:54:17PM -0600, Clinton Work wrote:
> I believe that Jason is asking about an ISP BGP community to prepend
> their AS when the BGP routes are received from the customer (not when

Yeah, I typoed my example. It should have been:

There are paths:

1 2 3 5
1 2 4 5

If you prepend your ASN, you get:

1 1 2 3 5
1 1 2 4 5

No difference. If you send the ISP a prepend community for 3, you get:

1 2 2 3 5
1 2 4 5

And you just forced all traffic to the second, shorter path, and only for destinations dual-homed to 3 & 4, without affecting other traffic.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
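The difference between the two approaches can be sketched with a toy best-path selector that, like BGP, prefers the shortest AS_PATH (every other tie-breaker in the real decision process is ignored here):

```python
def best_paths(paths):
    """Return the candidate paths tied for shortest AS_PATH (toy tie-break)."""
    shortest = min(len(p) for p in paths)
    return [p for p in paths if len(p) == shortest]

# Self-prepend: both paths grow equally, still tied, traffic still splits.
prepended = best_paths([[1, 1, 2, 3, 5], [1, 1, 2, 4, 5]])
assert len(prepended) == 2

# Prepend community toward peer 3 only: the path via 4 now wins alone.
community = best_paths([[1, 2, 2, 3, 5], [1, 2, 4, 5]])
assert community == [[1, 2, 4, 5]]
```

Prepending your own ASN lengthens every copy of the route equally, while the prepend community lets the ISP lengthen only the copy sent toward one peer, which is what actually shifts traffic.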
Re: What's the point of prepend communities?
In a message written on Thu, Oct 26, 2017 at 02:47:44PM -0400, Jason Lixfeld wrote:
> I understand how prepends fit in the context of best path selection, but my
> question was more the difference between a customer signalling the ISP to
> prepend their AS using a BGP community stamped to a prefix vs. the customer
> prepending their own AS instead.

Imagine: You are 1. ISP is 2. ISP's peers are 3 & 4. Your B2B partner is 5.

There are paths:

1 2 3 5
1 2 4 5

If you prepend your ASN, you get:

1 1 2 3 5
1 1 2 4 5

No difference. If you send the ISP a prepend community for 3, you get:

1 2 3 3 5
1 2 4 5

And you just forced all traffic to the second, shorter path.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Gonna be a long day for anybody with CPE that does WPA2..
In a message written on Mon, Oct 16, 2017 at 03:38:19AM -0400, valdis.kletni...@vt.edu wrote:
> And it looks like we're all going to be reflashing a lot of devices.

Based on my reading this morning, many (but not all) of the attacks are against _clients_, with no way to mitigate by simply upgrading APs. Sure, Windows, Mac, Linux...but also Android and iOS...and that "smart" TV, the streaming stick plugged into it, the nanny cam, etc, etc, etc. :(

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: 4 or smaller digit ASNs
In a message written on Thu, Oct 12, 2017 at 05:01:13AM +, James Breeden wrote:
> I have a client interested in picking up a new AS number but they really want
> it to be 3 or 4 digits in length.

As others have said, that's difficult. What about going the other way? Ask for 2^32-1. "We have the biggest ASN!"

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Temp at Level 3 data centers
In a message written on Wed, Oct 11, 2017 at 12:54:26PM -0400, Zachary Winnerman wrote:
> I recall some evidence that 80+F temps can reduce hard drive lifetime,
> though it might be outdated as it was from before SSDs were around. I

This is very much a "your infrastructure may vary" situation. The servers we're currently buying, when spec'd with SSD only and the correct network card (generally meaning RJ45 only, but there are exceptions), are warrantied for 105 degree inlet operation. While we do not do "high temperature operations", we have seen operations where folks run them at 90-100 degree input chasing efficiency.

Famously, Intel ran computers outside in a tent just to prove it works fine: https://www.computerworld.com/article/2533138/data-center/running-servers-in-a-tent-outside--it-works.html

It should be easy to purchase equipment that can tolerate 80-90 degree input without damage. But that's not the question here. The question is whether the temp is within the range specified in the contract. If it is, deal with it, and if it is not, hold your vendor to delivering what they promised.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Why don't large carriers use alternate communication routes?
In a message written on Tue, Oct 10, 2017 at 07:19:15PM -0400, Sean Donelan wrote: > Are the penalties for subscribe outages so minimal that it makes business > sense not to use backup alternate routes? There are penalties for subscriber outages? Do tell! Where? -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: PGP signature
Re: New TRANSLANT cable - US/VA to ES
In a message written on Wed, Sep 27, 2017 at 11:47:40PM +, Jay R. Ashworth wrote: > Microsoft, Facebook, Telxius. > > 160TB, presumably each way, but no technical detail in this piece: TB or Tbps? Either way, they are old hat. Can we get that expressed in Likes/second, or perhaps Windows Updates/second? -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: PGP signature
Re: Max Prefix Out, was Re: Verizon 701 Route leak?
In a message written on Thu, Aug 31, 2017 at 12:50:58PM +0200, J??rg Kost wrote:
> What about adding an option to the BGP session that A & B do agree on a
> fixed number of prefixes in both directions, so Bs prefix-in could be As
> prefix-out automatically?

As others have pointed out, that's harder to do, but there's a different reason it may not be desirable. If a peer sets a limit that tears down the session with no automatic reset, forcing a call to their NOC to get a human to reset it, then it may be advantageous to set your side to tear down at N-1 prefixes. That way you can ensure restoration at the speed of your NOC, and not at the speed of your peer's.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
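The N-1 logic can be sketched as follows. The limit values and the tie-break order (our limit is checked first, so we win the race) are illustrative assumptions, not any vendor's actual behavior:

```python
def teardown_actor(announced, my_out_limit, peer_in_limit):
    """Who tears the session down as our announced prefix count grows?"""
    if announced > my_out_limit:
        return "us"      # our limit trips first; our NOC controls restoration
    if announced > peer_in_limit:
        return "peer"    # peer's limit trips; their NOC must manually reset it
    return "nobody"

PEER_LIMIT = 1000              # peer tears down here, manual reset only
MY_LIMIT = PEER_LIMIT - 1      # so trip our own limit one prefix earlier

assert teardown_actor(999, MY_LIMIT, PEER_LIMIT) == "nobody"
assert teardown_actor(1000, MY_LIMIT, PEER_LIMIT) == "us"
```

Because our limit is one below the peer's, any runaway announcement is stopped on our side, and the session comes back when our NOC clears it rather than waiting on the peer's.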
Re: DevOps workflow for networking
In a message written on Fri, Aug 11, 2017 at 08:51:25AM -0700, Hugo Slabbert wrote:
> Possibly a minor nit, but if the devices "don't directly support
> automation", how is the "D" part of "CI/CD" accomplished there?
> `integration -ne deployment`. Do you mean something like "there is no API
> or e.g. netconf interface, but they can generate config off-box, scp it,
> and `copy start run` to load"?

More or less. I've worked at places that do this sort of thing.

1) Download config from box.
2) Run script to determine changes necessary to the config.
3) Load changes.
4) Download config again.
5) Re-run the script to determine changes necessary, and verify there are none.

For a lot of the devices with a Cisco IOS-like interface it's not even hard. Generate a code snippet:

config terminal
interface e0
description bar
end
write mem

Then tftp the config to a server, and have the script see that e0 has description bar.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
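Steps 2 and 5 of that loop are just a text diff between the downloaded config and the desired one. A minimal sketch (the interface name and descriptions are made-up examples):

```python
import difflib

def config_delta(running: str, desired: str) -> list:
    """Unified diff between the device's running config and the target config."""
    return list(difflib.unified_diff(
        running.splitlines(), desired.splitlines(), lineterm=""))

running = "interface e0\n description old\n"
desired = "interface e0\n description bar\n"

# Step 2: determine the changes necessary.
assert any("description bar" in line for line in config_delta(running, desired))

# Step 5: after pushing and re-downloading, verify nothing is left to change.
assert config_delta(desired, desired) == []
```

An empty diff on the second pass is the "verify there are none" check; a non-empty one means the push did not fully apply and the job should fail loudly.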
Re: Microsoft O365 labels nanog potential fraud?
In a message written on Wed, Mar 29, 2017 at 08:58:38AM -0600, Grant Taylor via NANOG wrote:
> I also strongly recommend that mailing lists be viewed as an entity unto
> themselves. I.e. they receive the email, process it, and generate a new
> email /from/ /their/ /own/ /address/ with very similar content as the
> message they received.
>
> I strongly encourage mailing list admins to enable Variable Envelope
> Return Path to help identify which subscribed recipient causes each
> individual bounce, even if the problem is from downstream forwards.
>
> The problem with this is that it takes more processing power and
> bandwidth. Most people simply want an old school expansion that
> re-sends the same, unmodified, message to multiple recipients. - That
> methodology's heyday has come and mostly gone.

Actually, my problem is not so much processing power and bandwidth, but that every time I've encountered this problem I found a morass of painfully broken, horribly documented, super-complex software.

With sendmail/postfix you can edit an alias file and say:

bob: joe, tim, alex

And boom, done. If I could enable some feature/module/whatever in either one with a line or two of config to make that do Variable Envelope Return Path, I would, but every solution I know of requires setting up a complex milter or running some external daemon, which often depends on 3 different interpreted languages being installed, and so on down a dependency hell.

While I haven't looked at real mailing list software recently (e.g. mailman), when I last did they didn't support this either, and it took a pile of 3rd-party hacks to make it work. Why oh why in 2017 can this not be a checkbox, a line of config, or so on. For that matter, setting up DKIM is horrendously complicated for no good reason...

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
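The mechanism itself really is checkbox-simple. VERP just encodes each recipient into the envelope sender, so a bounce addressed back to that sender identifies exactly which subscriber failed. A sketch using one common convention (`+` as the separator, `=` replacing the recipient's `@`; the addresses are made up):

```python
def verp_encode(list_addr: str, recipient: str) -> str:
    """Encode the recipient into the envelope sender for this one copy."""
    local, domain = list_addr.split("@")
    return f"{local}+{recipient.replace('@', '=')}@{domain}"

def verp_decode(bounce_addr: str) -> str:
    """Recover the failing recipient from a bounced envelope sender."""
    local, _domain = bounce_addr.split("@")
    encoded = local.split("+", 1)[1]
    return encoded.replace("=", "@")

addr = verp_encode("list@lists.example.org", "bob@example.com")
assert addr == "list+bob=example.com@lists.example.org"
assert verp_decode(addr) == "bob@example.com"
```

The cost the quoted post mentions is that each subscriber now gets a distinct SMTP envelope instead of one message fanned out to many recipients, which is exactly the "more processing power and bandwidth" trade-off.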
Re: WEBINAR TUESDAY: Can We Make IPv4 Great Again?
In a message written on Mon, Mar 06, 2017 at 12:16:32PM -0500, valdis.kletni...@vt.edu wrote:
> Oh, and you're going to need support buy-in from *at least* Microsoft, Apple,
> Linux, Cisco, Juniper, and a significant chunk of major makers of CPE gear.

Valdis is just spouting a bunch of fake requirements. It's all lies, folks. I mean, the thing is called "EZIP"; the EZ is right in the name. We're going to drain that IETF swamp of all their so-called experts and make sure simple proposals like this, that regular people can understand, get a fair shot. It's going to change the Internet, bigly.

And what about the e-mails? I mean, come on, what are those SMTP people hiding?

[For the humor impaired: it's a joke, folks.]

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Cellular enabled console server
In a message written on Fri, Feb 24, 2017 at 10:08:52AM -0600, Ben Bartsch wrote: > NANOG - Are any of you running a console server to access your network > equipment via a serial connection at a remote site? If so, what are you > using and how much do you like it? I have a project where I need to stand > up over 100 remote sites and would like a backdoor to the console just to > be able to see what's going on with the equipment to hopefully avoid a > truck roll for something simple like a hung device. I need 4 console ports > and 1 RJ45 ethernet jack. My quick Google search landed me at > BlackBox LES1204A-3G-R2, but I've never actually used such a device. This > would be for use in the USA. OpenGear all the way. Models for every need. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: PGP signature
Re: Juniper Advertise MED on EBGP session.
In a message written on Tue, Feb 21, 2017 at 11:10:35AM -0800, Keenan Tims wrote:
> I also spent a significant amount of time trying to figure out a way to
> do this, and was using communities for a while before I found a
> solution. It turns out that the expression knob lets you use the
> existing metric as an input, and this works to export the iBGP MED, at
> least on my 12.3X48 SRX:
>
> then {
>     metric {
>         expression {
>             metric multiplier 1;
>         }
>     }
> }

This is exactly what I needed. It works perfectly. Many, many thanks.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Juniper Advertise MED on EBGP session.
I tried to pull an old trick out of my playbook this morning and failed. I'd like to advertise BGP Metrics on an EBGP session, specifically the existing internal metrics. I know how to do this on a Cisco, but I tried on a Juniper and it seems to be impossible. I can set a metric in a policy, or put a default metric on the session as a whole, or even set it to IGP. But none of those are what I want. I want the existing metrics advertised as-is, just like would be done over an IBGP session. After an hour of reading documentation and trying a few things, I'm starting to think it may be impossible on JunOS. Anyone have a tip or trick? -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: PGP signature
Re: Passive Optical Network (PON)
In a message written on Sat, Jan 21, 2017 at 03:22:20PM -0600, Stas Bilder wrote:
> Now, to the projects.
> I have never heard of seen PON on a DC level.

A friend of mine told me of a fascinating in-data-center PON solution. He had a customer that needed high-speed multicast fan-out. They chose to use a 10G/1G PON solution: a single source node sending 10G PON with a 1G backchannel, where 100% of the traffic was multicast. It could be passively split in the DC up to 128:1 with zero "packet loss"; good luck finding an Ethernet switch that can take in 10G of multicast and turn it into 1280G of multicast out without dropping a frame. It was all done entirely inside of a single data center.

Since then I've mentioned the trick to several other folks I know who need high-speed multicast/broadcast replication. In the DC, distance is rarely an issue, so the solution degenerates to special SFPs and a splitter, which is pretty dang simple.

However, this is clearly a corner case, and I agree with your assessment overall.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
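The 1280G figure is just the passive split doing the replication for free, since every leg of the splitter sees the same downstream light:

```python
SPLIT_RATIO = 128        # passive optical split, 128:1
DOWNSTREAM_GBPS = 10     # 10G PON downstream, all of it multicast

# One transmitted stream reaches every split leg, so aggregate delivered
# bandwidth is the split ratio times the line rate:
delivered_gbps = SPLIT_RATIO * DOWNSTREAM_GBPS
assert delivered_gbps == 1280   # the "1280G of multicast out" in the post
```

An Ethernet switch has to replicate each frame in silicon to get the same effect, which is where the dropped frames come from; the splitter is doing it in glass.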
Re: BGP Route Reflector - Route Server, Router, etc
In a message written on Thu, Jan 12, 2017 at 08:32:44PM +, Justin Krejci wrote:
> I am working on some network designs and am adding some additional routers to
> a BGP network. I'd like to build a plan of changing all of the existing
> routers over from full iBGP mesh to something more scalable (ie route
> reflection).

You might want to better define "scalable". I don't know your background or network, so I can't guess. I can say I've seen the inner workings of some large ISP networks with a lot of hosts in iBGP that work fine, and then seen people with 5 routers try to tell me they have a scaling problem.

What is your actual problem? Memory usage? Convergence time? Configuring the sessions? Staff understanding of how it works?

> I am wondering if people can point me in the direction to some good resource
> material on how to select a good BGP route reflector design. Should I just
> dust off some 7206VXR routers to act as route reflectors?

This is a red flag to me, relative to the questions above. The 7206VXR, even with an NPE-G2, is a 1.5GHz PowerPC with a paltry 2GB of DRAM. It was not speedy when new, being roughly equivalent to the PowerPC G4 processors in Apple laptops at the time. It is approximately 8 times slower than a current iPhone. Seriously.

If convergence time is anything you care about, a 7206VXR is a very bad choice. It may also run out of memory if you have a lot of edges with full tables.

So what's the actual "scaling" problem?

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
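For the session-count dimension of "scalable", the arithmetic is easy to write down. A full mesh needs n(n-1)/2 sessions; route reflection replaces that with client-to-reflector sessions plus a small mesh among the reflectors (other costs, like reflector CPU and path hiding, are not modeled here):

```python
def ibgp_full_mesh_sessions(n: int) -> int:
    """Full iBGP mesh: every router peers with every other router."""
    return n * (n - 1) // 2

def route_reflector_sessions(n: int, reflectors: int) -> int:
    """Each client peers only with the reflectors; reflectors mesh together."""
    clients = n - reflectors
    return clients * reflectors + ibgp_full_mesh_sessions(reflectors)

assert ibgp_full_mesh_sessions(5) == 10        # the 5-router "problem": trivial
assert ibgp_full_mesh_sessions(100) == 4950    # the mesh grows quadratically
assert route_reflector_sessions(100, 2) == 197 # 98 clients x 2 RRs + 1
```

At 5 routers the mesh is 10 sessions and there is no scaling problem to solve; at 100 routers the difference is real, which is why "what's the actual problem" is the right first question.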
Re: Fiber Costs [Was: Re: SoCal FIOS outage(?) / static IP readdressing]
In a message written on Tue, Jan 10, 2017 at 10:21:53AM -0500, Fletcher Kittredge wrote:
> Numbers for building fiber optic systems are out there if you do the
> research. Joining the FTTH Council is a good start. One thing to recognise
> is that the numbers vary widely based on what is being built and where it
> is being built. There are large regional, technology, and product
> variations. Verizon has economies of scale few can match.

That's actually why I find this interesting. It's not so much the raw price; I'm sure there is plenty written on that in various forums, and anyone serious about putting fiber in the ground knows what the numbers are. Rather, it's the external factors that affect price. I'm sure everyone on this list could guess that labor prices and permitting vary, but the anecdotal information about permitting, soil, working with other utilities, and so on that drives much of the cost is fascinating.

Perhaps I could have phrased it better: I don't care so much that it's $15/foot in Frostbite Falls, but I am very interested in why it is $15/foot in Frostbite Falls.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Fiber Costs [Was: Re: SoCal FIOS outage(?) / static IP readdressing]
I don't know about the rest of the list, but I find these numbers fascinating. There's probably not that many people who are allowed to share them, but if more could, I think that would be educational for a lot of folks.

In a message written on Wed, Jan 04, 2017 at 08:37:19AM -0500, Jared Mauch wrote:
> I’ve been thinking of the same in my underserved area. Labor is $5/foot here
> and despite friends and colleagues telling me to move, it seems I have a
> sub-60 month ROI (and sub-year for some areas I’ve modeled with modest uptake
> rates of 15-20% where the other options are fixed wireless, Cellular data or
> dial).

In a message written on Wed, Jan 04, 2017 at 01:50:48PM +, Luke Guillory wrote:
> Our model is 15k a mile all in, this is for aerial not underground for our
> HFC/Coax builds. A partner of ours models their underground fiber builds at
> 30k a mile.

In a message written on Wed, Jan 04, 2017 at 09:08:51AM -0500, Shawn L wrote:
> Depending on the area and conditions (rock, etc). We're seeing
>
> $4 /foot Aerial
> $5-$7 /foot direct bury
> $10 - $14 /foot directional bore

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: SoCal FIOS outage(?) / static IP readdressing
It is interesting to see the differences. For instance, to put my unit in bridge mode I just logged into it, said bridge mode, and rebooted it. That method was actually documented in their business-class FAQ. I'm sure there are great differences in plant and capabilities. There's a lot of M&A history and a lot of historical reasons, good and bad.

In a message written on Fri, Jan 06, 2017 at 11:55:56AM -0800, Owen DeLong wrote:
> They don’t offer double-play business pricing in my area and, in fact,
> refused to sell me business class TV service in a residential unit. When

On some of the issues like this I wonder if the reason is regulatory. In the two areas I've checked, there is double-play business pricing. In one of them they offered business-class TV at a residential address, or at least the rep tried to sell it to me. It's the sort of thing that just feels like a regulatory issue to me.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: SoCal FIOS outage(?) / static IP readdressing
In a message written on Wed, Jan 04, 2017 at 04:51:26PM -0800, Paul B. Henson wrote:
> I'd call my business FIOS "prosumer" ;). Honestly, I'm not sure why
> you'd get business FIOS over residential FIOS if you don't need static
> IP addresses, at least if you're at an address where both are available.

I can't speak to Verizon, but I can speak to Comcast. At a past address I had Comcast Business (cable modem) service at a residential address, and then later downgraded it to Comcast Residential service.

The similarities:

- Both used the exact same cable coming into the house.
- Both offered the same speeds.
- Both offered static IPs for an additional fee.
- Both clearly used the same routers, backbone, peering, etc.

The differences I could see:

- Cable Modem
  - Residential: could rent a consumer-grade modem or BYO (I did, a good one).
  - Business: Comcast supplied and required their better-than-average modem. It could be in bridge mode though.
- Support
  - Residential: 0-30 minutes on hold; the one dispatch when I needed a truck roll took ~24 hours.
  - Business: 0-2 minutes on hold; I had two dispatches, one where the truck arrived within 30 minutes, the other in about 2 hours.
- Cost (at the time)
  - Residential: $75/month.
  - Business: $90/month.
- Data Caps
  - Residential: 250GB/month.
  - Business: None (with two paragraphs of disclaimer).

Differences I could not see/verify:

- Cable Modem Channel Selection
  - I'm told in some cases business-class cable modems get different DOCSIS channels, which have less congestion than typical residential channels. This of course varies greatly market to market, and is also dependent on the number of both resi and business subs on the segment.
- Packet Prioritization
  - I'm told that business-class packets are given somewhat higher priority (QoS) in the network. I could find no way to verify this, and generally had no packet-loss issues inside the Comcast network with either service.
Ultimately, the reason to buy business class at a residential address (and I think the "prosumer" description is correct) is generally faster repair times. On congested segments it may also mean slightly less packet loss. Maybe, depending on how caps are done, it could be worthwhile if you move a lot of data.

Whether these differences are worth the delta in price depends on your situation and the exact delta at your location. At the time I had this service I was working from home, so the extra $15/month insurance that I could do my job was money well spent.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Benefits (and Detriments) of Standardizing Network Equipment in a Global Organization
In a message written on Thu, Dec 29, 2016 at 01:22:28PM -0500, valdis.kletni...@vt.edu wrote:
> Say you're doing business in 100 countries, with some stated level of
> possible autonomy for each business unit.

In all honesty, the original question was a poor straw man for multiple reasons:

* Basically nobody does business in 100 countries. Level 3 only claims 60. Verizon does claim 150, but a lot of those are rather arms-length deals. Apple has a "presence" in 97 countries. It's a question about perhaps not a unicorn, but a rare albino pony only seen a few times in the wild!

* The companies that do business in these countries rarely have "100 national business units". The chance that all countries are wholly owned subsidiaries created by the corporate parent is zero. They are partnerships, co-branding deals, buyouts of existing companies. All of which bring baggage that affects the question more than any technical details.

* Because of how these entities come to be, the chance that the network contains only Vendors A and B, and that corporate gets to dictate anything, is zero. The network will have Vendors A-Z, plus a few more. Legacy stuff hidden in corners from 50 different M&A activities. Multiple engineering teams, in multiple locations.

* Technical people never get to decide the level of "autonomy". A mix of local laws, M&A terms, and other business interests will rule.

> Is it better for upper corporate to say "all 100 national business units
> will use vendor A for edge devices and vendor B for routing", or "all 100
> business units shall choose, based on local conditions such as vendor
> support, a standard set of vendors for their operations"?

Which leads to an easy answer. It's better for upper corporate to negotiate bulk deals (note I did not say one vendor) and offer standard solutions to each national BU, so that the engineering does not need to be repeated 100 times. Simple economies of scale.
That said, some number of the national BUs will not follow that advice, for perhaps good and often bad reasons.

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Benefits (and Detriments) of Standardizing Network Equipment in a Global Organization
In a message written on Wed, Dec 28, 2016 at 01:39:59PM -0500, Chris Grundemann wrote:
> An alternative multi-vendor approach is to use 1 vendor per stack layer,
> but alternate layer to layer. That is; Vendor A edge router, Vendor B
> firewall, Vendor A/C switches, Vendor D anti-SPAM software, etc. This
> doesn't address the bug impact issue as well as it alleviates the vendor
> "ownership" issue though...

While a lot of people seem to be beating you up over this approach, many folks end up in it for various reasons. For instance, the chances a vendor makes both a functional edge router and a high-quality firewall are low, which means they are often sourced from different companies.

But I think the question others are trying to ask is a different hypothetical. Say there are two vendors, both of which make perfectly good edge routers and core routers. What are the pros of buying all of the edge from one, and all of the core from the other?

I have to admit I'm having trouble coming up with potential technical upsides to such a solution. There may well be a business upside, in that due to the way price structures are done such a method saves capital. But in terms of technical resilience, if there's a bug that takes out all cores or all edges the whole network is down, and there's actually 2x the risk, as it could happen at either layer!

-- Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Benefits (and Detriments) of Standardizing Network Equipment in a Global Organization
In a message written on Fri, Dec 23, 2016 at 03:36:10PM -0500, Chris Grundemann wrote:
> If you have a case study, lesson learned, data point, or even just a strong
> opinion; I'd love to hear it!

I think the high-level items are pretty clear here:

1 Vendor

Quicker/easier to implement; staff only needs to learn/configure one platform; the vendor can help end to end; usually fewer interop issues. The spend may get extra discounts or support bennies. However, one bug can wipe out everything, there is no ability to compare real-world performance with a competitor, and the vendor may think they "own" you come renewal or more sales. Hard to threaten to leave.

2 Vendors

Can be implemented multiple ways, for instance one vendor per site at alternating sites, or gear deployed in pairs with one from each vendor up and down the stack. Harder to implement: staff needs to know both, all configs must be done for both, and each vendor will always blame the other for interop issues. Twice as much chance of needing to do emergency upgrades. More resilience to a single bug, and you can compare the real-world performance of the two vendors. Both vendors will compete hard to get more of your business, but have a harder time justifying bennies internally due to the lower spend.

3 or More Vendors

Generally the same as two vendors, just ++. That is, more of the pros and more of the cons. There is limited additional upside to having 3 or more active vendors. This generally occurs as a vendor falls out of favor: two new ones get deployed moving forward, and the old one sticks around for a while.

Having worked places that were single-vendor, 2-vendor, and "whatever we can buy" shops, I'll say it basically doesn't matter. What matters is how you set up the org. Want to be lean on staff? Go single vendor. Want maximum resilience and/or negotiating power? Go 2 vendor. Inherit a mess? Learn to live in a 3+ vendor world. It's not that one is better than the other; it's just that they require different approaches to get the same outcome.
--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Forwarding issues related to MACs starting with a 4 or a 6 (Was: [c-nsp] Wierd MPLS/VPLS issue)
In a message written on Fri, Dec 02, 2016 at 08:50:40PM +0100, Job Snijders wrote:
> IEEE told one of my friends: "We changed our allocation methods to
> prevent vendors using unregistered mac addresses."
>
> Does the cost of some squatters on poorly usable MAC space outweight the
> cost of the community spending countless hours tracking down where those
> dropped packets went?

That's the wrong question to ask. The right question is: what could have been done to prevent this entire situation?

This problem has occurred in all sorts of number spaces before. There have been squatters in almost every number space, boxes "optimized" based on the pattern of allocation, and code bugs that went unnoticed because part of the number space was unused. It's happened to MACs, IPs, ports, even protocol numbers.

One of the answers is to allocate numbers better. Starting at the bottom and working up is almost never the optimal solution. Various sparse allocation strategies exist which ensure a wider range of addresses is used early, that there is a greater chance of whacking a squatter early, and that the number space ends up more efficiently used in many cases.

Had the IEEE allocated MACs starting with 0, then 2, then 4, then 6, then 8, then 10, then 12, then 14, this problem would likely have been identified early on in vendor labs when testing the pseudowire code, and it would have prevented the "hack" of looking deeper in the packet and guessing because too many 4 and 6 MACs were already deployed.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
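The sparse strategy described above can be sketched in a few lines: instead of handing out the next sequential value, hand out values in bit-reversed counter order, which spreads early allocations across the whole space. This is an illustration of the general idea, not any registry's actual policy:

```python
def sparse_order(bits):
    """Yield all values of a `bits`-wide number space in bit-reversed
    counter order, so early allocations land far apart instead of
    clustering at the bottom of the space."""
    for i in range(2 ** bits):
        yield int(format(i, "0{}b".format(bits))[::-1], 2)

# First eight allocations from a 4-bit space: spread across 0-15
# rather than packed into 0-7, so squatters and boxes that cheat on
# unused ranges surface early.
print(list(sparse_order(4))[:8])  # [0, 8, 4, 12, 2, 10, 6, 14]
```

With sequential allocation the top half of the space stays untouched for years, which is exactly how "optimizations" that assume those bits are never set survive into shipping products.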
Re: Forwarding issues related to MACs starting with a 4 or a 6 (Was: [c-nsp] Wierd MPLS/VPLS issue)
In a message written on Fri, Dec 02, 2016 at 03:32:13PM +0100, Job Snijders wrote:
> Dear Vendors, take this issue more serious. Realise that for operators
> these issues are _extremely_ hard to debug, this is an expensive time
> sink. Some of these issues are only visible under very specific, rare
> circumstances, much like chasing phantoms. So take every vague report of
> "mysterious" packetloss, or packet reordering at face value and
> immediately dispatch smart people to delve into whether your software or
> hardware makes wrong assumptions based on encountering a 4 or a 6
> somewhere in the frame.

I also do not think this is an IEEE/MAC assignment problem. This is a "vendor's box can't forward a particular payload" problem.

If I had boxes with this issue, I would be talking to my vendor about:

a) How they were going to replace every single one of them with something that does not have the bug.

b) What discount I would get on maintenance/support for having to swap all of the devices.

Then I would follow up with the other vendors I'm talking to for future purchases, asking whether they are able to produce boxes that work. And if the vendor who supplied these did not fix it, I would give them no more business.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Accepting a Virtualized Functions (VNFs) into Corporate IT
In a message written on Mon, Nov 28, 2016 at 01:10:29PM -0500, Jared Mauch wrote:
> my experiences say that most people would accept this. things like IT are a cost
> and any way to externalize that cost makes sense. If you look at something like
> a SMB service, where you have mandatory NID or provider managed CPE/handoff,
> having a solution pre-built seems like a no-brainer.

Historically, I agree. However, I sense the winds are changing on this issue.

Various auditors and certification schemes have changed over the past 2-3 years to be much more skeptical of these sorts of devices. They want to see "endpoint security" (AV and/or fingerprinting) on all devices. To the extent these "appliance" VMs are standard OSes (often CentOS), they are more insistent it should be possible. Where it is not possible, they want to see severe network quarantine, for instance per-host firewalls to lock down the devices.

I'm not sure why the OP was asking, but if they are developing a new product of this type, I might suggest they consider their response to a customer who says they need endpoint security on it before building it.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Spitballing IoT Security
In a message written on Tue, Oct 25, 2016 at 04:52:58AM -, John Levine wrote:
> My nearest Apple stores are 50 miles away. I'm not sure 100 miles in
> the car is a good tradeoff for one phone.

Scroll down a bit further: "Tell us which device you have, and we’ll email you a prepaid mailing label. Once you’ve deleted your data, ship your device to us, and we’ll handle the rest."

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Should abuse mailboxes have quotas?
In a message written on Thu, Oct 27, 2016 at 08:03:11AM -0700, Stephen Satchell wrote:
> For the last couple of weeks, every single abuse mail I've tried to send
> to networks in a very short list of countries has bounced back with
> "mailbox exceeds quota". I take this to mean that there isn't someone
> actively reading, acting on, and deleting e-mail from abuse@.

Are there any ISPs left that read and respond to abuse@ in a timely fashion? I haven't seen one in at least a decade. Maybe I e-mail the wrong ones.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Spitballing IoT Security
In a message written on Wed, Oct 26, 2016 at 05:27:08PM -0700, Ronald F. Guilmette wrote:
> do let me know how I can obtain this month's security patches for my iPhone
> 3GS.
>
> (Note that Wikipedia says that this device was only formally discontinued
> by the manufacturer as of September 12, 2012, i.e. only slightly more
> than 4 short years ago. Nontheless, the current "security solution" for
> this product, as made available from the manufacturer... a manufacturer
> which is here being held up as a shining example of ernest social responsi-
> bility... is for me to contribute the entire device to my local landfill,
> where it will no doubt leach innumerable heavy metals into the soil for
> my children's children's children to enjoy.)

Actually, they encourage you to trade it in, where it is used for replacement parts and/or recycled; see http://www.apple.com/iphone/trade-up/. If your device is too old for that program, they will still take it for free and recycle it in an environmentally friendly way; see http://www.apple.com/recycling/.

No iPhone should ever end up in a landfill. If it does, it's your fault for not taking advantage of the free recycling.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Spitballing IoT Security
In a message written on Wed, Oct 26, 2016 at 04:40:57PM -0300, jim deleskie wrote:
> So device is certified, bug is found 2 years later. How does this help.
> The info to date is last week's issue was patched by the vendor in Sept
> 2015, I believe is what I read. We know bugs will creep in, (source anyone
> that has worked with code forever) Also certification assuming it would
> work, in what country, would I need one, per country I sell into? These
> are not the solutions you are looking for ( Jedi word play on purpose)

You're referencing a wider problem set than I am trying to solve. Problems I think consumer safety legislation can solve:

* SSH and Telnet were enabled, but there was no notification in the UI that they were enabled and no way to turn them off. Requirements could be set to show all services in the UI, and whether they are on or off.

* There was a hard-coded user + pass that the consumer COULD NOT CHANGE, and that was not displayed. Requirements could be set to never hard code an account.

* That the system has a user-friendly way to update. "Click here to check for update." "Click here to install update."

What consumer safety legislation can't do is ensure a patch is made available at some point in the future.

As for certification, I will point out that minimally all of these products are already getting CE, UL, and FCC (if wireless) certification. They also have to meet other regulations (e.g. RoHS) to be imported. To really minimize the burden, these security items could be added to one of the existing schemes so there is no additional org. The idea that a certification per country is too difficult is pretty much debunked by the fact that it is already that way, multiple times over in most cases.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Spitballing IoT Security
In a message written on Wed, Oct 26, 2016 at 08:06:34AM -0400, Rich Kulawiec wrote:
> The makers of IoT devices are falling all over themselves to rush products
> to market as quickly as possible in order to maximize their profits. They
> have no time for security. They don't concern themselves with privacy
> implications. They don't run networks so they don't care about the impact
> their devices may have on them. They don't care about liability: many of
> them are effectively immune because suing them would mean trans-national
> litigation, which is tedious and expensive. (And even if they lost:
> they'd dissolve and reconstitute as another company the next day.)
> They don't even care about each other -- I'm pretty sure we're rapidly
> approaching the point where toasters will be used to attack garage door
> openers and washing machines.

You are correct. I believe the answer is to have some sort of test scheme (Underwriters Laboratories?) for basic security and updateability. Then federal legislation is passed requiring any product being imported into the country to be certified, or it is refused.

Now when they rush to market and don't get certified, they get $0 and go out of business. Products are stopped at the border, every shipment is reviewed by authorities, and there is no cross-border suing issue.

Really, it's product safety 101. UL, the CPSC, NHTSA, DOT, and a host of others have regulations that say if you want to import a product for sale, it must be safe. It's not a new or novel concept; pretty much every country has some scheme like it.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Death of the Internet, Film at 11
In a message written on Sat, Oct 22, 2016 at 07:34:55AM -0500, Mike Hammett wrote:
> "taken all necessary steps to insure that none of the numerous specific types
> of CCVT thingies that Krebs and others identified"

From https://krebsonsecurity.com/2016/10/hacked-cameras-dvrs-powered-todays-massive-internet-outage/#more-36754

The part that should outrage everyone on this list:

    That's because while many of these devices allow users to change the default usernames and passwords on a Web-based administration panel that ships with the products, those machines can still be reached via more obscure, less user-friendly communications services called "Telnet" and "SSH."

    "The issue with these particular devices is that a user cannot feasibly change this password," Flashpoint's Zach Wikholm told KrebsOnSecurity. "The password is hardcoded into the firmware, and the tools necessary to disable it are not present. Even worse, the web interface is not aware that these credentials even exist."

As much as I hate to say it, what is needed is regulation. It could be some form of self-regulation, with retailers refusing to sell products that aren't "certified" by some group. It could be full-blown government regulation. Perhaps a mix.

It's not a problem for a network operator to "solve", any more than someone who builds roads can make an unsafe car safe. Yes, both the network operator and the road operator play a role in building safe infrastructure (BCP38, deformable barriers), but neither can do anything for a manufacturer who builds a device that is wholly deficient in the first place.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: MPLS in the campus Network?
In a message written on Fri, Oct 21, 2016 at 12:02:24PM -0500, Javier Solis wrote:
> In a campus network the challenge becomes extending subnets across your
> core. You may have a college that started in one building with their own
> /24, but now have offices and labs in other buildings. They want to stay on
> the same network, but that's not feasible with the routed core setup
> without some other technology overlay. We end up not being able to extend
> the L2 like we did in the past and today we modify router ACL's to allow
> communications. If you already have hundreds of vlans spanned across the
> network, it's hard to get a campus to migrate to the routed core. I think
> this may be one of Marks challenge, correct me if I'm wrong please.

FWIW, if I had to solve the "college across buildings with common access control" problem, I would create MPLS L3 VPNs, one subnet per building (where it is a VLAN inside of a building), with a "firewall in the cloud" somewhere to get between VLANs, with all of the policy in one place.

No risk of the L2-across-buildings mess, including broadcast and multicast issues at L2. All tidy L3 routing. You can use a real firewall between L3 VPN instances to get real policy tools (AV, URL filtering, malware detection, etc.) rather than router ACLs. It scales to huge sizes because it's all L3 based.

Combine with 802.1x port authentication and NAC, and in theory every L3 VPN could be in every building, with each port dynamically assigned its VLAN based on the user's login! Imagine never manually configuring them again. Write a script that makes all the colleges (20? 40? 60?) appear in every building, all attached to their own MPLS VPNs, and then the NAC handles port assignment.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
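The closing idea — a script that provisions every college's VPN in every building — can be sketched quickly. Everything here (college names, the VLAN numbering scheme, the IOS-style stanza) is hypothetical, just to show how mechanical the generation becomes once the design is per-college VRFs:

```python
# Hypothetical generator for per-college VRF + VLAN config, one stanza
# per college per building, ready to push to each building switch.

colleges = ["engineering", "medicine", "law"]  # hypothetical list

def vrf_config(college, index, building):
    vlan = 100 + index  # hypothetical VLAN numbering scheme
    return "\n".join([
        "! building {}: VRF for {}".format(building, college),
        "vrf definition {}".format(college),
        " rd 65000:{}".format(vlan),
        " route-target both 65000:{}".format(vlan),
        "vlan {}".format(vlan),
        " name {}".format(college),
    ])

for building in ["bldg-a", "bldg-b"]:
    for i, college in enumerate(colleges):
        print(vrf_config(college, i, building))
        print()
```

The point is less the syntax than the shape: with NAC doing port-to-VLAN assignment at login time, the only per-device state is generated, never hand-edited.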
Re: MPLS in the campus Network?
From what you describe, I think you have many options, including more than just the ones you laid out. When you're under 10km and own your own fiber, the possibilities are virtually limitless.

First off, you don't want to be running spanning tree across a campus. While I don't think you need to eliminate it completely, as some in the industry are pressing, doing it at the scale you describe is probably a world of hurt.

I would challenge your port cost assumption for "routers". For instance, the Arista 7280 can be had with 48 10GE SFP+ ports with full Internet routing capabilities. If you're used to Cisco or Juniper, it is worth looking further afield these days.

I would also challenge the idea that there is one way to do the job. It may be easier to build a couple of networks: perhaps a router-based one to deliver IP services, and a separate "Metro Ethernet" network to deliver L2 VLAN transport. It may sound crazy that buying two boxes is cheaper than one, but it can be, depending on the exact scale and port count. Heck, depending on your port count, doing passive DWDM to interconnect switches in each office may be cheaper than encapsulating in MPLS. A lot of it also depends on your monitoring requirements, or lack thereof.

In a message written on Thu, Oct 20, 2016 at 03:43:26PM +0200, steven brock wrote:
> How would you convince your management that MPLS is the best solution for
> your campus network ? How would you justify the cost or speed difference ?

Well, cost and speed are two prime considerations, but there are other important ones. Vendors support platforms and features based on the customer base. If you buy a box everyone does MPLS on, and then use it for TRILL, you'll be in a world of hurt, particularly if you want a long, stable life. Ride with the crowd: use a platform many others are using for the same job.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Two BGP peering sessions on single Comcast Fiber Connection?
In a message written on Thu, Oct 13, 2016 at 05:48:18PM +, rar wrote:
> The goal is to keep the single BGP router from being a single point of
> failure.

I don't really understand the failure analysis / uptime calculation.

There is one router on the Comcast side, which is a single point of failure. There is one circuit to your prem, which is a single point of failure. To connect two routers on your end, you must terminate the circuit in a switch, which is a single point of failure.

And yet, in the face of all that, somehow running two routers with two BGP sessions on your end increases your uptime? The only way that would even remotely make sense is if the routers in question were so horribly broken / mismanaged that they had to be rebooted on a regular basis. But if uptime is so important, using gear with that property makes no sense!

I'm pretty sure, without actually doing the math, that you'll be more reliable with a single quality router (elimination of complexity), and that if you really need maximum uptime you had better get a second circuit, on a diverse path, into a different router, probably from a different carrier.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
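Actually doing the math backs this up. A minimal sketch with illustrative (not measured) availability figures: components in series multiply, so the carrier router and the single circuit set a ceiling that no amount of gear on your side can raise — and the switch you must add to attach two routers eats roughly what the redundant router pair gains you:

```python
# Series availability: every component must be up, so multiply.
def series(*avail):
    p = 1.0
    for a in avail:
        p *= a
    return p

# Assumed figures, purely illustrative:
carrier_router = 0.9999
circuit        = 0.999
switch         = 0.9999   # needed only to attach two routers
your_router    = 0.9999

# One router, directly on the circuit.
single = series(carrier_router, circuit, your_router)

# Two routers behind one switch: the router *pair* fails only if
# both fail, but the switch is now in series with everything.
pair = series(carrier_router, circuit, switch,
              1 - (1 - your_router) ** 2)

print("single router:            {:.8f}".format(single))
print("dual routers via switch:  {:.8f}".format(pair))
```

With these numbers the dual-router design comes out marginally *worse*, because the added switch cancels the redundancy gain while the circuit dominates both cases — which is why the second diverse circuit, not a second router, is where the real uptime is.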
Re: BCP38 adoption "incentives"?
In a message written on Tue, Sep 27, 2016 at 08:44:35PM +, White, Andrew wrote:
> This assumes the ISP manages the customer's CPE or home router, which is
> often not the case. Adding such ACLs to the upstream device, operated by the
> ISP, is not always easy or feasible.

Unicast RPF is a feature every ISP should have required of all edge devices for at least 15 years now. It should be on by default for virtually all connections, and disabled only by request or when circumstances suggest it would break things (e.g. a request for BGP with full tables over the link).

At this point there's no excuse. Anyone who has gear that can't do it has been asleep at the switch. It's been a standard feature in too much gear for too long.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
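For the avoidance of doubt about how little work this is on the upstream device: a hedged illustration (interface names are placeholders; verify the exact syntax for your platform and code train) of strict-mode uRPF on a single-homed customer edge port:

```
! IOS-style: drop packets whose source address fails a
! reverse-path check against the FIB on the receiving interface.
interface GigabitEthernet0/1
 description Customer access port (single-homed)
 ip verify unicast source reachable-via rx
!
! Junos-style equivalent, per address family:
! set interfaces ge-0/0/1 unit 0 family inet rpf-check
```

Strict mode ("reachable-via rx") fits single-homed customer ports; multihomed and asymmetric-routing cases are exactly the "disabled only by request" exceptions mentioned above.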
Re: Optical Wave Providers
You're not wrong, but you're not right, for two reasons.

I believe the OP really wants a "transparent" service. It could be a true wave, but it could also be a 10G channel muxed onto a 100G service. The properties they probably really care about are a raw bitstream and guaranteed bandwidth. In the world of carrier marketing and sales speak, this is a "wavelength" service even if it's not a true wavelength. Indeed, these services are often made up of different underlying services end to end: a dark fiber tail, Nx10G local transport, muxed onto Nx100G long-haul transport, and the reverse on the other end.

On the technical side of things, there are plenty of carriers with Nx10G systems across the country that have not upgraded them to Nx100G for any number of reasons; often they are 40-80% full of paying customers and profitable. They are quite happy to sell an additional 10G wave and get more money out of their sunk cost. Indeed, if you want a wave on the right path (read: paid for, with plenty of free capacity), you can get waves for rock-bottom prices.

To the OP's original point, major carriers that offer this class of service include Level 3, CenturyLink, Zayo, and XO. Depending on location, there are regional carriers like LightTower, specialized providers like IX Reach, or possibly even your favorite colocation provider like Equinix or CoreSite. There are probably at least 30 more; many don't advertise these services widely (low margin, requires a clued customer), but if you ask, they are available.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: NANOG67 - Tipping point of community and sponsor bashing?
In a message written on Fri, Jun 17, 2016 at 02:58:12PM +0100, Marty Strong wrote:
> Yes, if the IXP is distributed across more than one building then you have
> choice as to where you (and other people) put their equipment, so you may
> have to go to another building to connect to certain peers. Sadly nobody
> lives in a perfect world, so IMO having the IXP distributed across multiple
> buildings is better as you can connect to all those who are in your building
> directly, and peer with the rest over the distributed IXP.

I don't think there is an absolute right or wrong answer. The ISP who needs to connect to 100 ISPs at 50M each has a dramatically different need than the ISP that needs to connect to 20 ISPs at 6x100G each. Both exist in the world. The presenter clearly thought that a number of IXPs aren't serving their customers/members well. What we're finding out in this thread is how many folks agree or disagree! :)

Personally, I'm with another poster: the real problem here is colos that want to charge large MRCs for a cross connect. I know of at least one still trying to get $1000/mo for a fiber pair to another customer. For $1000/mo I can get GigE transit delivered _to my office_ by multiple carriers. To charge that for a cross connect is just so, so wrong.

IMHO, in-building fiber should be NRC only, but if it has an MRC component (to pay for future troubleshooting or somesuch) it should be small, like $5/mo. That's $60 a year to do nothing, and even if a $40-an-hour fiber tech spends an hour troubleshooting _every fiber_ (which doesn't happen), the colo still makes money.

Cross connects are our industry's $100 gold-plated HDMI cables.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: NANOG67 - Tipping point of community and sponsor bashing?
In a message written on Thu, Jun 16, 2016 at 05:56:36PM +0100, Will Hargrave wrote:
> Most of the major IXs in the European market operate in multiple
> datacentres. Why? Because it decreases the monopoly conferred upon one
> particular datacentre in a market which becomes the ‘go to’
> location.

It moves the monopoly to the IXP operator!

When everyone is in one facility (or at least one building), it is typically easy to get low-priced (although maybe not low enough; see other posts in this thread) cross connects. It's common to see a pair of public peers fill up a significant part of their port, and then move to a private peering model, getting off the IXP and onto glass directly.

When the IXP is distributed, this becomes glass between buildings, often requiring yet another supplier as well. The MRCs are higher, making the justification to move off harder. What happens is that rather than moving off to glass, they have to buy faster/more ports from the IXP and move the traffic over the IXP. The IXP becomes the go-to monopoly as a result.

Now, perhaps IXPs are more benevolent than data center operators, and this is a good trade. I think one thing the presentation was asking people to do was step back, look at the situation, and reevaluate that particular tradeoff.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: NANOG67 - Tipping point of community and sponsor bashing?
In a message written on Wed, Jun 15, 2016 at 08:25:04AM +0300, Hank Nussbacher wrote:
> I am not at NANOG67 and am following this issue remotely. Excuse me if I am
> getting this all wrong. Dave shows a slide that LINX made $2.3M profit and
> AMS-IX made $4.1M last year and Randy states "that the IXPs run us over to
> make an extra penny"?

When I view the presentation from a raw money basis, it seems like just a "we hate our suppliers" whine. For instance, it's quite normal for a "not for profit" to pay salaries and marketing, and even to "make a profit" from a raw accounting perspective. There's also nothing in the non-profit rules that disallows marketing departments or spending money on socials. If those things further the non-profit mission, it's all good.

However, there is another perspective where I think a good point is raised, and perhaps a bit lost. Some of these IXPs are "community run". Or well, they say they are "community run". But when the curtain is pulled back, perhaps they look a lot less community run and a lot more like a business with a savvy marketing department leading people to believe they are community run.

I do wonder how many people became a _member_ of these "community run" IXPs thinking that entitled them to some say over how they were run, only to discover that due to the bylaws and corporate structure they have in fact little to no say over anything. That is a form of bait-and-switch, and _may_ be a problem with some IXPs. Maybe the community wants a no-marketing, cost-recovery co-op service, and this is a way of rallying and organizing for that.

It's on that point that the presentation is classic NANOG: a bunch of operators getting together to discuss their common issues and figure out if there is a path forward to make things better.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: intra-AS messaging for route leak prevention
In a message written on Fri, Jun 10, 2016 at 10:50:17AM +0200, Job Snijders wrote:
> You say 'often', but I don't recognise that design pattern from my own
> experience. A weakness with the egress point (in context of route leak
> prevention) is that if you are filtering there, its already too late. If
> you are trying to prevent route leaks on egress, you have already
> accepted the leaked routes somewhere, and those leaked routes are best
> path somewhere in your network, which means you've lost.

It does mean the provider creating the leak has already lost, but that doesn't mean egress filtering isn't still vital to protecting the larger Internet. A good example of this is fire code. Most fire codes do not do much to prevent you from starting a fire in your own house/condo/apartment, but rather prevent it from spreading to your neighbors.

For instance, if you filter Customer A to A's prefix list on ingress, B to B's, and C to C's, it may also be prudent to filter outbound to your peers based on A+B+C's prefix list. When the ingress filter for A fails (typo, bug, bad engineer), your own network is hosed by whatever junk A ingested, but at least you won't pass it on to peers and spoil the rest of the Internet.

Basically, both ingress and egress filtering have weaknesses, and in some cases doing both can provide some mitigation. It's the old adage: "belt and suspenders".

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
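The A+B+C construction is literally a set union, which is why it is cheap to automate. A minimal sketch with documentation-range prefixes (exact-match only, ignoring covering-prefix subtleties, and all names hypothetical):

```python
# Belt-and-suspenders filtering: the egress prefix-list toward peers
# is the union of every per-customer ingress prefix-list, so a single
# broken ingress filter cannot leak a customer's junk to the world.

ingress = {
    "customer_a": {"192.0.2.0/24"},
    "customer_b": {"198.51.100.0/24"},
    "customer_c": {"203.0.113.0/24"},
}

# Egress filter = union of all customer ingress filters.
egress = set().union(*ingress.values())

def permitted_egress(prefix):
    """Would this prefix be announced to peers?"""
    return prefix in egress

print(permitted_egress("198.51.100.0/24"))  # True: a known customer route
print(permitted_egress("10.0.0.0/8"))       # False: a leak, stopped at egress
```

In practice this is generated prefix-list config rather than Python sets, but the invariant is the same: the egress list is derived from, never maintained separately from, the ingress lists — so they cannot drift apart.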
Re: IPv6 is better than ipv4
Warning: Hat = Enterprise Network Admin
Sarcasm = High

In a message written on Thu, Jun 02, 2016 at 01:31:43PM -0400, Christopher Morrow wrote:
> REALLY??? I mean REALLY? people that operate networks haven't haven't had
> beaten into their heads:
> 1) cgn is expensive

Wazzat? Isn't the C for Carrier? So, not my problem.

> 2) there is no more ipv4 (not large amounts for large deployments of new
> thingies)

I got a /24 from my provider years ago. I only use half of it. If we needed to economize we could probably go ahead and deploy name-based virtual hosting; the server guys have talked about that for years. I can't imagine I will ever run out of IPv4.

> 3) there really isn't much else except the internet for global networking
> and reachabilty

IPv4 currently has more reach than IPv6? Didn't you just tell me people aren't deploying IPv6?

> 4) ipv6 'works' on almost all gear you'd deploy in your network

I can't find it in the docs for our IBM Token Ring switch that connects the payroll mainframe to the ERP NEC box. That's our only critical application.

> and content side folks haven't had beaten into their heads:
> 1) ipv6 is where the network is going, do it now so you aren't caught
> with your pants (proverbial!) down

I thought all the providers were deploying that CGN thing so IPv4 kept working. They would never leave us high and dry, right?

> 2) more and more customers are going to have ipv6 and not NAT'd ipv4...
> you can better target, better identify and better service v6 vs v4 users.

I was told DNS64 fixed that problem, and carriers would have to deploy it as a transition strategy.

> 3) adding ipv6 transport really SHOULD be as simple as adding a

My IPAM software doesn't have support because I haven't bought a support contract for it for 10 years. Do I really need to buy new IPAM software?

> I figure at this point, in 2016, the reasons aren't "marketing" but either:
> a) turning the ship is hard (vz's continual lack of v6 on wireline
> services...)
> b) can't spend the opex/capex while keeping the current ship afloat
> c) meh

Actually, it's more that my boss has 100 "critical" initiatives and staff to do 20 of them, and IPv6 isn't even on the list. Our planning window is crisis to crisis — err, I mean quarter to quarter. Will my web site go down this quarter if I don't deploy it? Otherwise we can put it off.

Sadly, I wish all these answers were some sort of caricature of reality, but I think they are too many folks' reality.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: LLDP via SNMP
In a message written on Thu, May 26, 2016 at 10:16:21PM +0300, Saku Ytti wrote:
> LLDP standard itself is in my non-humble opinion broken. There are no
> guarantees that standard compliant LLDP will produce useful data.

Including inside a particular vendor's own implementation. For instance, some Juniper platforms advertise the interface name (ge-0/0/0) and some the description ("To Bob's ISP") for the "interface name" field in the CLI. So depending on the platform and version of code, you get totally different information from "show lldp neighbors".

It really makes it difficult to consume the data by script. Lots of special cases.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
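Those "lots of special cases" end up as heuristics in every consumer of the data. A minimal sketch of the kind of guard a collection script needs (the regex covers common Juniper-style interface name shapes and is illustrative only, not a complete vendor matrix):

```python
import re

# Rough pattern for Juniper-style interface names: ge-0/0/0,
# xe-2/0/1.100, ae0, lo0, etc. Purely a heuristic.
JUNOS_IFNAME = re.compile(r"^(ge|xe|et|ae|irb|lo)[-\d/:.]*\d$")

def classify_port_field(value):
    """Guess whether an advertised LLDP port field is a real
    interface name or a free-form description."""
    return "interface-name" if JUNOS_IFNAME.match(value) else "description"

print(classify_port_field("ge-0/0/0"))      # interface-name
print(classify_port_field("To Bob's ISP"))  # description
```

A standard that forced one semantic on the field would make this guesswork, and the per-platform exceptions it inevitably accretes, unnecessary.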
Re: Question on peering strategies
In a message written on Sun, May 22, 2016 at 09:33:38AM +0300, Max Tulyev wrote:
> That should be a more easy and much less expensive way for private
> interconnects than direct wires.

The problem is that peering is not an even distribution by traffic level.

When BigCDNCo connects to BigCableCo, they need 50x100GE. It's actually cheaper to run fiber between them at 10 locations, 5x100GE each, than it is to run fiber from both of them to a switch and have the vendor providing the switch engineer it to that capacity. (Hint: running to the switch is 2x the fiber, plus switch ports.)

On the other end of the spectrum, the guy who has 5Gbps of traffic can buy a 10GE into the switched exchange, have lots of headroom, and connect to everyone with the same port.

The truth of the matter is there are 40 players in the big pile, 15,000 providers in the small pile, and perhaps only 100 oddballs between the two.

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: Cost-effectivenesss of highly-accurate clocks for NTP
In a message written on Fri, May 13, 2016 at 03:39:27PM -0400, Eric S. Raymond wrote:
> According to RFC 5095 expected accuracy of NTP time is "several tens
> of milliseconds." User expectations seem to evolved to on the close
> order of 10ms. I think it's not by coincidence this is pretty close
> to the jitter in ping times I see when I bounce ICMP off a
> well-provisioned site like (say) google.com through my Verizon FIOS
> connection.

For a typical site, there are two distinct desires from the same NTP process.

First, synchronization with the rest of the world, which generally happens over the WAN and is the topic that was well discussed in your post. I agree completely that the largest factor there is WAN jitter.

The other is local synchronization: ensuring multiple servers have the same time for log analysis purposes and such. This typically takes place across a lightly loaded Ethernet fabric, perhaps with small latency (across a campus). Jitter here is much, much smaller.

Does the limitation on accuracy remain jitter in the low-jitter LAN/campus environment, or does that expose other issues, like the quality of oscillators, OS scheduling, etc.? Or, put another way, is it possible to get, say, 10's or 100's of nanoseconds of accuracy in the LAN/campus?

--
Leo Bicknell - bickn...@ufp.org
PGP keys at http://www.ufp.org/~bicknell/
Re: NIST NTP servers
In a message written on Wed, May 11, 2016 at 09:00:54AM -0500, Josh Reynolds wrote: > I hope your receivers aren't all from a single source. I have 4 each ACTS, GPS, and CDMA in my list, augmented with a pair of PTP. Amazingly, right now all but 3 are within 2 microseconds of each other, and those three outliers are 10 and 35 microseconds off. That's pretty impressive! I didn't have to buy any of them, because various trustable entities run those infrastructures. Some of the trustable entities are the same ones that send the time up to the GPS satellites. :) -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
Re: CALEA
In a message written on Tue, May 10, 2016 at 03:00:59PM -0500, Josh Reynolds wrote: > This is a large list that includes many Tier 1 network operators, > government agencies, and Fortune 500 network operators. > > The silence should be telling. NANOG has a strong self-selection for people who run core routing devices and do things like BGP and peering negotiations with other providers. By contrast, CALEA requirements are generally all met by features deployed at the customer edge. These groups are often in separate silos from the backbone folks at the largest providers. This is likely the wrong list for asking such questions, and the few who do answer are likely to be from smaller providers where people wear multiple hats. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
Re: NIST NTP servers
In a message written on Tue, May 10, 2016 at 08:23:04PM +, Mel Beckman wrote: > All because of misplaced trust in a tiny UDP packet that can worm its way > into your network from anywhere on the Internet. > > I say you’re crazy if you don’t run a GPS-based NTP server, especially given > that they cost as little as $300 for very solid gear. Heck, get two or three! You're replacing one single point of failure with another. Personally, my network gets NTP from 14 stratum 1 sources right now. You, and the hacker, do not know which ones. You have to guess at least 8 to get me to move to your "hacked" time. Good luck. Redundancy is the solution, not a new single point of failure. GPS can be part of the redundancy, not a sole solution. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ pgpZ8nfasXwtV.pgp Description: PGP signature
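ntpd's actual falseticker rejection is an intersection algorithm over correctness intervals, but the majority intuition behind "guess at least 8 of 14" can be sketched in a couple of lines (a simplified model, not ntpd's selection code):

```python
def sources_needed_to_subvert(n_sources):
    """Minimum servers an attacker must control so the bogus time forms
    a majority clique and survives falseticker rejection (simple
    majority model, not ntpd's full intersection algorithm)."""
    return n_sources // 2 + 1

print(sources_needed_to_subvert(14))  # -> 8
print(sources_needed_to_subvert(4))   # -> 3, why 4 sources is the minimum
                                      #    to out-vote a single falseticker
```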
Re: NIST NTP servers
In a message written on Mon, May 09, 2016 at 11:01:23PM -0400, b f wrote: > In search of stable, disparate stratum 1 NTP sources. http://wpollock.com/AUnix2/NTPstratum1PublicServers.htm > We tried using “time.nist.gov” which returns varying round-robin addresses > (as the link says), but Cisco IOS resolved the FQDN and embedded the > numeric address in the “ntp server” config statement. Depending on your hardware platform, your Cisco router is likely not a great NTP server. IOS is not designed for hyper-accuracy. > After letting the new server config go through a few days of update cycles, > the drift, offset and reachability stats are not anywhere as good as what > the stats for the Navy time server are - 192.5.41.41 / tock.usno.navy.mil. The correct answer here is to run multiple NTP servers in your network. And by servers I mean real servers, with good quality oscillators on the motherboard. Then configure them to talk to _many_ sources. You need 4 sources of time minimum to redundantly detect false tickers. If you're serious about it, then find ~10 stratum 1 sources (ideally authenticated and from trusted entities), one of which could be GPS as several have suggested. You'll then have high quality false ticker rejection. Configure all of your devices to get NTP from the servers you run, using authentication. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
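A minimal ntpd configuration along those lines might look like the sketch below; the server names are placeholders rather than real time sources, and the key IDs are illustrative.

```
# /etc/ntp.conf sketch -- server names and key IDs are placeholders
server time-a.example.net iburst key 1   # authenticated stratum 1 source
server time-b.example.net iburst key 2
server time-c.example.org iburst         # ...continue to ~10 diverse sources
server 127.127.20.0                      # local GPS refclock (ntpd NMEA driver)
keys /etc/ntp.keys
trustedkey 1 2
```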
Re: Latency, TCP ACKs and upload needs
Others have already answered with the technical details. Let me take a stab at some more, uh, variable items. In a message written on Tue, Apr 19, 2016 at 09:29:12PM -0400, Jean-Francois Mezei wrote: > Also, when you establish a TCP connection, do most stacks have a default > window size that gives the sender enough "patience" to wait long enough > for the ACK ? Your question is phrased backwards. All will wait for the ACK; the timeouts are long (30-120 seconds). The issue is that you only get one window of data per RTT, so if the window is too small, it will choke the connection. 90%+ of the stacks deployed will be too small. Modern Unix generally has "autotuning" TCP stacks, but I don't think Windows or OS X has those features yet (but I'd be very happy to be wrong on that point). Regardless of satellite uplink/downlink speeds, boxes generally need to be tuned to get maximum performance on satellite. > What i am trying to get at here is whether 25/1 on satellite, in real > life with a few apps exchanging data, would actually be able to make use > of the 25 download speed or whether the limited 1mbps upload would choke > the downloads ? With a properly tuned stack, what you're describing is not a problem. 1460 byte payloads down, maybe 64 byte ACKs on the return, and with SACK, which is widely deployed, an ACK every 2-4 packets. You would see about 2,140 packets/sec downstream (25 Mbps / (1460 bytes x 8)), and perhaps send 1,070 ACKs back upstream at 64 bytes each, or about 68 KB/s (~550 Kbps). Well under the 1 Mbps upstream bandwidth. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
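The arithmetic in that last paragraph can be checked with a quick back-of-envelope script; the figures are illustrative, since real stacks vary in ACK frequency and on-wire header sizes:

```python
down_bps = 25_000_000           # 25 Mbps downstream
payload = 1460                  # TCP payload bytes per full-size segment
ack_size = 64                   # bytes on the wire per ACK
acks_per_segment = 0.5          # delayed ACKs: roughly one ACK per two segments

segments_per_sec = down_bps / (payload * 8)         # ~2140 packets/sec
acks_per_sec = segments_per_sec * acks_per_segment  # ~1070 ACKs/sec
ack_bytes_per_sec = acks_per_sec * ack_size         # ~68 KB/s upstream
ack_bps = ack_bytes_per_sec * 8                     # ~550 Kbps upstream

print(round(segments_per_sec), round(acks_per_sec),
      round(ack_bytes_per_sec), round(ack_bps))
```

Either way the ACK stream stays comfortably under a 1 Mbps uplink.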
Re: phone fun, was GeoIP database issues and the real world consequences
In a message written on Fri, Apr 15, 2016 at 09:49:37AM +0100, t...@pelican.org wrote: > Out of curiosity, does anyone have a good pointer to the history of how / why > US mobile ended up in the same numbering plan as fixed-line? The other answers address the history here better than I ever could, but I wanted to point out one example I hadn't seen mentioned. https://en.wikipedia.org/wiki/Area_code_917 917 was originally a mobile-only area code overlay in New York City. For reasons that are unclear to me, after that experiment it was decided that the US would never do that again. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
Re: phone fun, was GeoIP database issues and the real world consequences
In a message written on Thu, Apr 14, 2016 at 12:29:39AM -, John Levine wrote: > The people on nanog are not typical. I looked around for statistics > and didn't find much, but it looks like only a few percent of numbers > are ported each month, and it's often the same numbers being ported > repeatedly. It's a big issue for political pollers, and they have some data: http://www.pewresearch.org/fact-tank/2016/01/05/pew-research-center-will-call-75-cellphones-for-surveys-in-2016/ "roughly half (47%) of U.S. adults whose only phone is a cellphone." "in a recent national poll, 8% of people interviewed by cellphone in California had a phone number from a state other than California. Similarly, of the people called on a cellphone number associated with California, 10% were interviewed in a different state." So maybe 10% of all cell phones are primarily used in the "wrong" area? -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
Re: GeoIP database issues and the real world consequences
In a message written on Mon, Apr 11, 2016 at 03:10:44PM -0400, Sean Donelan wrote: > If GeoIP insists on giving a specific lon/lat, instead of an uncertaintity > how about using locations such as the followign as the "default I don't > know where it is" > > United States: 38.8899 N, 77.0091 W (U.S. Capital Building) > Missouri: 38.5792 N, 92.1729 W (Missouri State Capital Building) > > After the legislators get tired of the police raiding the capital > buildings, they will probably do something to fix it. Massachusetts: 42.376702 N, 71.239076 W (MaxMind Corporate HQ) Maybe after seeing what it's like to be on the receiving end of their own inaccuracy they will be a bit more motivated to fix it. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/ pgp7PpJPfkx2n.pgp Description: PGP signature
Re: /27 the new /24
In a message written on Fri, Oct 02, 2015 at 11:47:31AM -0500, Jason Baugher wrote: > Are you suggesting that the Tier 1 and 2's that I connect to are not > filtering out anything shorter than /24? My expectation is that they are > dropping shorter than /24, just like I am. Not exactly, but it's not what the other poster is implying either. Many providers let a customer multi-home to the provider. That is, they provide two circuits from two different POPs to the customer, and allocate the customer a /27-/29 from the provider's supernet. The customer announces these small blocks back to the provider to get high availability. The provider does not announce them externally, because they are part of the supernet. In Cisco speak:

ip prefix-list my-supernets-small-subnets permit 10.0.0.0/8 ge 24
ip prefix-list my-supernets-small-subnets permit 172.16.0.0/12 ge 24
!
! ...some route-map customer-in stuff...
!
route-map customer-in permit 100
 match ip prefix-list my-supernets-small-subnets
 set community 1234:1234 1234:5678 no-export
!
! ...some route-map customer-in stuff...
!

Yes, many tier 1's will allow longer than /24 _from their customers_ and _out of their supernets_, and will not reannounce them. -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
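What that prefix-list matches can be sketched with Python's stdlib ipaddress module (an illustration of the matching semantics, not the IOS implementation): any prefix inside the supernet whose mask length is 24 or longer, so a customer /27 carved from the supernet passes while the bare /8 or an outside block does not.

```python
import ipaddress

def matches_supernet_small_subnet(prefix, supernet="10.0.0.0/8", ge=24):
    """Emulate 'permit <supernet> ge <ge>': prefix must fall inside the
    supernet and have a mask length of at least ge."""
    p = ipaddress.ip_network(prefix)
    s = ipaddress.ip_network(supernet)
    return p.subnet_of(s) and p.prefixlen >= ge

print(matches_supernet_small_subnet("10.1.2.0/27"))   # True: customer block
print(matches_supernet_small_subnet("10.0.0.0/8"))    # False: mask too short
print(matches_supernet_small_subnet("192.0.2.0/27"))  # False: outside supernet
```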
Re: How to force rapid ipv6 adoption
In a message written on Tue, Sep 29, 2015 at 04:37:19PM -0400, David Hubbard wrote: > Had an idea the other day; we just need someone with a lot of cash > (google, apple, etc) to buy Netflix and then make all new releases > v6-only for the first 48 hours. I bet my lame Brighthouse and Fios > service would be v6-enabled before the end of the following week lol. If only people were forced to deploy IPv6... like perhaps because they couldn't get any more IPv4 addresses. Maybe we should stop issuing IPv4 addresses? (Did I need to put sarcasm tags around that? I hope not!) -- Leo Bicknell - bickn...@ufp.org PGP keys at http://www.ufp.org/~bicknell/
Re: NetFlow - path from Routers to Collector
In a message written on Wed, Sep 02, 2015 at 12:29:25AM +0700, Roland Dobbins wrote: > On 2 Sep 2015, at 0:18, Niels Bakker wrote: > > You're just wrong here. > > Sorry, I'm not. I've seen what happens when flow telemetry is 'squeezed > out' by pipe-filling DDoS attacks, interrupted by fat-fingers, etc. > > It'll happen to you, one day. And then you'll understand. Ah, I see your mistake, you're thinking everyone cares about that problem. They don't. Good, fast, cheap, pick two. You've selected good. Some people pick fast and cheap. They are not wrong, you are not right. Just a different lifestyle choice. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ pgpjRA813DA9_.pgp Description: PGP signature
Re: So Philip Smith / Geoff Huston's CIDR report becomes worth a good hard look today
On Aug 12, 2014, at 1:02 PM, Hank Nussbacher h...@efes.iucc.ac.il wrote: Many don't need to buy anything new. Just follow the instructions here: http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switche$ We did this in the 1st week of June. Problem solved. s/Problem solved/Critical limit pushed out long enough to give us a few more years/ -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: Message signed with OpenPGP using GPGMail
Re: Muni Fiber and Politics
On Aug 4, 2014, at 11:13 AM, William Herrin b...@herrin.us wrote: 1. Enthusiasm (hence funding) for public works projects waxes and wanes. Generally it waxes long enough to get some portion of the original works project built, then it wanes until the project is in major disrepair, then it waxes again long enough to more or less fix it up. Others have hit on the major points, but just to summarize. The big one is the provider is going to charge an O&M fee for the services, be it the dark fiber or an L2 lit service. Fiber cuts will happen, connectors will be broken, and gear will die. One would hope this continuous funding source could even out some of the municipal funding hurdles; done ideally, the network would be built with tax dollars but then be fully self-sustaining going forward. The second one is, if you offer both L1 dark fiber and L2 lit services, this problem solves itself. If the L2 is capped at 1G and the world has moved to 10G, the people who need it will just move to the L1/dark offering and away from the L2 offering. That is, they have an option, and that's what this is all about, options. Unlike telecoms that might choose not to sell the dark fiber to force you into a lit service, such a muni network should be required to sell the dark to all providers all the time. By drawing an (admittedly somewhat arbitrary) boundary between L1/L2 and L3-L7, I think a situation can be created where there is maximum flexibility on both sides of that boundary, and the least chance of stupidity from players on either side. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: Muni Fiber and Politics
On Aug 2, 2014, at 8:10 AM, Vlade Ristevski vrist...@ramapo.edu wrote: I might be misunderstanding this, but are you guys saying 10G Internet access to a tier 1 costs around $6,000 a month? I ask because I run a network for a small college and the best price I could get on 1Gbps Internet is about $5,500 a month with the fiber loop included which itself costs $2000-$2500. Or are you guys discussing a different type connection? The quotes I got were from Cogent, Lightpath, Level 3, Verizon ($8,000) and I think even ATT a few years back. I'm out in the NJ suburbs about 30 miles from Manhattan. If there is a cheaper way to get good bandwidth, I'm all ears. We're in Mahwah, NJ. I think a 10GE for $6,000 in bandwidth charges is possible, if you meet the provider. What that means is if you are in an Equinix, Coresite, Telehouse, or other sort of carrier neutral colocation point, and you're willing to make the cross connect appear at the provider's cage, you can get bandwidth for that price. Basically it's the price when the provider has to do zero other work, already has a large pop, and is selling large wholesale chunks. Add in a local loop, the cost for a smaller pop they have to maintain, engineering and so on, and your price for 1GE 30 miles away from such places seems perfectly reasonable to me. It's kind of the difference between driving your pickup to the quarry to get a truckload of sand vs. buying prepackaged sand at the local home improvement store. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: Muni Fiber and Politics
There are plenty of cities with zero ISP's interested in serving them today, I can't argue that point. However I believe the single largest reason why that is true is that the ISP today has to bear the capital cost of building out the physical plant to serve the customers. 15-20 year ROI's don't work for small businesses or wall street. But if those cities were to build a municipal fiber network like we've described, and pay for it with 15-20 year municipal bonds the ISP's wouldn't have to bear those costs. They could come in drop one box in a central location and start offering service. Which is why I said, if municipalities did this, I am very skeptical there would be more than a handful without a L3 operator. You can imagine a city of 50 people in North Dakota or the Northern Territories might have this issue because the long haul cost to reach the town is so high, but it's going to be a rare case. I firmly believe the municipal fiber networks presence would bring L3 operators to 90-95% of cities. On Aug 2, 2014, at 2:04 PM, Scott Helms khe...@zcorum.com wrote: Happens all the time, which is why I asked Leo about that scenario. There are large swarths of the US and even more in Canada where that's the norm. On Aug 2, 2014 1:29 PM, Owen DeLong o...@delong.com wrote: Such a case is unlikely. On Aug 1, 2014, at 13:32, Scott Helms khe...@zcorum.com wrote: I can never see a case where letting them play at Layer 3 or above helps. That’s bad news, stay away. But I think some well crafted L2 services could actually _expand_ consumer choice. I mean running a dark fiber GigE to supply voice only makes no sense, but a 10M channel on a GPON serving a VoIP box may… Even in those cases where there isn't a layer 3 operator nor a chance for a viable resale of layer 1/2 services. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: Message signed with OpenPGP using GPGMail
Re: Muni Fiber and Politics
On Aug 1, 2014, at 9:44 AM, Owen DeLong o...@delong.com wrote: If you want examples of how well the model you propose tends to work, look no further than the incredible problematic nature of MCI’s attempt to offer local phone service over Pacific Bell/SBC/ATT circuits. [snip] IMHO, experience has taught us that the lines provider (or as I prefer to call them, the Layer 1 infrastructure provider) must be prohibited from playing at the higher layers. Owen has some really good points here, but may be overstating his case a smidge. If a private company is the Layer 1 (“lines provider”) entity, there will always be a temptation to move up the stack, and up the value chain. The issue in his first example is that the companies involved compete for higher layer services. Municipalities can be different. It’s possible to write into law that they can offer L1 and L2 services, but never anything higher. There’s also a built-in disincentive to risk tax dollars on more speculative, but possibly more profitable, ventures. So while I agree with Owen that a dark fiber model is preferred, and should be offered, I don’t have a problem with a municipal network also offering Layer 2. In fact, I see some potential wins; imagine a network where you could choose to buy dark fiber access, or a channel on a GPON system. If the customer wants GE/10GE, they get dark fiber, and if they want 50Mbps, they get a GPON channel for less (yes, that’s an assumption) cost. I can also see how some longer-distance links, imagine a link from home to office across 30-40 miles, might be cheaper to deliver as a 100M VLAN than as raw dark fiber requiring long reach optics. I can never see a case where letting them play at Layer 3 or above helps. That’s bad news, stay away. But I think some well crafted L2 services could actually _expand_ consumer choice. 
I mean running a dark fiber GigE to supply voice only makes no sense, but a 10M channel on a GPON serving a VoIP box may… -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: Muni Fiber and Politics
On Aug 1, 2014, at 3:32 PM, Scott Helms khe...@zcorum.com wrote: Even in those cases where there isn't a layer 3 operator nor a chance for a viable resale of layer 1/2 services. I have a very hard time believing that if a city (no matter what size) had an FTTH deployment, sold on a non-discriminatory basis to any providers, there would ever be zero layer 3 operators. Maybe it’s a corner case that will occur in one small town somewhere where the long haul is crazy expensive to reach, but it’s not a general problem that policy needs to optimize for. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: Muni Fiber and Politics
On Jul 30, 2014, at 1:47 AM, Mark Tinka mark.ti...@seacom.mu wrote: Symmetrical would be tough to do unless you're doing Active- E. I'm an outlier in my thinking, but I believe the best world would be one where the muni offered L1 fiber and leased access to it on a non-discriminatory basis. That would necessitate an Active-E solution, since L1 would not have things like GPON splitters in it, but it enables things like buying a dark fiber pair from your home to your business and lighting it with your own optics. That to me is a huge win. It also means future upgrades are unencumbered. Want to run 10GE? 100GE? 50x100GE WDM? Please do. You leased a dark fiber. If the muni has gear (even just splitters) in the path, they will gatekeep upgrades. It may be a smidge more expensive up front, but in the long run I think it will be cheaper, more reliable, and most importantly hugely more flexible. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: A simple proposal
On May 16, 2014, at 12:26 AM, Matthew Petach mpet...@netflight.com wrote: You want to stream a movie? No problem; the video player opens up a second data port back to a server next to the streaming box; its only purpose is to accept a socket, and send all bits received on it to /dev/null. I can improve on your proposal. Have the player return four copies of the data. Put the eyeball networks out of ratio in the other direction, then charge them using their same logic. Also, congest all their end user uplinks, which are slower of course, as that is congestion in their own network they will have to spend money to fix. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: Message signed with OpenPGP using GPGMail
Re: AOL Mail updates DMARC policy to 'reject'
On Apr 23, 2014, at 12:45 AM, Grant Ridder shortdudey...@gmail.com wrote: Thought i would throw this out there. http://postmaster-blog.aol.com/2014/04/22/aol-mail-updates-dmarc-policy-to-reject/ Curious, I unleashed grep on a couple of mailing lists I operate. I turned up one AOL address. I'm not saying my data is representative of the Internet, but I remember a time when they were 50% of the addresses on my mailing lists. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: DMARC - CERT?
On Apr 14, 2014, at 3:58 PM, Rich Kulawiec r...@gsp.org wrote: As I've said many times, email forgery is not the problem. It's a symptom of the problem, and the problem is rotten underlying security coupled with negligent and incompetent operational practice. But fixing that is hard, and nobody -- not Yahoo and not anybody else either -- wants to tackle it. It's much easier to roll out stuff like this and pretend that it works and write a press release and declare success. I think you're on the right track, but you're still suggesting there is a technical solution. I submit there is not. There is no car alarm that prevents all car thefts, no door lock that prevents all burglaries, no trigger lock that prevents all gun deaths, no lane departure system that prevents all car crashes. Spam cannot, and will never, be solved by technological measures alone. They can help reduce the levels in some cases, or squeeze the balloon and move the spam to some other form. Ultimately the way to reduce spam is to catch spammers, prosecute them, and put them in prison. The way we keep all of those other crimes low is primarily by enforcement; making the punishment not worth the crime. With spam, the chance that a spammer will be punished is infinitesimal. There are hundreds, or thousands, or tens of thousands of spammers for every one that is put in jail. If we'd put even 1% of the effort that's been thrown at technical measures over the years into better laws, tools for law enforcement, and helping them build cases, we'd be several orders of magnitude better off than with technological solutions that are little more than whack-a-mole. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: [ PRIVACY Forum ] Critical crypto bug leaves Linux, hundreds of apps open to eavesdropping
On Mar 4, 2014, at 9:07 PM, Jay Ashworth j...@baylink.com wrote: Is this the *same* bug that just broke in Apple code last week? No, the Apple bug was the existence of an /extra/ goto fail;. The GnuTLS bug was that it was /missing/ a goto fail;. I'm figuring the same developer worked on both, and just put the line in the wrong repository. :) And yes, while this is a joke, Apple fixed their bug by removing a goto fail;, and GnuTLS fixed theirs by adding a goto fail;. I can't make up something that funny. https://www.imperialviolet.org/2014/02/22/applebug.html http://blog.existentialize.com/the-story-of-the-gnutls-bug.html -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: Message signed with OpenPGP using GPGMail
Re: Managing IOS Configuration Snippets
On Feb 27, 2014, at 7:38 PM, Keegan Holley no.s...@comcast.net wrote: Putting aside the fact that snippets aren’t a good way to conceptualize deployed router code, my gut still tells me to question the question here. What I have always wanted is a way to group configuration, in particular by customer. Ideally with the ability to see it both as a unified view and as a per-customer view. For instance:

customer A
 interface GigabitEthernet1/2/3.10
  description A
  ip address 10.0.1.1 255.255.255.0
 router bgp 1
  neighbor 10.0.1.2 prefix-list A-in in
 ip prefix-list A-in 10.1.0.0/24
end

customer B
 interface GigabitEthernet1/2/3.11
  description B
  ip address 10.0.2.1 255.255.255.0
 router bgp 1
  neighbor 10.0.2.2 prefix-list B-in in
 ip prefix-list B-in 10.2.0.0/24
end

Then I should be able to do:

show run - Normal output like we see today, the device view.
customer A show run - Same format as I have above, just config relevant to customer A.

I can even see extending the tag to work with some other commands:

customer A show int
customer A show bgp ipv4 uni sum
customer A show ip prefix-list

The same functionality would work for snippets:

customer ntp-servers-v1.0
 ntp server 1.2.3.4
 ntp server 1.2.3.5
 ntp server 1.2.3.6
end

Basically this follows the two modes in which engineers look at a device. Most of the time you are configuring a specific customer and want to be sure they are configured right, including the hard case of "no customer A", that is, making sure all configuration for a specific customer is removed. The rest of the time is typically troubleshooting a network-level problem where you want the device view we have today: I see interface Gig1/2/3 is dropping packets, show run to see who's configured on it, that sort of operation. I don't know of any platform that has implemented this sort of config framework though. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: Why won't providers source-filter attacks? Simple.
On Feb 5, 2014, at 2:46 AM, Saku Ytti s...@ytti.fi wrote: If we keep thinking this problem as last-mile port problem, it won't be solved in next 20 years. Because lot of those ports really can't do RPF and even if they can do it, they are on autopilot and next change is market forced fork-lift change. Company may not even employ technical personnel, only buy consulting when making changes. It can be solved, but not by NANOG. Imagine if CableLabs required all DOCSIS-compliant cable modems to default to doing source address verification in the next version of DOCSIS? It would (eventually) get rolled out, and it would solve the problem. Even if it doesn't default to on, requiring the hardware to be capable would be a nice step. The consumer last mile is actually simpler, in that there are a few organizations who control the standards. Efforts need to focus on getting the BCP38 stuff into those standards, ideally as mandatory defaults. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
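On the provider edge, the same check can already be enabled per port today; a Cisco-style sketch of strict uRPF (the interface name is illustrative), which drops packets whose source address doesn't route back via the receiving interface:

```
interface GigabitEthernet0/1
 description customer access port (name illustrative)
 ip verify unicast source reachable-via rx
```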
Re: best practice for advertising peering fabric routes
On Jan 15, 2014, at 12:02 AM, Dobbins, Roland rdobb...@arbor.net wrote: Again, folks, this isn't theoretical. When the particular attacks cited in this thread were taking place, I was astonished that the IXP infrastructure routes were even being advertised outside of the IXP network, because of these very issues. I know a lot of people push next-hop-self, and for a large ISP with thousands of BGP customers it's pretty much required to scale. However, a good engineer knows there are drawbacks to next-hop-self; in particular, it slows convergence in a number of situations. There are networks where fast convergence is more important than route scaling, and thus the traditional design, with BGP next-hops being edge interfaces and edge interfaces in the IGP, performs better. By attempting to force IX participants not to put the route in their IGP, those IX participants are collectively deciding on a slower-converging network for everyone. I don't like a world where connecting to an exchange point forces a particular network design on participants. IXPs are not the problem when it comes to breaking PMTU-D. The problem is largely with enterprise networks, and with 'security' vendors who've propagated the myth that simply blocking all ICMP somehow increases 'security'. That's some circular reasoning. Networks won't 9K peer at exchange points for a number of reasons, including PMTU-D issues. Since there is virtually no 9K peering at exchange points, PMTU-D is a non-issue. Maybe if IXP design didn't break PMTU-D it would help attract more 9K peers, or there might even be a future where 9K peering was required? This whole problem smacks to me of exchange points that are too big to fail. Since some of these exchanges are so big, everyone else must bend to their needs. I think the world would be a better place if some of them were broken up into smaller exchanges that imposed fewer restrictions on their participants. 
-- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ signature.asc Description: Message signed with OpenPGP using GPGMail
Re: best practice for advertising peering fabric routes
On Jan 15, 2014, at 8:49 AM, Dobbins, Roland rdobb...@arbor.net wrote: Not really. What I'm saying is that since PMTU-D is already broken on so many endpoint networks - i.e., where traffic originates and where it terminates - that any issues arising from PMTU-D irregularities in IXP networks are trivial by comparison. I think we're looking at two different aspects of the same issue. I believe you're coming at it from a 'for all users of the Internet, what's the chance they have connectivity that does not break PMTU-D' angle. That's an important group to study, particularly for those DSL users still left with 1500 byte MTU's. And you're right, for those users IXP's are the least of their worries; mostly it's poor content-side networking, like load balancers and firewalls that don't work correctly. I am approaching it from a different perspective: 'where is PMTU-D broken for people who want to use 1500-9K frames end to end?' I'm the network guy who wants to buy transit in the US and transit in Germany and run a tunnel of 1500 byte packets end to end, necessitating a ~1540 byte packet. Finding transit providers who will configure jumbo frames is trivial these days, and most backbones are jumbo frame clean (at least to 4470, but many to 9K). There's probably about a 25% chance private peerings are also jumbo clean. Pretty much the only thing broken for this use case is IXP's. Only a few have a second VLAN for 9K peerings, and most participants don't use it, for a host of reasons including PMTU-D problems. I'm an oddball. I think MPLS VPN's are a terrible idea for the consumer, locking them into a single provider in the vast majority of cases. Consumers would be better served by having a tunnel box (IPSec maybe?) at their edge and running their own tunnel over IP provider-independently, if they could get 1500B MTU at the edge and move those packets end to end. 
While I've always thought that, in the post-Snowden world I seem a little less crazy: rather than relying on your provider to keep your VPN traffic secret, customers should be encrypting it in a device they own. But hey, I get why ISP's don't want to offer 9K MTU clean paths end to end. Customers could then buy a VPN appliance and manage their own VPN's with no vendor lock-in. MPLS VPN revenues would tumble, and customers would move more fluidly between providers. That's terrible if you're an ISP. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
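The arithmetic behind that ~1540 byte figure can be sketched as follows. The overhead values here are illustrative assumptions (exact numbers vary by encapsulation mode and options), not figures from the post:

```python
# Rough sketch: outer-packet size needed to carry a full 1500-byte
# inner packet over a tunnel without fragmentation. Overhead values
# are illustrative assumptions, not exact for every encapsulation.
TUNNEL_OVERHEAD = {
    "gre": 24,        # 20-byte outer IPv4 header + 4-byte GRE header
    "ipip": 20,       # 20-byte outer IPv4 header only
    "ipsec-esp": 57,  # outer IP + ESP header/trailer; varies with cipher
}

def required_outer_mtu(inner_mtu: int, tunnel: str) -> int:
    """Smallest path MTU that carries inner_mtu bytes un-fragmented."""
    return inner_mtu + TUNNEL_OVERHEAD[tunnel]

print(required_outer_mtu(1500, "gre"))  # 1524 -- hence "~1540" with margin
```

Any in-between figure works, which is why a 4470 or 9K clean path makes the whole problem disappear.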
Re: best practice for advertising peering fabric routes
On Jan 15, 2014, at 9:37 AM, Dobbins, Roland rdobb...@arbor.net wrote: But what I'm saying is that whether or not they want to use jumbo frames for Internet traffic, it doesn't matter, because PMTU-D is likely to be broken either at the place where the traffic is initiated, the place where the traffic is received, or both - so any nonsense in the middle, especially on IXP networks in particular, isn't really a significant issue in and of itself. Your assertion does not match my deployment experience. When I have deployed endpoints that have working PMTU-D, I have 99.999% success with the ISP's in the middle having working PMTU-D. It even works fine for 9K providers connected to 1500B exchange points, because the packet-too-big typically originates from the input side of the router (the backbone link to the IXP router). Indeed, the only place I've seen it broken is where the ISP peers at 9K at an exchange, and the far end ISP runs a jumbo backbone (like 4470), so the far end IXP-router generates the packet-too-big and originates it from the exchange LAN, which, because it's no longer in the table, fails to pass uRPF. (Business class) ISP's don't break PMTU-D; end users break it with the equipment they connect. So a smart user connecting equipment that is properly configured should be able to expect it to work properly. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
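The failure mode described above can be modeled in a few lines: a strict-mode uRPF check accepts a packet only if a route covering its source exists, so a packet-too-big sourced from an exchange LAN that is absent from the table gets dropped. A simplified sketch with hypothetical documentation prefixes:

```python
import ipaddress

def urpf_accepts(src_ip, routed_prefixes):
    """Strict uRPF sketch: accept only if some route covers the source."""
    src = ipaddress.ip_address(src_ip)
    return any(src in ipaddress.ip_network(p) for p in routed_prefixes)

# Customer's table: no default route, only learned prefixes (examples).
table = ["192.0.2.0/24", "198.51.100.0/24"]

# packet-too-big sourced from an exchange LAN not present in the table:
print(urpf_accepts("203.0.113.7", table))   # False -> PMTU-D black hole
print(urpf_accepts("192.0.2.1", table))     # True -> PMTU-D works
```

This is exactly why announcing the exchange LAN (as suggested in the next message's fix) restores PMTU-D for uRPF-filtering customers.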
Re: best practice for advertising peering fabric routes
On Jan 14, 2014, at 7:55 PM, Eric A Louie elo...@yahoo.com wrote: I have a connection to a peering fabric and I'm not distributing the peering fabric routes into my network. There's a two part problem lurking. Problem #1 is how you handle your internal routing. Most of the big boys will next-hop-self in iBGP all external routes. However depending on the size and configuration of your network there may be advantages to not using next-hop-self, or just putting it in your IGP. Basically, you should be doing the same thing you do for a /30 from a peer or transit provider in your network. There is one thing special about an exchange point though: for security reasons you probably want to add it to your 'never accept' routing filter for peers/customers/transit providers. You don't need someone injecting a couple of more-specifics to mess with your routing. Problem #2 is your customers. If you have customers that may operate default free, and they use one of the traceroute tools that not only finds the route, but then continues to probe it (like MTR, or Visual Traceroute), there can be an issue. The initial traceroute probe may return an IP on the exchange of your peer's router, but then when they subsequently send ICMP pings to that IP there will be no route in their network, and it will simply never respond. Some call this a feature, some call this a problem. There is also an extremely rare problem where the far end of the peering exchange steps down MTU, and thus PMTU discovery is invoked, but your customers use Unicast RPF. Since the exchange LAN isn't in their table, Unicast RPF may drop the PMTU packet-too-big message, causing a timeout. If your customers have a default to you, all is well. However if they have a default to someone else, and take a table from you to selectively override it, the same problem can occur for any routes they select through you that also traverse the exchange. 
IMHO the best fix for #2 is that the exchange have an ASN, and announce the exchange LAN from that ASN, typically via the route server. You should then peer with the route server to pick up that network. That makes the announcement consistent, makes it clear who operates that network, and lets your customers access it. Many exchanges do not do this, and then the next best solution might be to originate it from your ASN and announce it to your customers only, with no-export set on the way out. Various people will no doubt chime in and tell you the last two suggestions are either excellent and wonderful or the worst idea ever. Safe to say I know of networks doing both and the world has not ended. YMMV, some assembly required, batteries not included, actual conditions may affect product performance, do not taunt the happy fun ball, and consult a doctor if your network is up for more than four hours. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
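The 'never accept' filter from problem #1 can be sketched generically. The prefixes below are documentation examples standing in for a real exchange LAN and your own aggregate; the logic is the interesting part:

```python
import ipaddress

# Hypothetical never-accept list: the exchange-point LAN plus our own
# aggregate. Routes matching these should never be accepted from
# peers, customers, or transit providers.
NEVER_ACCEPT = [
    ipaddress.ip_network("203.0.113.0/24"),  # exchange LAN (example)
    ipaddress.ip_network("198.51.100.0/24"), # our aggregate (example)
]

def reject_announcement(prefix):
    """True if the announced prefix equals, or is a more-specific of,
    a never-accept entry -- e.g. someone injecting more-specifics of
    the exchange LAN to mess with our routing."""
    net = ipaddress.ip_network(prefix)
    return any(net.version == blocked.version and net.subnet_of(blocked)
               for blocked in NEVER_ACCEPT)

print(reject_announcement("203.0.113.128/25"))  # True -- more-specific
print(reject_announcement("192.0.2.0/24"))      # False -- unrelated
```

Checking `subnet_of` rather than equality is the point: the exact prefix and any more-specific of it both get dropped.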
Re: best practice for advertising peering fabric routes
On Jan 14, 2014, at 9:35 PM, Patrick W. Gilmore patr...@ianai.net wrote: So Just Don't Do It. Setting next-hop-self is not just for big guys, the crappiest, tiniest router that can do peering at an IXP has the same ability. Use it. Stop putting me and every one of your peers in danger because you are lazy. I'm going to have to disagree here with Patrick, because this is security through obscurity, and that doesn't work well. For some history about why people like Patrick take the position he did, read: http://blog.cloudflare.com/the-ddos-that-almost-broke-the-internet Exchange points got attacked, so people yanked them from the routing table hoping to prevent attacks. If you're on this list it should take you all of about 3 seconds to realize the attackers could do a traceroute, and attack the IP one hop on the far side of the exchange for a few dozen providers and still cause all sorts of havoc, or do any of another half dozen things I won't mention to cause problems. The effect would be nearly, if not perfectly, identical, since that traffic still has to cross the exchange. I'll point out the MTU step-down issue is real, and it's part of why we can't have 9K MTU exchanges be the default on the Internet, which would really make things better for a significant number of users. I think Patrick is a bit quick to dismiss some of the potential issues. Every link on every router is subject to attack. Exchange point LAN's really aren't special in that regard. If anything the only thing that makes them slightly special is that they may in fact be more oversubscribed than most links. Where a backbone might have a router with 20x10GE, so attackers could in theory try and drive 190G out a 10GE, an exchange point may have 100 people with 20x10GE coming in. An alternate view is that mega-exchange points are massively oversubscribed potential single points of failure, and perhaps network operators should consider that. 
While a DDOS taking an exchange down for half a day is bad, imagine if there was a more sinister attack, taking out the physical infrastructure of an exchange. That can't be fixed with a routing advertisement. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Jan 5, 2014, at 11:44 PM, valdis.kletni...@vt.edu wrote: If Joe Home User has a rogue device spewing RA's, he probably has a bigger problem than just not having RA Guard enabled. He either has a badly misconfigured router (and one that's disobeying the mandate to not RA if you don't have an uplink), or he has a compromised malicious host. In either case, he's got bigger fish to fry. 'Mandate' isn't the right description. http://tools.ietf.org/html/rfc6059 There is a ~3 year old _proposed standard_ for the behavior you describe. I have yet to see any compliant equipment at $LocalBigBox, but maybe I'm not purchasing the right gear. So yet again, the response I get to 'RA's are fragile' is 'deploy this brand new band-aid that can't be purchased yet.' Can we just have DHCPv6, please? How many dozens of technologies are we going to invent to try and avoid putting a default route in DHCP? -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Jan 3, 2014, at 7:52 PM, Owen DeLong o...@delong.com wrote: Well… Sure, 15 years after DHCP attacks first started being a serious problem… I doubt it will take anywhere near 15 years for RA guard on by default to be the norm in switches, etc. I count over a dozen ethernet switches in my home that do not have DHCP guard. Indeed, half of them do not have a management interface at all. Even my business class cable modem does not implement DHCP guard on its integrated switch. I also don't know of a single device, from any vendor, that turns DHCP guard on by default. I'd appreciate pointers if there is one. I know a half dozen people sent some form of 'don't do that' when I gave the example of plugging in a rogue router with my corporate scenario. Maybe in a corporate scenario that's plausible; there will be intelligent admins (ha!). What happens when Joe Home User buys a new Linksys and wants to plug it in to get a firmware update before installing it? Are we really supposed to expect that every Joe Homeowner understands RA Guard and configures it for their home network? -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Jan 3, 2014, at 12:30 AM, TJ trej...@gmail.com wrote: I'd argue that while the timing may be different, RA and DHCP attacks are largely the same and are simply variations on a theme. Rogue RA's can take down statically IPv6'ed boxes. Rogue DHCP servers will never affect a statically configured IPv4 box. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: NSA able to compromise Cisco, Juniper, Huawei switches
On Dec 31, 2013, at 8:32 AM, Saku Ytti s...@ytti.fi wrote: I'm going to wait calmly for some of the examples being recovered from the field, documented and analysed. If I were Cisco/Juniper/et al. I would have a team working on this right now. It should be trivial for them to insert code into the routers that, say, hashes all sorts of things (code image, BIOS, any PROMs and EEPROMs and such on the linecards) and submits all of those signatures back. Any APT that has been snuck into those things should be able to be detected. For most of them the signatures should be known, as the code shipped from the factory and was never intended to be modified (e.g. BIOS). A transparent public report about how many devices are running signatures they do not know would be very interesting. Plus, it's an opportunity to sell new equipment to those people, so they can rid themselves of the infection. I also wonder how this will change engineering going forward. Maybe the BIOS should be a ROM chip, not an EEPROM, again. Maybe the write line needs to be run through a physical jumper on the motherboard that is normally not present. Why do we accept that our devices, be it a PC or a router, can be persistently infected? The hardware industry needs to do better. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
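The kind of audit being suggested is easy to sketch: hash each firmware component and compare against factory-known signatures. This is a minimal illustration with hypothetical component names and stand-in image bytes; a real implementation would have to dump the actual flash/ROM contents on the device:

```python
import hashlib

# Factory-known SHA-256 digests per component. The component names and
# "images" here are hypothetical placeholders for real flash dumps.
KNOWN_GOOD = {
    "bios": hashlib.sha256(b"factory bios image").hexdigest(),
    "linecard-eeprom": hashlib.sha256(b"factory eeprom image").hexdigest(),
}

def audit(component, dumped_image):
    """True if the dumped image still matches the factory signature."""
    return hashlib.sha256(dumped_image).hexdigest() == KNOWN_GOOD[component]

print(audit("bios", b"factory bios image"))      # True  -- clean
print(audit("bios", b"factory bios image+APT"))  # False -- tampered
```

The limitation, of course, is that the code doing the hashing must itself be trustworthy, which is why the post goes on to argue for physically write-protected storage.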
Re: turning on comcast v6
On Dec 31, 2013, at 12:36 PM, Tony Hain alh-i...@tndh.net wrote: likely pointless. Do you really believe that dhcp messages picked up by the rogue router wouldn't end up answering with the wrong values and breaking both IPv4 and IPv6? Next, do you really believe that DHCP Guard for an IPv4 aware switch will do anything when an IPv6 DHCP message goes by? Don't you have to replace every switch and reconfigure anyway? Or is rogue DHCP service a problem that goes away with IPv4? Why do people continue to insist that a cornerstone of their network security model is tied to an inherently insecure protocol that was never intended to be used as a security tool? ... but I digress … In the scenario I described, which I suggest is extremely common, the rogue router will not provide any DHCPv4 client with an address, ever. It is configured to forward to a DHCP server, and based on the steps I suggested it has no route to reach it. It will forward the packet out its uplink, which is down, and never get a reply. It is in fact 100% benign. Let me repeat: it's 100% benign, and will not affect any users, ever. Yes, if the router has a local DHCP server there would be an issue. But that's not actually very common; most of the people running DHCP want a central server and the logging that goes along with it. I'll admit my scenario was contrived, constructed to specifically show ONE aspect of the problem. It is not the only operational issue, but it is one that is easy for engineers to understand and replicate. However, it's also common. I know multiple people who shot themselves in the foot in this very way, with operational networks. It's not a theoretical problem, it's one that happens every day. Here's another problem that happens every day in data centers and offices. Someone accidentally bridges two VLAN's together for a few minutes by plugging in a cable to the wrong port, realizing it and then unplugging it. In an IPv4+DHCPv4 world the majority of the time the impact is, well, NONE. 
No hosts notice. Some switches compute a new spanning tree adjacency and some broadcast traffic goes where it shouldn't, but everything remains operational. Do the same with IPv6 and all of the hosts on one of the two VLAN's will instantly stop working. There are controlled environments, like a single tenant data center, where a rogue DHCP server is simply not a security concern, but where a tech accidentally bridging VLAN's is a very real possibility. The fact of the matter is that the scale, scope, and impact of a rogue DHCP server is entirely different from a rogue RA server. IPv4+DHCP is operationally much more robust in the face of various scenarios, whereas IPv6+RA pretty much always results in near instantaneous 100% failure. Unfortunately there have been too many voices demanding a 'one size fits all' approach within the IETF, and we have gotten to the current situation where you need to deploy parts of both models to have a functional network. RFC 6106 is a half-baked concession from the 'dhcp is the only true way' crowd to allow home networks to be functional, but if you want anything more than DNS you have to return to the one-true-way, simply because getting consensus for a more generic dhcp-options container in the RA was not going to happen. The Routing Information DHCP option has been held hostage by those that might be described as a 'dhcp is broken by design' crowd, because many saw that as a bargaining point for consensus around a more feature rich RA. Both hard line positions preventing utility in the other model are wrong, but in the presence of a leadership mantra of one-size-fits-all, neither side was willing to allow complete independent functionality to the other. I think you describe the IETF situation reasonably well. The internal bickering between the RA camp and the DHCP camp has largely prevented either one from producing something robust. 
Making progress on the Routing Information option requires a clear scenario to justify it, because the vast swamp of dhcp options that have ever been used in IPv4 are not brought forward without some current usage case. Lee was asking for that input, and while the scenario you paint below might be helped by that option, it presumes that every device on the network has additional configuration to ignore an errant RA sent from the router being configured, simply because the network is supposed to be using DHCP. This is some extremely circular logic. We can't have DHCP until there are options in devices to ignore RA's. We can't have an option to ignore RA's in devices, because at the moment RA's are the only way to get a default route, so it doesn't make sense. Someone has to go first; the other side will follow. I suggest it makes a lot more sense to get working DHCP before pressuring end-user boxes to have an option to, or even default to, ignoring RA's. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Dec 24, 2013, at 8:15 AM, Lee Howard l...@asgard.org wrote: Why? You say, The protocol suite doesn't meet my needs; I need default gateway in DHCPv6. So the IETF WG must change for you to deploy IPv6. Why? Why must the people who want it justify it to _you_? This is the fundamental part I've not gotten about the IPv6 crowd. IPv4 got to where it is by letting people extend it and develop new protocols and solutions. DHCP did not exist when IPv4 was created; it was tacked on later. Now people want to tack something on to IPv6 to make it more useful to them. Why do they need to explain it to you, if it doesn't affect your deployments at all? Some of us think the model where a DHCP server knows the subnet and hands out a default gateway provides operational advantages. It's an opinion. And the current IPv6 crowd's view that not having a default route and relying on RA's is better is also an opinion. We've spent years of wasted bits and oxygen on ONE STUPID FIELD IN DHCP. Put it in there, and let the market sort it out, unless you can justify some dire harm from doing so. What is more important: fast IPv6 adoption, or belittling people who want to deploy it in some slightly different way than you did? -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Dec 30, 2013, at 3:43 PM, Owen DeLong o...@delong.com wrote: The current situation isn’t attributable to “the current IPv6 crowd” (whoever that is), it’s the current IETF consensus position. Changing that IETF consensus position is a matter of going through the IETF process and getting a new consensus. That requires justifying your position and convincing enough people willing to actively participate in the working group process of that position. Some of us tried to engage the IETF on this topic in multiple working groups. If you search the archives you'll find this topic has come up before. I would charitably describe the environment there as hostile to anyone who has not been inside the IETF machine for the last 15 years. And that's assuming the working group is working; there are plenty inside the IETF that are extremely dysfunctional even when the people on them agree. It's not enough to tell a bunch of enterprise people who have never dealt with the IETF before that they should go there and plead their case. Most do not know how, and the few who try find themselves berated by that community for being ignorant of the way things should be. What the enterprise folks need is IPv6 champions, like yourself, like Lee, to understand their use case, so that even if you don't end up deploying it on your own network you will show up at the IETF, or at least participate on the IETF mailing lists, and help them get what they need, so IPv6 deployment can proceed apace. If you really don't think there is harm, help them go get what they (think they?) need. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Dec 30, 2013, at 2:49 PM, Lee Howard l...@asgard.org wrote: I'm not really an advocate for or against DHCP or RAs. I really just want to understand what feature is missing. I encourage you to try this simple experiment in your lab, because this happens all day long on corporate networks around the world, and illustrates the differences quite nicely. While not a complete treatment of the operational differences, it's an important illustration. Configure up a simple network topology of three boxes representing a hub and spoke network:

                DHCP SVR
                   |
    --lan--r1--wan--hubrtr--wan--r2--lan--

Number it up as expected for both protocols: wan links IPv4 /30 and IPv6 /64; lan links IPv4 /24 and IPv6 /64. Drop a DHCP server off the hubrtr, set r1 and r2 to forward DHCP requests to it. Verify a machine plugged into either of the LANs gets a fully functional network for both protocols. Now, take r1, turn it off, and stick it in a closet. You see, that office closed. Normal everyday thing. Plug up two PC's to the remaining LAN off r2. This represents your desktop, and your neighbor's desktop. Think Bob from accounting perhaps. Open two windows on each, one with an IPv4 ping, one with an IPv6 ping running. Now, boss man comes in and has a new office opening up. Go grab the r1 box out of the closet; you need to upgrade the code and reconfigure it. Cable it up to your PC with a serial port, open some sort of terminal program so you can catch the boot and password-recover it. Plug its ethernet into your lan, as you're going to need to tftp down new config, and turn it on. Here's what you will soon find: 1) The IPv6 pings on both machines cease to work. 2) The IPv4 network continues to work just fine. Now, for even more fun, turn on another PC, say Sally from HR just rebooted her machine. It will come up in the same state, IPv6 not working, while IPv4 works just fine. 
Lastly, for extra credit, explain to Mr MoneyBagsCEO why Bob and Sally are now complaining to him that their network is down, again. Make sure you not only understand the technical nuances of why the problem above happened, but also how to explain them to a totally non-technical CEO. (Oh yeah, go ahead and unplug r1 to fix the problem, and observe how long until the pings start working again. I think it's 15 minutes, IIRC. For super-extra credit tell me how to make that time shorter for everyone on the LAN with Cisco or Juniper commands on r2.) I'll give you a hint on understanding this issue. The IETF's grand solution is to replace every ethernet switch in your entire network with new hardware that supports RA Guard, and then deploy new configuration on every single port of every single device in your network. Please develop a capital justification plan for Mr MoneyBagsCEO for replacing every switch in your network so you can safely deploy IPv6. Now do you understand why people just want to put the route in DHCPv6 and move on? It's not a feature that's different between the two, it's that operationally they have entirely different attack surfaces and failure modes. And yes, it involves people doing stupid things. However if the IETF is going to rely on smart people deploying networking devices we might as well give up and go home now. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
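The difference in blast radius described by the lab experiment can be modeled crudely: every host on the link installs a default router from any RA it hears, so one rogue multicast RA changes every host at once, while statically leased IPv4 defaults are untouched. This is a toy model under those assumptions, not a protocol implementation:

```python
# Toy model of why one rogue RA takes out a whole LAN while the IPv4
# side keeps working. Not a real IPv6 stack; names are hypothetical.
class Host:
    def __init__(self):
        self.v6_default = "fe80::real"   # learned from the real router's RA
        self.v4_default = "192.0.2.1"    # leased earlier from real DHCP

    def hear_ra(self, router):
        # RAs go to the all-nodes multicast group; every host on the
        # link processes them immediately, no lease or renew involved.
        self.v6_default = router

lan = [Host() for _ in range(200)]

# The half-configured r1 box sends a single RA on the wrong LAN:
for host in lan:
    host.hear_ra("fe80::rogue")

broken_v6 = sum(h.v6_default == "fe80::rogue" for h in lan)
broken_v4 = sum(h.v4_default != "192.0.2.1" for h in lan)
print(broken_v6, broken_v4)  # 200 0 -- all of IPv6 breaks, none of IPv4
```

The ~15 minute recovery time in the extra-credit question corresponds to the hosts waiting out the rogue RA's advertised router lifetime.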
Re: turning on comcast v6
On Dec 30, 2013, at 4:37 PM, Victor Kuarsingh vic...@jvknet.com wrote: On Mon, Dec 30, 2013 at 3:49 PM, Lee Howard l...@asgard.org wrote: The better question is are you using RIP or ICMP to set gateways in your network now? I disagree that that's a better question. I'm not using RIP because my hosts don't support it (at least, not without additional configuration), and it would be a very unusual configuration, adding weight and complexity for no benefit. RAs are the opposite. Not even sure how you would use ICMP to set a default gateway. Maybe there's a field I'm unaware of. [VK] The RIP comparison is somewhat confusing to me. I don't see how RIP is comparable in this context (I guess technically you can pass a default route in RIP, but as Lee mentions, the protocol is designed for a different purpose and requires configuration). There was a time, I'm going to guess roughly 1987-1992, although I may be off by a year or two, that you needed to have lived through for this to make sense. You see, in that time the available IGP was, well, RIP. RIPv1. Routers, at least ones you could buy, did not have OSPF, EIGRP, or even in many cases EGP/BGP. They had RIPv1, and perhaps some bleeding edge Cisco's had IGRP. So almost every campus network ran RIPv1. This is also pre-CIDR, so remember every subnet had to have the same mask. For instance the University I went to had a /16, divided entirely into /22 networks for each LAN. The RIP config enabled it for the entire /16. Certain vendors, like Sun (who was popular at the time) shipped SunOS boxes with routed enabled by default, where they received a default route (if the admins filtered) or a full (local) table via RIPv1. In short, there was a time when getting a default route via RIP was in fact common. It was also the time of telnet and rsh, decidedly pre SSL, ssh, or IPSEC. It was also a time when the Internet came under heavy, well, attack, by people who realized how soft and squishy it was. 
Injecting a route into RIP allowed you to hijack rsh sessions, for example. Lots of people who were admins at that time learned through personal pain and late night hacking that sending a dynamic route to a box via an unauthenticated protocol was a recipe for disaster. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Dec 30, 2013, at 6:56 PM, Owen DeLong o...@delong.com wrote: You can accomplish the same thing in IPv4…. Plug in Sally’s PC with Internet Connection Sharing turned on and watch as her DHCP server takes over your network. No, the failure mode is still different. With IPv6 RA's, the rogue router breaks all hosts on the LAN with a single broadcast. With a rogue DHCP server no currently working clients will stop working. In fact many will do directed renews, and never notice said rogue server. It is only a freshly booted host that might be captured by a rogue DHCP server. In a corporate environment the difference between one user getting a rogue DHCP server, being down, and asking for troubleshooting, and taking out an entire department/floor/office is enormous. Yes, you have to pay attention when you plug in a router just like you’d have to pay attention if you plugged in a DHCP server you were getting ready to recycle. Incompetence in execution really isn’t the protocol’s fault. We can't work around incompetent admins. Even the best humans goof from time to time. What we can do is design protocols that are robust, or not, in the face of stupidity and accident. I should tell you about the time rogue RA's took down a data center network because in the middle of the night the tech I was talking to couldn't tell if I said port fifteen or port fifty over the phone, and thus plugged the router into the wrong network, taking down several hundred hosts. The IPv4 side was fine for the 30 seconds or so until we straightened it out. There's a reason why there are huge efforts to put RA guard in switches and do cryptographic RA's. These are two admissions that the status quo does not work for many folks, but for some reason these two solutions get pushed over a simple DHCP router assignment option. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Dec 11, 2013, at 9:11 AM, Eric Oosting eric.oost...@gmail.com wrote: It brings a tear to my eye that it takes: 1) specific, and potentially high end, CPE for the res; 2) specific and custom firmware, unsupported by CPE manufacturer ... or anyone; I think this says more about Randy's specific choice/luck in hardware than the general state of play. Unfortunately in low end CPE land hardware ships with a specific set of software features, and generally there is no (economic) model for the vendors to ever offer new features. People don't buy support for low end CPE. The way to get new software is to buy new hardware, which is really only a good solution when the feature set required is stable over long periods of time. There are plenty of low end residential style boxes that just work with Comcast's setup out of the box with vendor images. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: turning on comcast v6
On Dec 11, 2013, at 1:46 PM, Kinkaid, Kyle kkink...@usgs.gov wrote: I would love to go to NewEgg and get a home router for $50 (or even $100) that is ready to go. http://mydeviceinfo.comcast.net/?homegateway contains devices Comcast has actually tested in their lab, and so they are safer than most. There are devices not on this list that meet your criteria as well. I believe the absolute cheapest at NewEgg is the D-Link DIR 655, which is $63.99 with an 'Extra savings' promo code right now: http://www.newegg.com/Product/Product.aspx?Item=N82E16833127215 -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: ATT UVERSE Native IPv6, a HOWTO
On Dec 2, 2013, at 4:35 PM, Ricky Beam jfb...@gmail.com wrote: DHCPv6-PD isn't a restriction, it's simply what gets handed out today. A simple reconfiguration on the DHCP server and it's handing out /56's instead. (or *allowing* /56's if requested -- it's better to let the customer ask for what they need/want; assuming they just default to asking for the largest block they're allowed and using only 3 networks.) I find it amusing that people want to argue both that: - A /56 is horribly wrong and the world will end if we don't fix it NOW. - Providers could give out more by simply changing a setting on the DHCP server. I would love to know what number of home users need 256 subnets. The good news is that folks doing DHCP-PD will be able to report on how many people request all 256 networks available, and are thus out of subnets. In fact they can make a histogram from 1 to 256 networks per household, showing how many households request each number of subnets. I challenge Comcast, ATT, and others to do just that, and publish it on a regular basis, if only to make people stop talking about this issue. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
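The histogram being challenged for is trivial to produce from PD logs. A sketch with entirely made-up request data (the real numbers are exactly what the post is asking the providers to publish):

```python
from collections import Counter

# Hypothetical log: number of /64s each household's CPE requested via
# DHCPv6-PD, out of the 256 available in a /56. Made-up sample data.
requests = [1, 1, 2, 1, 3, 1, 1, 2, 1, 256, 1, 4, 1, 1, 2]

histogram = Counter(requests)
for n_subnets in sorted(histogram):
    print(f"{n_subnets:3d} subnets: {histogram[n_subnets]} households")

# Households that requested all 256 and are thus out of subnets:
exhausted = histogram.get(256, 0)
print("households out of subnets:", exhausted)
```

Published regularly, a table like this would settle the "who needs 256 subnets" argument with data rather than opinion.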
Re: Reverse DNS RFCs and Recommendations
On Oct 30, 2013, at 11:55 AM, Andrew Sullivan asulli...@dyn.com wrote: As I think I've said before on this list, when we tried to get consensus on that claim in the DNSOP WG at the IETF, we couldn't. Indeed, we couldn't even get consensus on the much more bland statement, Some people rely on the reverse, and you might want to take that into consideration when running your services. It's taking all of my willpower to avoid an IETF rant. :) The SHOULD here is one way. A PTR record should point to a valid forward name that resolves to the same IP address. To quote RFC 1034, a PTR is a pointer to another part of the domain name space. If the RHS of a PTR is not a valid domain name, that's just not true. But rather than descend into a pedantic rant about standards, there's a practical purpose here. Tools that receive IP addresses will generate names using reverse lookups; those names should then work. For instance, if traceroute returns a name, pinging that name should then work. But the opposite is not true. Many forward records may point to the same IP address, and there is no need for reverses to match. (in shorthand)

10.0.0.1 PTR webhosting.foo.com
webhosting.foo.com A 10.0.0.1
www.sitea.com A 10.0.0.1
www.siteb.com A 10.0.0.1
www.sitec.com A 10.0.0.1

-- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
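The one-way rule being argued for — every PTR's target should resolve back to the same address, while extra forward names for the same IP are fine — can be expressed as a check over a record set. This sketch works on an in-memory dict rather than live DNS, using the shorthand records from the post:

```python
# Sketch: verify each PTR's right-hand side is a forward name that
# resolves back to the same IP. In-memory record set, not a resolver.
PTR = {"10.0.0.1": "webhosting.foo.com"}
A = {
    "webhosting.foo.com": "10.0.0.1",
    "www.sitea.com": "10.0.0.1",  # extra forwards to the same IP: fine,
    "www.siteb.com": "10.0.0.1",  # the SHOULD is one way only
    "www.sitec.com": "10.0.0.1",
}

def ptr_round_trips(ip):
    """True if the PTR target exists and forward-resolves to ip."""
    name = PTR.get(ip)
    return name is not None and A.get(name) == ip

print(ptr_round_trips("10.0.0.1"))  # True
```

The check never asks whether every A record has a matching PTR, which is exactly the asymmetry the post describes.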
Re: Pad 1310nm cross-connects?
In a message written on Sat, Oct 19, 2013 at 07:33:19PM -0700, Chris Costa wrote:

> What are the opinions/views on attenuating short, 1310nm LR
> cross-connects? Assume 20m cable length and utilizing the same vendor
> optics on each side of the link. Considering the LR transmit spec
> doesn't exceed the receiver's high threshold value, do you pad the
> receiver closer to the median RX range to avoid potential receiver
> burnout over time, or just leave it un-padded?

With any optics, you need to go to the specifications. I assume here you mean 10GBASE-LR, although I will point out that "LR" is ambiguous, as there is also, for instance, OC192-LR.

I'm going to pick on Juniper specs, just because they were the easiest to find with Google:

http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/reference/specifications/transceiver-m-mx-t-series-10-gigabit-optical-specifications.html

And similarly for 1000BASE-LX, the comparable technology for GigE:

http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/reference/specifications/transceiver-m-mx-t-series-1000base-optical-specifications.html

Note that for both 10GBASE-LR and 1000BASE-LX the transmitter power range is entirely inside the allowed receiver range. They were designed this way on purpose, to never need a pad: an in-spec optic can never overdrive the receiver, even with zero loss. So, answering your question, I would never pad them.

Compare with, for instance, 10GBASE-ER or 1000BASE-EX, the 40km single-mode optics. In both cases an in-spec transmitter can overdrive the receiver. 10GBASE-ER can transmit up to +4.0dBm, while the receiver needs -1.0dBm or below; when connecting them back to back, a 5dB attenuator is required to keep the receiver in spec. For any real connection (over a fiber path less trivial than a jumper) a light meter should be used, the value checked, and an attenuator chosen that places the circuit 1-2dB inside the safe zone of the receiver.
-- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
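The attenuator sizing Leo walks through is simple arithmetic, sketched below. The function name and default margin are invented for illustration; the power figures should always come from the actual optic's datasheet:

```python
def required_pad_db(tx_max_dbm, rx_max_dbm, margin_db=0.0):
    """Minimum attenuation (dB) so a worst-case transmitter cannot
    overdrive the receiver, plus optional extra margin.

    For an optic like 10GBASE-LR whose TX range sits entirely inside
    the RX window, the overshoot is negative and no pad is needed.
    """
    overshoot = tx_max_dbm - rx_max_dbm
    return max(0.0, overshoot + margin_db)

# 10GBASE-ER back to back, per the figures above:
# TX up to +4.0 dBm, RX ceiling -1.0 dBm -> a 5 dB pad is required.
pad = required_pad_db(4.0, -1.0)  # 5.0
```

This only covers the back-to-back case; as the message says, any real fiber path should be measured with a light meter first.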
Re: Policy-based routing is evil? Discuss.
On Oct 11, 2013, at 12:27 PM, William Waites wwai...@tardis.ed.ac.uk wrote:

> I'm having a discussion with a small network in a part of the world
> where bandwidth is scarce and multiple DSL lines are often used for
> upstream links. The topic is policy-based routing, which is being
> described as load balancing where end-user traffic is assigned to a
> line according to source address.

Doing this with actual routing, in a way that doesn't become fragile, is hard. It is not impossible, as Jared points out, but it is non-trivial.

However, there is a variant which is much less brittle, though more annoying to configure with most tools. The idea is that the gateway box is a NAT with an outbound IP on each of the two uplinks. The box can then make intelligent decisions about which provider to use based on layer 8+9 information. I've seen this done multiple times where, for instance, there is high-bandwidth satellite and low-bandwidth terrestrial service. Latency-sensitive traffic (DNS, SSH, etc.) is sent over the low-bandwidth terrestrial link, while bulk downloads go over satellite. It's quite robust and useful in these situations.

Making open source boxes do this is possible, but quite annoying in my experience. I don't think it's possible to make a Cisco or Juniper do this sort of thing in any reasonable way. A number of manufacturers have developed custom solutions around this idea.

-- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
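The decision logic Leo describes can be reduced to a tiny classifier. This is a hypothetical sketch (the port set, link names, and function are invented here): a real gateway would apply this per new flow and NAT it out the chosen uplink's address:

```python
# Latency-sensitive services go to the low-bandwidth terrestrial link;
# everything else, including bulk downloads, rides the satellite.
LATENCY_SENSITIVE_PORTS = {22, 53, 123}  # ssh, dns, ntp (illustrative)

def pick_uplink(dst_port):
    """Return which uplink a new outbound flow should be NATed through."""
    if dst_port in LATENCY_SENSITIVE_PORTS:
        return "terrestrial"
    return "satellite"
```

The point of the NAT variant is exactly that this choice is per-flow and local to the gateway, so no routing protocol has to agree with it.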
Re: iOS 7 update traffic
On Sep 23, 2013, at 8:10 AM, Simon Leinen simon.lei...@switch.ch wrote:

> Not necessarily. I think most of the iOS 7 update traffic WAS in fact
> delivered from CDN servers (in particular Akamai). And many/most large
> service providers already have Akamai servers in their networks. But
> they may not have enough spare capacity for such a sudden demand -
> either in terms of CDN (Akamai) servers or in terms of capacity between
> their CDN servers and their customers.

Apple claims 200 million[1] iOS devices upgraded to version 7 in the past week. A typical download was on the order of 800MB. At the same time, Apple released some other updates, like OS X 10.8.5[2] (275MB) and Xcode 5.0[3] (2GB). They also made the iWork and iLife applications (Pages, Numbers, Keynote, iMovie, and iPhoto) free to download[4] for all new iOS purchasers. Oh, and they sold 9 million iPhone 5s/5c devices[1], most of which needed an update to iOS 7.0.1[5], which was a 1.2GB download.

With all of that going on, the grumbling on NANOG has pretty much been limited to "yeah, we saw a spike," plus a few providers of alternative technologies (like satellite) grousing a bit. I'm not saying the industry can't do better, but I'm finding it hard to describe what happened as anything besides a success for CDNs and most consumer-facing ISPs. I only hope the various CDNs and ISPs study what happened so they can be prepared for the next event, which will no doubt be bigger. We're all in an up-and-to-the-right industry.
1: http://9to5mac.com/2013/09/23/apple-announces-9-million-iphone-sales-over-first-three-days/ 2: http://support.apple.com/kb/DL1675 3: http://9to5mac.com/2013/09/18/xcode-5-0-released-with-ios-7-sdk-64-bit-app-compiler/ 4: http://9to5mac.com/2013/09/10/apple-makes-iwork-apps-iphoto-and-imovie-free-with-all-new-ios-devices/ 5: http://support.apple.com/kb/DL1683 -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
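A quick back-of-envelope on the headline figure in that message (function name and unit convention are mine, using decimal petabytes for a rough estimate):

```python
def aggregate_petabytes(devices, download_mb):
    """Rough total update volume: device count times per-device download.

    Decimal units: 1 PB = 1e9 MB. A coarse estimate only -- it ignores
    retries, deltas, and the other simultaneous releases.
    """
    return devices * download_mb / 1e9

# 200 million devices x ~800 MB each, per the figures above:
total_pb = aggregate_petabytes(200_000_000, 800)  # 160.0 PB
```

On the order of 160 petabytes in a week for the main iOS 7 download alone, before counting Xcode, OS X, and the 7.0.1 follow-up, which puts the "we saw a spike" reaction in perspective.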
Re: common method to count traffic volume on IX
On Sep 17, 2013, at 3:15 PM, Niels Bakker niels=na...@bakker.net wrote:

> I don't know of any IXP that does this. Industry standard is as you and
> others wrote before: the 5-minute counter difference on all
> customer-facing ports, publishing both input and output bps and pps. I
> guess MRTG is to 'blame' for these values more than anything.

Serious question: at an IXP, shouldn't IN = OUT nearly perfectly? Most exchanges do everything possible to eliminate broadcast packets, and they don't allow multicast on the unicast VLANs. So, properly behaved, you have a bunch of routers speaking unicast to each other. The only way to get a difference is if there is packet loss: IN - loss = OUT.

-- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
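The conservation argument (IN - loss = OUT across all member-facing ports) can be checked mechanically. A minimal sketch, assuming a hypothetical list of per-port counter samples; the function name and data shape are invented:

```python
def fabric_loss(port_counters):
    """Estimate loss (or leakage) inside an IX fabric from edge counters.

    `port_counters` is a list of (in_rate, out_rate) pairs, one per
    member-facing port, sampled over the same interval. If every frame
    entering the fabric leaves on some other member port, total IN
    equals total OUT; any positive shortfall is loss in the fabric.
    """
    total_in = sum(i for i, _o in port_counters)
    total_out = sum(o for _i, o in port_counters)
    return total_in - total_out
```

In practice clock skew between counter reads adds noise, so a persistent nonzero result matters more than any single sample.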
Re: common method to count traffic volume on IX
In a message written on Tue, Sep 17, 2013 at 07:11:23PM +0300, Martin T wrote:

> counting traffic on inter-switch links is kind of cheating, isn't it? I
> mean if input bytes and output bytes on all the ports facing the IX
> members are already counted, then counting traffic on links between the
> switches in the fabric will count some of the traffic multiple times.

Sounds like a marketing opportunity.

customer--s1--s2--s3--s4--s5--s6--s7--s8--s9--s10--customer

Presto, highest-volume IX! Maybe I should patent that idea.

-- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: Yahoo is now recycling handles
In a message written on Thu, Sep 05, 2013 at 12:17:28AM -0400, valdis.kletni...@vt.edu wrote:

> On Wed, 04 Sep 2013 20:47:40 -0500, Leo Bicknell said:
> > There's still the much more minor point that when I tried to
> > self-serve I ended up at a blank page on the Yahoo! web site;
> > hopefully they will figure that out as well.
>
> I'm continually amazed at the number of web designers that don't test
> their pages with NoScript enabled. Just sayin'.

While it's a fair guess that I would use NoScript, the failure in question happened with a bone-stock, up-to-date Safari client with JavaScript enabled. No ad-block or other software to interfere.

-- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/