Re: Is multihoming hard? [was: DNS amplification]
On Sat, 23 Mar 2013 11:28:07 -0700, Owen DeLong said: A reliable cost-effective means for FTL signaling is a hard problem without a known solution. Agreed. An idiot-proof simple BGP configuration is a well known solution. Automating it would be relatively simple if there were the will to do so. As others pointed out, a reliable cost-effective way of automating the layer 8 problems is *also* a hard problem without a known solution.
Re: Is multihoming hard? [was: DNS amplification]
In a message written on Sun, Mar 24, 2013 at 12:54:18PM -0400, John Curran wrote: I believe that the percentage which _expects_ unabridged connectivity today is quite high, but that does not necessarily mean actual _demand_ (i.e. folks who go out and make the necessary arrangements despite the added cost and hassle...) Actually, I think most of the people who care have made alternate arrangements, but I have to back up first. Like most of the US, I live in an area with a pair of providers, BigCable and BigTelco. The reality of those two providers is that from my house they both run down the same backyard. They both go to pedestals next to each other at the edge of the neighborhood. They both ride the same poles down towards the center of town. At some point they finally diverge, to two different central offices. About 80% of the time when one goes out the other does as well. The backhoe digs up both wires. The pole taken out by a car accident takes them both down. Heck, when the power goes out due to a storm, neither has a generator for their pedestal. The other 20% of the time one has an equipment failure and the other does not. Even if I wanted to pay 2x the monthly cost to have both providers active (and could multi-home, etc), it really doesn't create a significantly higher uptime, and thus is economically foolish. However, there is an alternative that shares none of this infrastructure: a cell card. Another option finally available due to higher speeds and better pricing is a satellite service. These provide true redundancy from all the physical infrastructure I described above. It could be argued, then, that the interesting multi-homing case is between my cable modem and my cell card, but even that is not the case. Turns out my cell card has bad latency compared to the cable modem, so I don't want to use it unless I have to, and it also turns out the cell provider charges me for usage, at a modestly high rate, so I don't want to use it unless I have to. 
The result is an active/passive backup configuration. A device like a Cradlepoint can detect the cable modem being down and switch over to the cell card. Sure, incoming connections are not persistent, but outbound it's hard to notice other than performance getting worse. TL;DR People paying for redundancy want physical redundancy, including the last mile. In the US, that exists approximately nowhere for residential users. With no diverse paths to purchase, the discussion of higher level protocol issues is academic. -- Leo Bicknell - bickn...@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/
Re: Is multihoming hard? [was: DNS amplification]
On 3/23/13 9:13 PM, Matt Palmer wrote: On Sat, Mar 23, 2013 at 07:47:12PM -0700, Kyle Creyts wrote: You do realize that there are quite a few people (home broadband subscribers?) who just go do something else when their internet goes down, right? [...] Will they really demand ubiquitous, unabridged connectivity? When? Probably around the time their phone, TV, books, shopping, and *life* are all delivered over that connectivity. Especially if they don't have any meaningful local storage or processing, as everything has been delegated to the cloud. When the cable is down there's the verizon usb stick (which at this point can be plugged into the router and serve the whole house), when verizon is down there's the t-mobile handset. when t-mobile is down there's a workphone with att. When the cable/verizon/t-mobile/att are all down for any significant length of time, I expect to be digging my neighbors out of the sorts of natural disasters that befall California and listening to the radio and maybe 2-meter. In practice, however, I suspect that we as providers will just get a whole lot better at providing connectivity, rather than have everyone work out how to do fully-diverse BGP from their homes. I'm going to be somewhat contrarian: connectivity/availability with cloud services is important; where you access them from, not so much. I doubt very much that reliance on the cloud drives multihoming for end-sites/consumers; it drives a demand for connectivity diversity so that one failure mode doesn't leave you stranded. - Matt
Re: Is multihoming hard? [was: DNS amplification]
On Sat, Mar 23, 2013 at 10:47 PM, Kyle Creyts kyle.cre...@gmail.com wrote: Will they really demand ubiquitous, unabridged connectivity? When? When the older generation that considers the Internet a side show dies off. When your grandparents' power went out, they broke out candles and kerosene lamps. When yours goes out, you pull out flashlights and generators. And when it stays out you book a motel room so your family can have air conditioning and television. For most folks under 30 and many who are older, Internet isn't a side show, it's a way of life. An outage is like a power failure or the car going kaput: a major disruption to life's flow. This need won't be ubiquitous for two to three decades, but every year between now and then the percentage of your customer base which demands unabridged connectivity will grow. What do you have in the pipeline to address that demand as it arrives? BGP multihoming won't get the job done for the hundred million households in North America, let alone the seven billion people in the world. Regards, Bill Herrin -- William D. Herrin her...@dirtside.com b...@herrin.us 3005 Crane Dr. .. Web: http://bill.herrin.us/ Falls Church, VA 22042-3004
Re: Is multihoming hard? [was: DNS amplification]
On Mar 23, 2013, at 7:47 PM, Kyle Creyts kyle.cre...@gmail.com wrote: Will they really demand ubiquitous, unabridged connectivity? Let's back up. End users do not as a rule* have persistent inbound connections. If they have DSL and a Cable Modem they can switch manually (or with a little effort automatically) if one goes down. * Servers-at-home-or-small-office is the use case for Owen's magic BGP box. Which is true for many of us and other core geeks but not an appreciable percent of the populace. I believe that full BGP to end user is less practical for this use case than a geographically dispersed BGP external facing intermediary whose connectivity to the end user servers is full-mesh multi-provider-multi-physical-link VPNs. It's a lot easier to manage and has less chance of a config goof blowing up bigger network neighbors. Every time I look at productizing this, though, the market's too small to support it. Which probably means it's way too small for home BGP... George William Herbert Sent from my iPhone
Re: Is multihoming hard? [was: DNS amplification]
On Mar 24, 2013, at 12:06 PM, William Herrin b...@herrin.us wrote: ... For most folks under 30 and many who are older, Internet isn't a side show, it's a way of life. An outage is like a power failure or the car going kaput: a major disruption to life's flow. Yes, this is increasingly the case (and may not be as generational as you think). This need won't be ubiquitous for two to three decades, but every year between now and then the percentage of your customer base which demands unabridged connectivity will grow. I believe that the percentage which _expects_ unabridged connectivity today is quite high, but that does not necessarily mean actual _demand_ (i.e. folks who go out and make the necessary arrangements despite the added cost and hassle...) The power analogy might be apt here; I know many folks who have a home UPS, a few that have a manual generator, and just one or two who did the entire home automatic UPS/generator combo that's really necessary for 100% reliable power. This reflects a truism: while many people may expect 100% reliability today, the demand (in most areas) simply doesn't match. What do you have in the pipeline to address that demand as it arrives? See above: increasing expectations do not necessarily equate with demand. FYI, /John Disclaimer: My views alone. Sent via less than 100% reliable networks.
Re: Is multihoming hard? [was: DNS amplification]
As an under-30 working in the industry, I have to say, when the power goes out at home for a few days, we pull out the camping gear. When our cable-based internet goes out, our life changes hardly at all. We go for a walk, or hike, do the things we would normally. I can imagine that an outage of 1 week would be slightly different, but I'm pretty sure that most of the outages which would be resolved by multi-provider solutions like those outlined herein would last less than 48 hours. On Sun, Mar 24, 2013 at 9:06 AM, William Herrin b...@herrin.us wrote: On Sat, Mar 23, 2013 at 10:47 PM, Kyle Creyts kyle.cre...@gmail.com wrote: Will they really demand ubiquitous, unabridged connectivity? When? When the older generation that considers the Internet a side show dies off. When your grandparents' power went out, they broke out candles and kerosene lamps. When yours goes out, you pull out flashlights and generators. And when it stays out you book a motel room so your family can have air conditioning and television. For most folks under 30 and many who are older, Internet isn't a side show, it's a way of life. An outage is like a power failure or the car going kaput: a major disruption to life's flow. This need won't be ubiquitous for two to three decades, but every year between now and then the percentage of your customer base which demands unabridged connectivity will grow. What do you have in the pipeline to address that demand as it arrives? BGP multihoming won't get the job done for the hundred million households in North America, let alone the seven billion people in the world. Regards, Bill Herrin -- William D. Herrin her...@dirtside.com b...@herrin.us 3005 Crane Dr. .. Web: http://bill.herrin.us/ Falls Church, VA 22042-3004 -- Kyle Creyts Information Assurance Professional BSidesDetroit Organizer
Re: Is multihoming hard? [was: DNS amplification]
I assume those people will not bother with any attempt to multihome in any form. They are not, therefore, part of what is being discussed here. Owen On Mar 23, 2013, at 19:47, Kyle Creyts kyle.cre...@gmail.com wrote: You do realize that there are quite a few people (home broadband subscribers?) who just go do something else when their internet goes down, right? There are people who don't understand the difference between a site being slow and packet-loss. For many of these people, losing internet service carries zero business impact, and relatively little life impact; they might even realize they have better things to do than watch cat videos or scroll through endless social media feeds. Will they really demand ubiquitous, unabridged connectivity? When? On Mar 23, 2013 12:58 PM, Owen DeLong o...@delong.com wrote: On Mar 23, 2013, at 12:12, Jimmy Hess mysi...@gmail.com wrote: On 3/23/13, Owen DeLong o...@delong.com wrote: A reliable cost-effective means for FTL signaling is a hard problem without a known solution. Faster than light signalling is not merely a hard problem. Special relativity doesn't provide that information may travel faster than the maximum speed C. If you want to signal faster than light, then slow down the light. An idiot-proof simple BGP configuration is a well known solution. Automating it would be relatively simple if there were the will to do so. Logistical problems... if it's a multihomed connection, which of the two or three providers manages it, and gets to blame the other provider(s) when anything goes wrong: or are you gonna rely on the customer to manage it? The box could (pretty easily) be built with a Primary and Secondary port. The cable plugged into the primary port would go to the ISP that sets the configuration. The cable plugged into the other port would go to an ISP expected to accept the announcements of the prefix provided by the ISP on the primary port. 
BFD could be used to illuminate a tri-color LED on the box for each port, which would be green if BFD state is good and red if BFD state is bad. At that point, whichever one is red gets the blame. If they're both green, then traffic is going via the primary and the primary gets the blame. If you absolutely have to troubleshoot which provider is broken, then start by unplugging the secondary. If it doesn't start working in 5 minutes, then clearly there's a problem with the primary regardless of what else is happening. Lather, rinse, repeat for the secondary. Someone might be able to make a protocol that lets this happen, which would need to detect on a per-route basis any performance/connectivity issues, but I would say it's not any known implementation of BGP. A few additional options to DHCP could actually cover it from the primary perspective. For the secondary provider, it's a little more complicated, but could be mostly automated so long as the customer identifies the primary provider and/or provides an LOA for the authorized prefix from the primary to the secondary. The only complexity in the secondary case is properly filtering the announcement of the prefix assigned by the primary. 1. ISPs are actually motivated to prevent customer mobility, not enable it. 2. ISPs are motivated to reduce, not increase the number of multi-homed sites occupying slots in routing tables. This is not some insignificant thing. The ISPs have to maintain routing tables as well; ultimately the ISP's customers are in bad shape, if too many slots are consumed. I never said it was insignificant. I said that solving the multihoming problem in this manner was trivial if there was will to do so. I also said that the above were contributing factors in the lack of will to do so. How about 3. Increased troubleshooting complexity when there are potential issues or complaints. 
I do not buy that it is harder to troubleshoot a basic BGP configuration than a multi-carrier NAT-based solution that goes woefully awry. I'm sorry, I've done the troubleshooting on both scenarios and I have to say that if you think NAT makes this easier, you live in a different world than I do. The concept of a fool proof BGP configuration is clearly a new sort of myth. Not really. Customer router accepts default from primary and secondary providers. So long as default remains, primary is preferred. If primary default goes away, secondary is preferred. Customer box gets prefix (via DHCP-PD or static config or whatever either from primary or from RIR). Advertises prefix to both primary and secondary. All configuration of the BGP sessions is automated within the box other than static configuration of customer prefix (if static is desired).
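The tri-color LED blame scheme Owen sketches in this message reduces to a small state table. A minimal sketch, with the enum and label names being mine rather than anything specified in the thread:

```python
from enum import Enum

class Led(Enum):
    """Per-port indicator driven by BFD session state."""
    GREEN = "green"  # BFD session up
    RED = "red"      # BFD session down

def blame(primary_led: Led, secondary_led: Led) -> str:
    """Whichever port is red gets the blame; if both are green,
    traffic is riding the primary, so the primary gets the blame."""
    if primary_led is Led.RED and secondary_led is Led.RED:
        return "both"
    if primary_led is Led.RED:
        return "primary"
    if secondary_led is Led.RED:
        return "secondary"
    return "primary"  # both green: active traffic is on the primary
```

The appeal of the scheme is exactly this flatness: a support rep (or the customer) can assign first-pass blame from two lights, before anyone has to unplug a cable and retest.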
Re: Is multihoming hard? [was: DNS amplification]
On Mar 22, 2013, at 15:44, valdis.kletni...@vt.edu wrote: On Wed, 20 Mar 2013 15:16:57 -0500, Owen DeLong said: On Mar 20, 2013, at 9:55 AM, Seth Mattinen se...@rollernet.us wrote: Based on the average clue of your average residential subscriber (anyone here need not apply) I'd say that's a good thing. If BGP were plug-and-play automated with settings specified by the provider, what would the user's clue level have to do with it? The hypothetical existence of such a box doesn't change the fact that providers have to make business decisions based on actual boxes and users. Yes, if a plug-n-play idiot-proof BGP box existed, then the profit calculus would be different. On the other hand, if there existed a reliable cost-effective means for faster-than-light signaling, it would drastically change intercontinental peering patterns. All the same, anybody who's planning their interconnects in 2013 reality and not looking at who has 40K km of underwater cable and who doesn't is in for a surprise. There is a difference and you know it. A reliable cost-effective means for FTL signaling is a hard problem without a known solution. An idiot-proof simple BGP configuration is a well known solution. Automating it would be relatively simple if there were the will to do so. However, the reality is that ISPs don't want the solution for a number of reasons: 1. ISPs are actually motivated to prevent customer mobility, not enable it. 2. ISPs are motivated to reduce, not increase, the number of multi-homed sites occupying slots in routing tables. In addition, most of the consumers that could benefit from such a solution do not have enough knowledge to know what they should be demanding from their vendors, so they don't demand it. This is a classic case of the invisible hand only working when all participants have equal access to information and relatively equal knowledge. 
The problem with technical products, and especially with technical services products, is that the vendor consistently has much greater knowledge than the subscriber and is therefore in a position to optimize for the vendor's best interests even when they are counter to the best interests of their customers. Owen
Re: Is multihoming hard? [was: DNS amplification]
On 3/23/13, Owen DeLong o...@delong.com wrote: A reliable cost-effective means for FTL signaling is a hard problem without a known solution. Faster than light signalling is not merely a hard problem. Special relativity doesn't provide that information may travel faster than the maximum speed C. If you want to signal faster than light, then slow down the light. An idiot-proof simple BGP configuration is a well known solution. Automating it would be relatively simple if there were the will to do so. Logistical problems... if it's a multihomed connection, which of the two or three providers manages it, and gets to blame the other provider(s) when anything goes wrong: or are you gonna rely on the customer to manage it? Someone might be able to make a protocol that lets this happen, which would need to detect on a per-route basis any performance/connectivity issues, but I would say it's not any known implementation of BGP. 1. ISPs are actually motivated to prevent customer mobility, not enable it. 2. ISPs are motivated to reduce, not increase the number of multi-homed sites occupying slots in routing tables. This is not some insignificant thing. The ISPs have to maintain routing tables as well; ultimately the ISP's customers are in bad shape, if too many slots are consumed. How about 3. Increased troubleshooting complexity when there are potential issues or complaints. The concept of a fool proof BGP configuration is clearly a new sort of myth: the idea that the protocol on its own, with a very basic config, never requires any additional attention to achieve expected results, where expected results include isolation from any faults with the path from one of the user's two, three, or four providers, and balancing for optimal throughput and best latency/loss to every destination. 
BGP multihoming doesn't prevent users from having issues because: o Connectivity issues that are the responsibility of one of their providers, which they might have expected multihoming to protect them against (latency, packet loss). o Very poor performance of one of their links, or poor performance of one of their links to their favorite destination. o Asymmetric paths, which mean that when latency or loss is poor, the customer doesn't necessarily know which provider to blame, or whether both are at fault, and the providers can spend a lot of time blaming each other. These are all solvable problems, but at cost, and therefore not for mass-market lowest-cost ISP service. It's not as if they can have 'Hello, DSL technical support... did you try shutting off your other peers and retesting?' The average end user won't have a clue -- they will need one of the providers, or someone else, to be managing that for them, and to understand how each provider is connected. I don't see large ISPs training up their support reps for $60/month DSL services to handle BGP troubleshooting and multihoming management/repair. In addition, most of the consumers that could benefit from such a solution do not have enough knowledge to know what they should be demanding from their vendors, so they don't demand it. Owen -- -JH
Re: Is multihoming hard? [was: DNS amplification]
On Fri, Mar 22, 2013 at 6:44 PM, valdis.kletni...@vt.edu wrote: On Wed, 20 Mar 2013 15:16:57 -0500, Owen DeLong said: On Mar 20, 2013, at 9:55 AM, Seth Mattinen se...@rollernet.us wrote: Based on the average clue of your average residential subscriber (anyone here need not apply) I'd say that's a good thing. If BGP were plug-and-play automated with settings specified by the provider, what would the user's clue level have to do with it? The hypothetical existence of such a box doesn't change the fact that providers have to make business decisions based on actual boxes and users. Providers who don't wish to be leap-frogged have to make business decisions about unserved and underserved demand for which they don't already have an effective product. Yes, if a plug-n-play idiot-proof BGP box existed, then the profit calculus would be different. On the other hand, if there existed a reliable cost-effective means for faster-than-light signaling, it would drastically change intercontinental peering patterns. That's not a particularly compelling counterpoint. We have a mechanism for multihoming: BGP. We have a mechanism for flying to the moon: rocket ships. At a strictly technical level, either could be made suitable for use by John Q. Public. In both cases the cost attributable to John Q's desired activity, when using known techniques, greatly exceeds his budget. That having been said, I'd be very interested in your take on how FTL would change intercontinental peering patterns. How would dropping all links to a 0 ms latency change the ways in which we choose to interconnect and why? Regards, Bill Herrin -- William D. Herrin her...@dirtside.com b...@herrin.us 3005 Crane Dr. .. Web: http://bill.herrin.us/ Falls Church, VA 22042-3004
Re: Is multihoming hard? [was: DNS amplification]
On Mar 23, 2013, at 12:12, Jimmy Hess mysi...@gmail.com wrote: On 3/23/13, Owen DeLong o...@delong.com wrote: A reliable cost-effective means for FTL signaling is a hard problem without a known solution. Faster than light signalling is not merely a hard problem. Special relativity doesn't provide that information may travel faster than the maximum speed C. If you want to signal faster than light, then slow down the light. An idiot-proof simple BGP configuration is a well known solution. Automating it would be relatively simple if there were the will to do so. Logistical problems... if it's a multihomed connection, which of the two or three providers manages it, and gets to blame the other provider(s) when anything goes wrong: or are you gonna rely on the customer to manage it? The box could (pretty easily) be built with a Primary and Secondary port. The cable plugged into the primary port would go to the ISP that sets the configuration. The cable plugged into the other port would go to an ISP expected to accept the announcements of the prefix provided by the ISP on the primary port. BFD could be used to illuminate a tri-color LED on the box for each port, which would be green if BFD state is good and red if BFD state is bad. At that point, whichever one is red gets the blame. If they're both green, then traffic is going via the primary and the primary gets the blame. If you absolutely have to troubleshoot which provider is broken, then start by unplugging the secondary. If it doesn't start working in 5 minutes, then clearly there's a problem with the primary regardless of what else is happening. Lather, rinse, repeat for the secondary. Someone might be able to make a protocol that lets this happen, which would need to detect on a per-route basis any performance/connectivity issues, but I would say it's not any known implementation of BGP. A few additional options to DHCP could actually cover it from the primary perspective. 
For the secondary provider, it's a little more complicated, but could be mostly automated so long as the customer identifies the primary provider and/or provides an LOA for the authorized prefix from the primary to the secondary. The only complexity in the secondary case is properly filtering the announcement of the prefix assigned by the primary. 1. ISPs are actually motivated to prevent customer mobility, not enable it. 2. ISPs are motivated to reduce, not increase the number of multi-homed sites occupying slots in routing tables. This is not some insignificant thing. The ISPs have to maintain routing tables as well; ultimately the ISP's customers are in bad shape, if too many slots are consumed. I never said it was insignificant. I said that solving the multihoming problem in this manner was trivial if there was will to do so. I also said that the above were contributing factors in the lack of will to do so. How about 3. Increased troubleshooting complexity when there are potential issues or complaints. I do not buy that it is harder to troubleshoot a basic BGP configuration than a multi-carrier NAT-based solution that goes woefully awry. I'm sorry, I've done the troubleshooting on both scenarios and I have to say that if you think NAT makes this easier, you live in a different world than I do. The concept of a fool proof BGP configuration is clearly a new sort of myth. Not really. Customer router accepts default from primary and secondary providers. So long as default remains, primary is preferred. If primary default goes away, secondary is preferred. Customer box gets prefix (via DHCP-PD or static config or whatever either from primary or from RIR). Advertises prefix to both primary and secondary. All configuration of the BGP sessions is automated within the box other than static configuration of customer prefix (if static is desired). Primary/Secondary choice is made by plugging providers into the Primary or Secondary port on the box. 
The idea that the protocol on its own, with a very basic config, does not ever require any additional attention, to achieve expected results; where expected results include isolation from any faults with the path from one of of the user's two, three, or four providers, and balancing for optimal throughput and best latency/loss to every destination. I have installed these configurations at customer sites for several of my consulting clients that wanted to multihome their SMBs. Some of them have been running for more than 8 years without a single issue. For all of the above requirements, no. You can't do that with the most advanced manual BGP configurations today. However, if we reduce it to: 1. The internet connection stays up so long as one of the two providers is up. 2. Traffic prefers the primary provider so long as the primary provider is up. 3. My addressing remains stable so long as I remain
Re: Is multihoming hard? [was: DNS amplification]
You do realize that there are quite a few people (home broadband subscribers?) who just go do something else when their internet goes down, right? There are people who don't understand the difference between a site being slow and packet-loss. For many of these people, losing internet service carries zero business impact, and relatively little life impact; they might even realize they have better things to do than watch cat videos or scroll through endless social media feeds. Will they really demand ubiquitous, unabridged connectivity? When? On Mar 23, 2013 12:58 PM, Owen DeLong o...@delong.com wrote: On Mar 23, 2013, at 12:12, Jimmy Hess mysi...@gmail.com wrote: On 3/23/13, Owen DeLong o...@delong.com wrote: A reliable cost-effective means for FTL signaling is a hard problem without a known solution. Faster than light signalling is not merely a hard problem. Special relativity doesn't provide that information may travel faster than the maximum speed C. If you want to signal faster than light, then slow down the light. An idiot-proof simple BGP configuration is a well known solution. Automating it would be relatively simple if there were the will to do so. Logistical problems... if it's a multihomed connection, which of the two or three providers manages it, and gets to blame the other provider(s) when anything goes wrong: or are you gonna rely on the customer to manage it? The box could (pretty easily) be built with a Primary and Secondary port. The cable plugged into the primary port would go to the ISP that sets the configuration. The cable plugged into the other port would go to an ISP expected to accept the announcements of the prefix provided by the ISP on the primary port. BFD could be used to illuminate a tri-color LED on the box for each port, which would be green if BFD state is good and red if BFD state is bad. At that point, whichever one is red gets the blame. If they're both green, then traffic is going via the primary and the primary gets the blame. 
If you absolutely have to troubleshoot which provider is broken, then start by unplugging the secondary. If it doesn't start working in 5 minutes, then clearly there's a problem with the primary regardless of what else is happening. Lather, rinse, repeat for the secondary. Someone might be able to make a protocol that lets this happen, which would need to detect on a per-route basis any performance/connectivity issues, but I would say it's not any known implementation of BGP. A few additional options to DHCP could actually cover it from the primary perspective. For the secondary provider, it's a little more complicated, but could be mostly automated so long as the customer identifies the primary provider and/or provides an LOA for the authorized prefix from the primary to the secondary. The only complexity in the secondary case is properly filtering the announcement of the prefix assigned by the primary. 1. ISPs are actually motivated to prevent customer mobility, not enable it. 2. ISPs are motivated to reduce, not increase the number of multi-homed sites occupying slots in routing tables. This is not some insignificant thing. The ISPs have to maintain routing tables as well; ultimately the ISP's customers are in bad shape, if too many slots are consumed. I never said it was insignificant. I said that solving the multihoming problem in this manner was trivial if there was will to do so. I also said that the above were contributing factors in the lack of will to do so. How about 3. Increased troubleshooting complexity when there are potential issues or complaints. I do not buy that it is harder to troubleshoot a basic BGP configuration than a multi-carrier NAT-based solution that goes woefully awry. I'm sorry, I've done the troubleshooting on both scenarios and I have to say that if you think NAT makes this easier, you live in a different world than I do. The concept of a fool proof BGP configuration is clearly a new sort of myth. Not really. 
Customer router accepts default from primary and secondary providers. So long as default remains, primary is preferred. If primary default goes away, secondary is preferred. Customer box gets prefix (via DHCP-PD or static config or whatever either from primary or from RIR). Advertises prefix to both primary and secondary. All configuration of the BGP sessions is automated within the box other than static configuration of customer prefix (if static is desired). Primary/Secondary choice is made by plugging providers into the Primary or Secondary port on the box. The idea that the protocol on its own, with a very basic config, does not ever require any additional attention, to achieve expected results; where expected results include isolation from any faults with the path from one of the user's two, three, or four providers,
Re: Is multihoming hard? [was: DNS amplification]
On Wed, 20 Mar 2013 15:16:57 -0500, Owen DeLong said: On Mar 20, 2013, at 9:55 AM, Seth Mattinen se...@rollernet.us wrote: Based on the average clue of your average residential subscriber (anyone here need not apply) I'd say that's a good thing. If BGP were plug-and-play automated with settings specified by the provider, what would the user's clue level have to do with it? The hypothetical existence of such a box doesn't change the fact that providers have to make business decisions based on actual boxes and users. Yes, if a plug-n-play idiot-proof BGP box existed, then the profit calculus would be different. On the other hand, if there existed a reliable cost-effective means for faster-than-light signaling, it would drastically change intercontinental peering patterns. All the same, anybody who's planning their interconnects in 2013 reality and not looking at who has 40K km of underwater cable and who doesn't is in for a surprise.
Re: Is multihoming hard? [was: DNS amplification]
On 3/20/13, John Curran jcur...@istaff.org wrote: On Mar 20, 2013, at 2:25 PM, Owen DeLong o...@delong.com wrote: However, if there were motivation on the provider side, automated BGP configuration could enable consumers to attach to multiple providers and actually reduce support calls significantly. Do you really think a large SP making their customers' configuration more complicated, using a protocol at a scale its implementations were never designed for, will amount to a reduction in net average support costs? See... I think there is an economic argument to be made against massive multihoming; an upward-sloping supply curve situation: ultimately, slots in the global routing table are a competitive market. Providing service to a network that wants to be multihomed could be expected to incur a greater marginal price on the provider (additional overhead to implement, maintain, and service the more complicated service). If that added price tag exceeds the amount at which the customer values their marginal benefit from multihoming, then requiring multihoming hurts the provider, because a lesser quantity is purchased, and hurts the customer, because their increased payment in excess of the benefit is added cost. The more multihomed customers, the more routes, the greater the marginal cost of adding every BGP router, the greater the cost of every route advertised, which you could speculate the tier 1s will ultimately be passing on to service providers, and then the customers, in due time. The increased price tags reduce the quantity of services purchased. If you can figure out a way to persuade service providers of this belief, I would ask that you also persuade them that they have to offer dual-stack for all of their customers (which may have already resulted in them losing a small number of customers who expected IPv6 by now... :-) Until people are actually using dual-stack services, the current perceived benefit is $0, so it's really a tough argument to make. 
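The pricing argument above can be made concrete with toy numbers: each customer buys multihoming only if their valuation of the redundancy meets the premium the provider must charge, so raising the premium shrinks the quantity purchased. All figures below are illustrative assumptions, not data from the thread.

```python
def quantity_purchased(valuations: list[float], premium: float) -> int:
    """Customers whose valuation of multihoming meets the monthly premium
    buy it; a higher premium prices out more of the market."""
    return sum(1 for v in valuations if v >= premium)

# Toy monthly valuations in dollars (assumed): five prospective customers.
valuations = [5.0, 10.0, 15.0, 25.0, 40.0]
```

For example, at a $10 premium four of the five toy customers purchase, while at $30 only one does, which is the sense in which the added marginal cost of carrying more routes reduces the quantity of multihomed service sold.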
You have to rely on the prediction that, within a few years, dual-stack services will provide the added benefit of full internet reachability, and ipv4-only services will have significant impairments. Thanks! /John -- -JH