Re: large organization nameservers sending icmp packets to dns servers.
On Wed, Aug 08, 2007 at 03:20:56PM -0700, william(at)elan.net [EMAIL PROTECTED] wrote a message of 23 lines which said:

> How is that an anti-DoS technique when you actually need to return an answer via UDP in order to force the next request via TCP?

Because there is no amplification: the UDP response packet can be very small.
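The mechanism under discussion (replying over UDP with the truncated, TC, bit set so the client retries over TCP) can be sketched in a few lines. This is an illustrative toy, not a real DNS server: the wire format below covers only the header and an echoed question section.

```python
import struct

def truncated_reply(query_id: int, question: bytes) -> bytes:
    """Build a minimal DNS reply with the TC (truncated) bit set.

    A resolver that sees TC=1 is expected to retry over TCP, where the
    three-way handshake proves the source address is not spoofed.
    Illustrative only: real servers set more header flags and copy the
    question section verbatim from the query.
    """
    # Flags: QR=1 (this is a response), TC=1 (truncated) -> 0x8200
    flags = 0x8200
    # qdcount=1; answer/authority/additional counts all zero
    header = struct.pack("!HHHHHH", query_id, flags, 1, 0, 0, 0)
    return header + question  # no answer records: nothing to amplify

# A reply carrying only the header plus the echoed question is no larger
# than the query itself, so the server is useless as an amplifier.
reply = truncated_reply(0x1234, b"\x07example\x03com\x00\x00\x01\x00\x01")
print(len(reply))  # 12-byte header + 17-byte question = 29 bytes
```

The attacker gains nothing: a spoofed query elicits a response smaller than or equal to the query, and the real answer is only delivered once a TCP connection is established.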
Re: [ppml] too many variables
the fib in a heavily peered dfz router does not often converge now. the question is when will the router not be able to process the volume of churn, i.e. fall behind further and further?

as there is non-trivial headroom in the algorithms, moore's law on the processors, etc. etc., your message is as operationally meaningful as dave and john telling us they can handle 2m prefixes today.

randy
Re: too many variables
Yes, a very big "unless". Multi-core processors are already available that would make very large BGP convergence possible. Change the algorithm as well, and perhaps add some multi-threading to it, and it's even better.

-- Leigh Porter

Patrick Giagnocavo wrote:
> On Aug 9, 2007, at 12:21 PM, [EMAIL PROTECTED] wrote:
>> so putting a stake in the ground, BGP will stop working @ around 2,500,000 routes - can't converge... regardless of IPv4 or IPv6. unless the CPUs change or the convergence algorithm changes.
>
> That is a pretty big "unless".
>
> Cordially
> Patrick Giagnocavo [EMAIL PROTECTED]
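As a toy illustration of why per-prefix best-path selection is, in principle, parallelizable: the selection for each prefix is independent of every other prefix, so the work partitions cleanly across cores. The RIB contents, attribute names, and two-step tie-break below are made-up simplifications of the real BGP decision process, and CPython threads won't actually speed up CPU-bound work; the point is the structure, not a measured speedup.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy RIB: prefix -> candidate paths as (local_pref, as_path_len, peer).
# These attributes and values are hypothetical examples.
rib = {
    "10.0.0.0/8":      [(100, 3, "peerA"), (200, 5, "peerB")],
    "192.0.2.0/24":    [(100, 2, "peerA"), (100, 4, "peerC")],
    "198.51.100.0/24": [(50, 1, "peerB"), (100, 1, "peerC")],
}

def best_path(candidates):
    # Simplified tie-break: highest local_pref, then shortest AS path.
    return max(candidates, key=lambda p: (p[0], -p[1]))

def converge(rib, workers=4):
    # Best-path selection is embarrassingly parallel per prefix; the
    # hard, largely serial part in real routers is downloading the
    # results into the FIB hardware.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(lambda kv: (kv[0], best_path(kv[1])),
                           rib.items()))

fib = converge(rib)
print(fib["10.0.0.0/8"][2])  # "peerB": higher local_pref wins
```

Whether this parallelism helps in practice depends on where the real bottleneck sits, which is exactly what the rest of the thread argues about.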
Re: too many variables
On Aug 9, 2007, at 12:09 PM, Leigh Porter wrote:
> Yes, a very big "unless". Multi-core processors are already available that would make very large BGP convergence possible. Change the algorithm as well, and perhaps add some multi-threading to it, and it's even better.

Anyone have a decent pointer to something that covers the current state of the art in algorithms and (silicon) router architecture, and maybe an analysis that shows the reasoning to get from those to realistic estimates of routing table size limits?

Cheers,
Steve
Re: too many variables
On Aug 9, 2007, at 3:47 PM, Tony Li wrote:
> On Aug 9, 2007, at 12:09 PM, Leigh Porter wrote:
>> Yes, a very big "unless". Multi-core processors are already available that would make very large BGP convergence possible. Change the algorithm as well, and perhaps add some multi-threading to it, and it's even better.
>
> Not necessarily. BGP convergence is strongly dependent on memory bandwidth, and multiple cores do not increase that.
>
> Tony

Sun just released the T2 chip: claimed 60GB/s memory bandwidth, on-board 10GbE interface, etc. Pricing is under $1000 for an 8-core chip with 64 threads.

Cordially
Patrick Giagnocavo [EMAIL PROTECTED]
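Tony's memory-bandwidth point can be put in back-of-envelope form: if convergence were purely bandwidth-bound, the floor on a full-table pass is just bytes touched divided by bandwidth. Every number below except the 60GB/s figure quoted above is an assumption, and real convergence is dominated by random-access latency and protocol timers, not streaming bandwidth, so this is a lower bound only.

```python
def convergence_floor_seconds(routes, bytes_per_route,
                              mem_bw_bytes_per_s, passes=1):
    """Lower bound on the time to touch every route `passes` times,
    assuming the walk is purely memory-bandwidth bound.
    The per-route state size is an illustrative guess."""
    return routes * bytes_per_route * passes / mem_bw_bytes_per_s

# 2.5M routes (the stake in the ground above), an assumed ~1 KB of
# RIB state per route, and the claimed 60 GB/s of the T2:
t = convergence_floor_seconds(2_500_000, 1024, 60e9)
print(f"{t * 1000:.1f} ms per full-table pass")  # ~43 ms
```

The gap between a ~43 ms bandwidth floor and real-world convergence measured in minutes suggests bandwidth alone is not the binding constraint, which is consistent with both sides of this exchange.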
Re: [ppml] too many variables
>> the fib in a heavily peered dfz router does not often converge now.
>
> never? or over some predefined period of time?

not often

>> as there is non-trivial headroom in the algorithms,
>
> the BGP algorithm does not change (BGP-5, BGP-6 etc anyone)

algorithm != protocol

randy
Re: Industry best practices (was Re: large organization nameservers sending icmp packets to dns servers)
I can add one more voice to the chorus, not that it will necessarily change anyone's mind. :)

When I was at Yahoo! the question of whether to keep TCP open or not had already been settled, since they had found that if they didn't have it open there was some small percentage of users who could not reach them. Given the large total number of {users|dns requests}/day, even a small percentage was too much to sacrifice. In addition, it was already well-established policy that all RR sets should be kept under the 512-byte limit. I took this a step further and worked (together with others) on a patch to restrict the size of DNS answers to 512 bytes by returning a random selection of any RR set larger than that. Even with all of those precautions, I still measured a non-trivial amount of TCP traffic to our name servers, most of which was for valid requests.

BTW, one of the things that a lot of people don't take into account in this little equation is the fact that the size of the QUERY will affect the size of the response.

So, given this experience, my conclusions (for whatever they are worth) are:

1. You can restrict 53/TCP on an authoritative name server if you want to, but you will lose traffic because of it.
2. Whether this is an acceptable loss or not is a local policy decision, but you should understand the consequences before you act.
3. No matter what your policy is, you cannot guarantee that employees will never make a mistake and create an RR set larger than 512 bytes.
4. You cannot control the behavior of client software out in the world, no matter how much you rant about it.

Others have already brought up the issues of DNSSEC, IPv6, etc., so I won't belabor how important having working TCP _and_ EDNS0 is going to be down the road. And last but not least, the yang of "My network, my rules" has a yin to balance it out: "Be liberal in what you accept."

hth,

Doug

-- If you're never wrong, you're not trying hard enough
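The patch described above, returning a random selection of an oversized RR set so the answer fits in 512 bytes, might look roughly like this. This is a sketch under an assumed fixed per-record wire size, not the actual Yahoo! code; real records vary in size and would need to be measured individually.

```python
import random

DNS_UDP_LIMIT = 512  # classic pre-EDNS0 UDP payload limit

def trim_rrset(records, overhead, limit=DNS_UDP_LIMIT, rr_size=16):
    """Return a random subset of an RR set that fits within `limit`.

    `overhead` is the wire size of the header, question, and other
    sections; `rr_size` is an assumed fixed per-record size. Both are
    simplifications for illustration.
    """
    budget = max(0, limit - overhead)
    keep = min(len(records), budget // rr_size)
    # Random selection (rather than always the first N) spreads load
    # across the full set, much like round-robin rotation.
    return random.sample(records, keep)

# 40 A records at ~16 wire bytes each won't fit after ~100 bytes of
# assumed overhead, so only (512 - 100) // 16 = 25 are returned:
answers = [f"192.0.2.{i}" for i in range(40)]
subset = trim_rrset(answers, overhead=100)
print(len(subset))  # 25
```

Note the side effect the post warns about: clients that genuinely need the full set still have to fall back to TCP or EDNS0, which is why trimming alone didn't eliminate the measured TCP traffic.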
RE: too many variables
I asked this question to a couple of folks: at the current churn rate/ratio, at what size does the FIB need to be before it will not converge? and got these answers:

- jabber log -
a fine question, has been asked many times, and afaik noone has provided any empirically grounded answer. a few realities hinder our ability to answer this question. (1) there are technology factors we can't predict, e.g., moore's law effects on hardware development

Moore's Law is only half of the equation. It is the part that deals with route churn: the rate at which those updates can be processed (both peer notification and control-plane programming of the data plane in the form of FIB changes).

Moore's Law has almost zero relevance to FIB sizes. It doesn't map to growth in SRAM, or to innovations/mechanisms for reducing SRAM requirements while growing FIB sizes.

cheers,

lincoln.
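The question as posed can at least be framed arithmetically: if churn scales roughly linearly with table size, there is a size at which steady-state churn saturates the FIB-update path. Both parameters below are hypothetical placeholders, since, as the jabber log says, nobody has empirically grounded values.

```python
def breakeven_fib_size(updates_per_route_per_s, fib_writes_per_s):
    """FIB size at which steady-state churn saturates the update path,
    under the (unverified) assumption that aggregate churn grows
    linearly with table size. Both inputs are hypothetical."""
    return fib_writes_per_s / updates_per_route_per_s

# e.g. if each route churns on average once per 10,000 seconds and the
# control plane can program 10,000 FIB changes per second:
print(int(breakeven_fib_size(1e-4, 1e4)))  # 100,000,000 routes
```

The fragility of the answer to both inputs (each is a guess spanning orders of magnitude in practice, and churn is bursty, not steady-state) is itself the point the jabber log is making.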
Re: too many variables
Lincoln Dale wrote:
> I asked this question to a couple of folks: at the current churn rate/ratio, at what size does the FIB need to be before it will not converge? and got these answers:
>
> - jabber log -
> a fine question, has been asked many times, and afaik noone has provided any empirically grounded answer. a few realities hinder our ability to answer this question. (1) there are technology factors we can't predict, e.g., moore's law effects on hardware development
>
> Moore's Law is only half of the equation. It is the part that deals with route churn: the rate at which those updates can be processed (both peer notification and control-plane programming of the data plane in the form of FIB changes).

Moore's law just makes an observation that the transistor count feasible for a minimum-cost component doubles every 24 months. It actually says nothing about the performance of those components or their speed.

> Moore's Law has almost zero relevance to FIB sizes. It doesn't map to growth in SRAM, or to innovations/mechanisms for reducing SRAM requirements while growing FIB sizes.

SRAM components are following their own trajectory, and you can fairly easily at this point project how big a CAM you'll be able to buy, and what its power consumption will be, a couple of years out from the products currently in your routers (which are for the most part not state of the art). That said, not all forwarding engines in line cards utilize ternary CAMs or SRAMs, so assumptions that involve SRAM and SRAM-like components being the only game in town for FIB storage are dangerous.
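Projecting CAM capacity "out a couple years" on its own trajectory is simple compound growth. The doubling period below is a made-up placeholder: as the post notes, SRAM/CAM parts follow their own trajectory, so in practice you would fit the period from vendor datasheets rather than borrow Moore's Law's figure.

```python
def project_capacity(current_entries, years, doubling_period_years=3.0):
    """Project CAM/SRAM table capacity assuming it doubles every
    `doubling_period_years` years. The doubling period is a
    hypothetical placeholder, to be fitted from actual datasheets."""
    return int(current_entries * 2 ** (years / doubling_period_years))

# If a shipping TCAM holds 1M IPv4 entries today, then under an
# assumed 3-year doubling period, three years out:
print(project_capacity(1_000_000, years=3.0))  # 2,000,000 entries
```

The caveat in the post still applies: this projects capacity of one storage technology, and forwarding engines that don't use TCAM/SRAM for FIB storage (e.g. trie lookups in commodity DRAM) sit on entirely different cost and capacity curves.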