Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, Iljitsch van Beijnum wrote:
> Ok, but how do you generate megabits worth of traffic for which there is no return traffic? At some level, someone or something must be trying to do something _really hard_ but keep failing every time. It just doesn't make sense.
I could show you VOLUMES of name server logs of people doing things that could never possibly succeed, over and over and over again. My favorite is the people who try to use my authoritative name servers as resolvers. No one at my company can recall a time that our auth. name servers EVER allowed recursion. My point is simply that we shouldn't underestimate the stupidity of the masses, and anything that can be done to improve things should be. Of course, the problem in this thread is the varying definitions of "improve". Doug
Re: Who does source address validation? (was Re: what's that smell?)
On Thu, 10 Oct 2002, Greg A. Woods wrote: [ On Thursday, October 10, 2002 at 11:53:18 (-0400), Richard A Steenbergen wrote: ] Subject: Re: Who does source address validation? (was Re: what's that smell?)
> > I'm sure we can all agree on at least the concept that sourcing packets from an address which cannot receive a reply is at least potentially useful, for example to avoid DoS against a critical piece of infrastructure. Would it make people feel better if there was a specific separate non-routed address space reserved for router generated messages which don't want replies? Why?
> Why not just use 127.0.0.1?!?!?!?!?
And that's different from rfc1918 because?
Re: Who does source address validation? (was Re: what's that smell?)
On Thu, Oct 10, 2002 at 01:06:15AM -0400, [EMAIL PROTECTED] wrote:
> On Wed, 09 Oct 2002 23:05:59 BST, Stephen J. Wilcox said:
> > On a related issue (pMTU) I recently discovered that using a link with MTU < 1500 breaks a massive chunk of the net - specifically mail and webservers who block all inbound icmp.. the servers assume 1500, send out the packets with DF
> My personal pet peeve is the opposite - we'll try to use pMTU, some provider along the way sees fit to run it through a tunnel, so the MTU there is 1460 instead of 1500 - and the chuckleheads number the tunnel endpoints out of 1918 space - so the 'ICMP Frag Needed' gets tossed at our border routers, because we do both ingress and egress filtering. It's bad enough when all the interfaces on the offending unit are 1918-space, but it's really annoying when the critter has perfectly good non-1918 addresses it could use as the source... Argh...
Ok, I know how this manages to rile people up, but might I suggest that you brought it upon yourself? There is a time and a place for messages sourced from addresses to which you cannot reply, and a time and place where those messages should not exist. Obviously, a dns *QUERY* is not the place for a message which cannot be returned. But what about an ICMP *RESPONSE*? Nothing depends upon the source address of the IP header for operation; the original headers which caused the problem are encoded in the ICMP message. And yet people are so busy concerning themselves with this mythical thing which might break from receiving ICMP overlapping existing internal 1918 space, the extra 0.4% of bandwidth which might be wasted, and the righteous feeling that they have done something useful, that they don't stop to realize *THEY* are the ones breaking PMTU-D. I'm sure we can all agree on at least the concept that sourcing packets from an address which cannot receive a reply is at least potentially useful, for example to avoid DoS against a critical piece of infrastructure.
Would it make people feel better if there was a specific separate non-routed address space reserved for router-generated messages which don't want replies? Why? Even Windows 2000+ includes blackhole detection which will eventually remove the DF bit if packets aren't getting through and ICMP messages aren't coming back, something many unixes lack. But the heart of the problem is that people still push packets as if every one must include the maximum data the MTU can support. Do we have any idea how much network suffering is being caused by that damn 1500 number right now? Aside from the fact that it is one of the worst numbers possible for the data, it throws a major monkey wrench in the use of tunnels, pppoe, etc. Eventually we will realize the way to go is something like 4096 data octets, plus some room for headers, on a 4470 MTU link. But if the best reason we can come up with is ISIS, the IEEE will just keep laughing. /rant -- Richard A Steenbergen [EMAIL PROTECTED] http://www.e-gerbil.net/ras PGP Key ID: 0x138EA177 (67 29 D7 BC E8 18 3E DA B2 46 B3 D8 14 36 FE B6)
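To put rough numbers on the 1500-byte complaint in the message above, here is a small sketch (not from the original thread) of how much TCP payload survives a nominal 1500-byte path under a couple of common encapsulations. The header sizes are the standard ones; the stacking scenarios are invented examples.

```python
# Illustration of how tunnels shrink the usable TCP payload (MSS)
# on a nominal 1500-byte path. Header sizes in octets: IPv4 = 20,
# TCP = 20, GRE = 4, PPPoE = 8.

IPV4, TCP, GRE, PPPOE = 20, 20, 4, 8

def mss(link_mtu, *tunnel_overheads):
    """TCP payload per packet after the tunnel overheads plus the
    inner IPv4 and TCP headers are subtracted."""
    return link_mtu - sum(tunnel_overheads) - IPV4 - TCP

print(mss(1500))              # plain ethernet path: 1460
print(mss(1500, IPV4, GRE))   # GRE tunnel in the path: 1436
print(mss(1500, PPPOE))       # PPPoE access circuit: 1452
```

A host that assumes a 1460-byte MSS will black-hole on the tunneled paths unless the 'frag needed' ICMP actually makes it back.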
Re: Who does source address validation? (was Re: what's that smell?)
On Thu, 10 Oct 2002, Richard A Steenbergen wrote:
> Even Windows 2000+ includes blackhole detection which will eventually remove the DF bit if packets aren't getting through and ICMP messages aren't coming back, something many unixes lack.
Wow, now I'm impressed. And what about the 1999 other versions of Windows? This is hardly a new problem. Still, it's good that some people at least make progress, even if very slowly.
> But the heart of the problem is that people still push packets like every one must include the maximum data the MTU can support.
And why not?
> Do we have any idea how much network suffering is being caused by that damn 1500 number right now? Aside from the fact that it is one of the worst numbers possible for the data, it throws a major monkey wrench in the use of tunnels, pppoe, etc.
So don't use those.
> Eventually we will realize the way to go is something like 4096 data octets, plus some room for headers, on a 4470 MTU link.
So what then if someone runs a secure tunnel over wireless over PPPoE over ADSL using mobile IPv6 that runs over a tunnel or two ad nauseam until the headers get bigger than 374 bytes? Then you'll have your problem right back. Might as well really solve it the first try. One of the problems is that there is no generally agreed on and widely available set of rules for this stuff. Setting the DF bit on all packets isn't good, but it works. Using RFC1918 space to number your tunnel routers isn't good, but it works. Filtering/validating source addresses on ingress is good, but hey, it doesn't work! Making a good list of best practices (and then having people widely implement them) might also go a long way towards showing concerned parties such as the US administration that the network community consists of responsible people who can work together for the common good.
> But if the best reason we can come up with is ISIS, the IEEE will just keep laughing.
Why is the IEEE laughing?
Re: Who does source address validation? (was Re: what's that smell?)
On Thu, Oct 10, 2002 at 06:36:33PM +0200, Iljitsch van Beijnum wrote:
> So what then if someone runs a secure tunnel over wireless over PPPoE over ADSL using mobile IPv6 that runs over a tunnel or two ad nauseam until the headers get bigger than 374 bytes? Then you'll have your problem right back. Might as well really solve it the first try.
This is a problem that would be solved by everyone being responsible and doing pmtud properly.
> One of the problems is that there is no generally agreed on and widely available set of rules for this stuff. Setting the DF bit on all packets isn't good, but it works. Using RFC1918 space to number your tunnel routers isn't good, but it works. Filtering/validating source addresses on ingress is good, but hey, it doesn't work!
I think we're starting to get at the heart of the problem, but let me stick my neck out and say it: registries (APNIC, ARIN, RIPE, usw.) charge for IP addresses. Be it via a lease or registration fee, it's a per-IP charge that ISPs must recover by some means from their subscribers (unless people don't care about money, that is). Back in the day, one could obtain IP addresses from InterNIC saying "I will not connect to the internet", "I intend to connect at some later date in a year or two" (or similar), or "I intend to connect now". People number out of 1918 space primarily for a few reasons, be they good or not: 1) internal use; 2) cost involved - nobody else needs to telnet to my p2p links but me, and I don't want to pay {regional_rir} for my internal use, to reduce costs; 3) the "security" of not being a publicly accessible network. This can break many things: pmtu, multicast, and various streaming (multi)media applications. With the past scare of "we'll be out of IP addresses by 199x" still fresh in some people's memories, they in good conscience decided to also conserve IPs via this method.
The problem is that not everyone today who considers themselves a network operator understands all the ramifications of their current practices, be they good or bad. Going into fantasy-land mode: if IPv6 addresses were instantly used by everyone, people could once again obtain IPs that could be used for internal private use yet remain globally unique, therefore allowing tracking back of who is leaking their own internal sources.
> Making a good list of best practices (and then have people widely implement them) might also go a long way towards showing concerned parties such as the US administration that the network community consists of responsible people that can work together for the common good.
I agree here. I personally think that numbering your internal links out of 1918 space is not an acceptable solution unless it's behind your natted network/firewall and does not leak out. Perhaps some of those that are the better/brighter out there want to start to write up a list of networking best practices, then test those book-smart ccie/cne types with the information to ensure they understand the ramifications. A few good whitepapers about these might be good to include or quiz folks on. I suspect there's only a handful of people that actually understand the complete end-to-end problem and all the ramifications involved, as it is quite complicated.
> > But if the best reason we can come up with is ISIS, the IEEE will just keep laughing.
> Why is the IEEE laughing?
The implication is that the IEEE will not change the 802.x specs to allow a larger [default] link-local MTU due to legacy interop issues. Imagine your circa-1989 ne2000 card attempting to process a 4400-byte frame on your local lan. A lot of the cheap ethernet cards don't include enough buffering to handle such a large frame, let alone the legacy issues involved.. and remember the enterprise networks have a far larger number of ethernet interfaces deployed than the entire internet combined * 100 at least.
any change to the spec would obviously affect them also. - jared -- Jared Mauch | pgp key available via finger from [EMAIL PROTECTED] clue++; | http://puck.nether.net/~jared/ My statements are only mine.
Re: Who does source address validation? (was Re: what's that smell?)
[EMAIL PROTECTED] (Sean Donelan) writes:
> If there is a magic solution, I would love to hear about it. Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the exceptions needed to do 100% source address validation.
I had no idea this was so hard. I guess the people who maintain AS3557 (or AS6461 for that matter) do such a good job of making this _look_ easy that I just naturally thought it _was_ easy. Forgive my simple-minded approach, if it really is simple-minded, but... any given interface or peering session or whatever is either customer facing, peer/transit facing, or a trunk which leads ultimately to more customer AND more peer/transit facing interfaces elsewhere in the network. On customer-facing connections, there's a short list of things they should be allowed to use as IP source addresses. (They might be multihomed, but chances are low that you want them giving transit to other parts of the network through you, no matter whether you do usage-sensitive billing or not.) On transit/peer-facing connections, there's a short list of things they should NOT be allowed to send from (your own customers, chiefly) and a short list of things you should NOT be allowed to send them from (RFC1918 being the big example.) Because F-root's network operator was filtering out inbound RFC1918-sourced packets, I could only see them at C-root. Now, F-root can also see them, so I can once again collect stats from (and complain about stats from) both. RFC1918 routes are allowed to float around inside AS3557, by the way, since customers use them for VPN purposes. So we don't filter out ingress 1918 from customer-facing interfaces; instead we filter out egress 1918 toward our peers/transits. Like I said, I had no idea this was generally thought to be so complicated. -- Paul Vixie
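The split Vixie describes (a permit list on customer-facing interfaces, an egress deny list toward peers/transits) could be sketched in IOS-style configuration roughly as below. This is not AS3557's actual configuration; the interface names, prefixes, and ACL names are invented for illustration.

```
! Customer-facing interface: permit only the source prefixes this
! customer is known to hold (198.51.100.0/24 is an invented example).
ip access-list extended FROM-CUSTOMER-X
 permit ip 198.51.100.0 0.0.0.255 any
 deny   ip any any
!
! Peer/transit-facing interface, outbound: never leak RFC1918 sources
! (internal 1918 routes can still float around inside the AS).
ip access-list extended TO-PEER
 deny   ip 10.0.0.0 0.255.255.255 any
 deny   ip 172.16.0.0 0.15.255.255 any
 deny   ip 192.168.0.0 0.0.255.255 any
 permit ip any any
!
interface Serial0/0
 ip access-group FROM-CUSTOMER-X in
!
interface POS1/0
 ip access-group TO-PEER out
```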
Re: Who does source address validation? (was Re: what's that smell?)
[EMAIL PROTECTED] (Sean Donelan) writes:
> If c.root-servers.net provider did this, they wouldn't see any RFC1918 traffic because it would be dropped at their provider's border routers.
Right. But then I wouldn't be able to measure it, which would be bad.
> If c.root-servers.net provider's peer did this, again c.root-servers.net provider wouldn't see the rfc1918 packets.
This is the single case where not being able to measure/complain would be OK, because the problem wouldn't be in the core; it would be (correctly) stopped at the source-AS.
> So why doesn't c.root-servers.net provider or its peers implement this simple solution? It's not a rhetorical question. If it was so simple, I assume they would have done it already.
C-root's provider is also C-root's owner, and they have offered to shut this traffic off further upstream, as F-root's network operators were doing until yesterday, but I asked that it not be filtered anywhere except C-root itself (where I can measure it) or distant source-ASes (which is where it makes sense.) -- Paul Vixie
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, John M. Brown wrote:
> Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918 sourced packets. The percentage ranges scale with the size of the provider. Smaller providers, less impact; larger providers, more impact. In addition to the bandwidth savings, there is also a support cost reduction, and together, I believe backbone providers can see this on the bottom line of their balance sheets.
Testing a couple of years ago on a widely used router vendor's implementation of uRPF showed, in certain pathological cases, a 50% throughput hit when uRPF was turned on. Even a single-line access list "permit ip any any" had a throughput hit on certain platforms. http://www.nc-itec.org/archive/URPF/Unicast%20RPF%20Test%20Results%20Summary%20-%20performance%20assessment%20v0.2.pdf Whether this is still true, the legend lives on. A 20% throughput hit won't be offset by a 12 to 18 percent bandwidth savings, especially on heavily loaded circuits. Some network engineers are reluctant to do any type of packet filtering (uRPF or ACL based) because of the belief it will hurt performance (latency, throughput, etc). While I think it's a good idea, and generally do it on any network I design from scratch, so far you really haven't given me much ammo to convince people to change what is already working for them. Going back to the IBM/Amdahl mainframe days, the traditional requirement to get people to change was that it needed to be 30% cheaper or 30% better. Anything less, and it usually wasn't worth the effort of making the change, especially if the current system didn't have a visible problem.
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, Greg A. Woods wrote: [ On Tuesday, October 8, 2002 at 22:34:51 (+0100), Stephen J. Wilcox wrote: ] Subject: Re: Who does source address validation? (was Re: what's that smell?)
> > So I guess you may argue block RFC1918 tcp inbound, but icmp and udp .. you start to break things; perhaps that is why large providers don't do this on backbone links.
> Such things REALLY _NEED_ to be broken, and the sooner the better, as then perhaps the offenders will fix such things sooner too, because they are by definition already broken and in violation of RFC 1918 and good common sense.
Ok, but real world calling. I have tried this, and when customers find something doesn't work on your network but it does on your competitor's, you make it work, even if that means breaking rules. You've snipped the other comments from my email, which go on to say: take any RFC for a protocol, e.g. POP, SMTP etc., and look at what's actually being done with it - most commonly, look at how Microsoft have implemented it or what the big ISPs are doing on their servers etc. - and you either toe the line or your service suffers. Steve
Re: Who does source address validation? (was Re: what's that smell?)
On Wednesday, Oct 9, 2002, at 11:36 Canada/Eastern, Stephen J. Wilcox wrote:
> On Tue, 8 Oct 2002, Greg A. Woods wrote:
> > Such things REALLY _NEED_ to be broken, and the sooner the better, as then perhaps the offenders will fix such things sooner too, because they are by definition already broken and in violation of RFC 1918 and good common sense.
> Ok but real world calling. I have tried this and when customers find something doesn't work on your network but it does on your competitor you make it work even if that means breaking rules.
What services require transport of packets with RFC1918 source addresses across the public network? I can think of esoteric examples of things it would be possible to do, but nothing that a real-world user might need (or have occasion to complain about). Do you have experience of such breakage from your own customers? It would be interesting to hear details. Joe
Re: Who does source address validation? (was Re: what's that smell?)
> > Ok but real world calling. I have tried this and when customers find something doesn't work on your network but it does on your competitor you make it work even if that means breaking rules.
> What services require transport of packets with RFC1918 source addresses across the public network? I can think of esoteric examples of things it would be possible to do, but nothing that a real-world user might need (or have occasion to complain about). Do you have experience of such breakage from your own customers? It would be interesting to hear details.
Loss of ICMP packets generated by links with endpoints numbered in RFC1918 space. Holes in traceroutes, broken PMTU detection. DS
Re: Who does source address validation? (was Re: what's that smell?)
On Wed, 9 Oct 2002, Joe Abley wrote:
> What services require transport of packets with RFC1918 source addresses across the public network? I can think of esoteric examples of things it would be possible to do, but nothing that a real-world user might need (or have occasion to complain about). Do you have experience of such breakage from your own customers? It would be interesting to hear details.
Check the archives; it's been covered every time this issue has come up... a. Intra-provider links using RFC1918 addresses and MTU changes/PMTU discovery b. Traceroute TTL-exceeded packets across RFC1918 intra-provider links. People used to have lots of problems with @Home customers trying to access their websites if they filtered RFC1918 addresses, using large-MTU-connected servers (i.e. non-ethernet). Ok, so @Home is out of business, but I'm sure there are other similar cases which would break.
Re: Who does source address validation? (was Re: what's that smell?)
> Loss of ICMP packets generated by links with endpoints numbered in RFC1918 space. Holes in traceroutes, broken PMTU detection.
Sherman, set the Way-Back Machine for August:

To: David Schwartz [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: NSPs filter?
In-reply-to: Your message of Thu, 08 Aug 2002 17:57:35 PDT. [EMAIL PROTECTED]@whenever
Date: Thu, 08 Aug 2002 18:45:17 -0700
From: Stephen Stuart [EMAIL PROTECTED]

In August you said it this way:

> One thing that sometimes comes up is that people do number links using RFC1918 address space, which occasionally results in an ICMP 'fragmentation needed but DF bit set' packet with an RFC1918 source address. Filtering out this packet could result in TCP breaking.

I still say this: that can be accommodated; behold, all the joy of PMTUD, with none of the other crap from designated special-use address space:

firewall {
    family inet {
        filter external-filter {
            term allow-icmp-unreach {
                from {
                    protocol icmp;
                    icmp-type unreachable;
                    icmp-code fragmentation-needed;
                }
                then {
                    count allow-icmp-need-frag;
                    accept;
                }
            }
            term allow-icmp-timxceed {
                from {
                    protocol icmp;
                    icmp-type time-exceeded;
                    icmp-code [ ttl-eq-zero-during-transit ttl-eq-zero-during-reassembly ];
                }
                then {
                    count allow-icmp-timxceed;
                    accept;
                }
            }
            term deny-rfc1918 {
                from {
                    source-address {
                        10.0.0.0/8;
                        172.16.0.0/12;
                        192.168.0.0/16;
                    }
                }
                then {
                    count deny-rfc1918;
                    discard;
                }
            }
            term deny-test {
                from {
                    source-address {
                        192.0.2.0/24;
                    }
                }
                then {
                    count deny-test-net;
                    discard;
                }
            }
            term deny-autoconfig {
                from {
                    source-address {
                        169.254.0.0/16;
                    }
                }
                then {
                    count deny-autoconfig;
                    discard;
                }
            }
            term LAST {
                then accept;
            }
        }
    }
}

Application is left as an exercise to the reader. Stephen
Re: Who does source address validation? (was Re: what's that smell?)
Do you have experience of such breakage from your own customers? It would be interesting to hear details. Loss of ICMP packets generated by links with endpoints numbered in RFC1918 space. Holes in traceroutes, broken PMTU detection. Why do those links have endpoints in RFC1918 space to begin with? Alex
Re: Who does source address validation? (was Re: what's that smell?)
> Just out of interest, how do you co-ordinate use of RFC 1918 addresses and routes amongst your customers? Do you run a registry for them, or do you just let them fight it out and the one with the biggest packets wins, or something like that?
there's a registry. we also maintain IN-ADDR zones for them and encourage the use of stub zones in customer name servers in order to direct their queries toward the local RFC1918 registry. now, i'll admit that it took almost two hours to get this working initially, and almost a week for it to settle down, and that the network is small -- only about 50 customers. but for the last few years no RFC1918-sourced traffic, nor any RFC1918-IN-ADDR DNS query, has egressed from this network. it can't be THAT hard.
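The stub-zone arrangement described above might look something like the following in a customer's BIND named.conf. This is a sketch, not the actual configuration; the registry server address (192.0.2.53) and file names are invented for illustration.

```
// Point RFC1918 reverse-DNS queries at the provider's local
// RFC1918 registry server instead of leaking them to the roots.
zone "10.in-addr.arpa" {
    type stub;
    masters { 192.0.2.53; };
    file "stub/10.in-addr.arpa";
};
zone "16.172.in-addr.arpa" {    // one such zone per /16 of 172.16/12
    type stub;
    masters { 192.0.2.53; };
    file "stub/16.172.in-addr.arpa";
};
zone "168.192.in-addr.arpa" {
    type stub;
    masters { 192.0.2.53; };
    file "stub/168.192.in-addr.arpa";
};
```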
Re: Who does source address validation? (was Re: what's that smell?)
On Wed, 9 Oct 2002, Joe Abley wrote:
> On Wednesday, Oct 9, 2002, at 11:36 Canada/Eastern, Stephen J. Wilcox wrote:
> > On Tue, 8 Oct 2002, Greg A. Woods wrote:
> > > Such things REALLY _NEED_ to be broken, and the sooner the better, as then perhaps the offenders will fix such things sooner too, because they are by definition already broken and in violation of RFC 1918 and good common sense.
> > Ok but real world calling. I have tried this and when customers find something doesn't work on your network but it does on your competitor you make it work even if that means breaking rules.
> What services require transport of packets with RFC1918 source addresses across the public network?
None afaik, which is why they should be blocked - on ingress from customer links. Don't get me wrong, I'm just sharing experience, not ethics, and saying we should all adhere to the RFC; but if you apply filters that assume others are also doing so, you may be surprised.. Without repeating myself or list archives, it's all very well strictly following all the RFC guidelines and telling the planet it's Microsoft's or @Home's fault it's not working, but the customers really don't buy it and they will go elsewhere; and it mightn't be about corporate $$$s, but those same $$$s pay your wages, and then it starts to hurt!
> I can think of esoteric examples of things it would be possible to do, but nothing that a real-world user might need (or have occasion to complain about).
On a related issue (pMTU) I recently discovered that using a link with MTU < 1500 breaks a massive chunk of the net - specifically mail and webservers who block all inbound icmp.. the servers assume 1500, send out the packets with DF set, they hit the link generating an icmp frag, icmp is filtered and data stops. Culprits included several major ISP/Telcos ... I'd love to tell the customer the link is fine and it's the rest of the Internet at fault, but in the end I just forced the DF bit clear as a temp workaround before finally swapping out to MTU 1500!
> Do you have experience of such breakage from your own customers? It would be interesting to hear details.
I did attempt strict ingress filtering at borders after a DoS some time ago; I figured I'd disallow any non-public addresses. I took it off within a day after a number of customers found a whole bunch of things had stopped working... Unfortunately I can't give you an example, as this was a while back and I don't have the details to hand. But if anyone with an appreciable-sized customer base wants to try implementing such filters, feel free to forward the customer issues to the list as references! Steve
Re: Who does source address validation? (was Re: what's that smell?)
On Wed, 9 Oct 2002 15:53:40 -0400 (EDT), [EMAIL PROTECTED] wrote:
> > > Do you have experience of such breakage from your own customers? It would be interesting to hear details.
> > Loss of ICMP packets generated by links with endpoints numbered in RFC1918 space. Holes in traceroutes, broken PMTU detection.
> Why do those links have endpoints in RFC1918 space to begin with? Alex
Because some administrators are ignorant, clueless, or malicious. We don't all have the luxury of saying, "It doesn't work on our network and it does on our competitor's, and we could fix it if we wanted to at no significant harm to us, but we won't because we are in the right." DS
Re: Who does source address validation? (was Re: what's that smell?)
> > > > would be interesting to hear details.
> > > Loss of ICMP packets generated by links with endpoints numbered in RFC1918 space. Holes in traceroutes, broken PMTU detection.
> > Why do those links have endpoints in RFC1918 space to begin with? Alex
> Because some administrators are ignorant, clueless, or malicious. We don't all have the luxury of saying, "It doesn't work on our network and it does on our competitor's, and we could fix it if we wanted to at no significant harm to us, but we won't because we are in the right."
In that case you should not complain about 1918 space being used for, say, attacking you either. After all, it does work on the network of your competitors. Alex
Re: Who does source address validation? (was Re: what's that smell?)
Though the docs aren't indexed in the web search tool yet, JUNOS 5.5 adds the ability to perform loose uRPF now:

[edit int name unit 0 family inet]
set rpf-check mode loose

Watch for wrapping: http://www.juniper.net/techpubs/software/junos/junos55/swconfig55-interfaces/download/swconfig55-interfaces.pdf

Cheers, -- steve

Date: Tue, 8 Oct 2002 12:29:48 -0400
From: Jared Mauch [EMAIL PROTECTED]
Subject: Re: Who does source address validation? (was Re: what's that smell?)

On Tue, Oct 08, 2002 at 10:15:28AM -0600, Danny McPherson wrote:
> > reachable-via any means you're only going to drop the packet if you don't have *ANY* route back to them.
> What's a route? An IP RIB instance? A BGP Loc-RIB instance? An IGP LSDB IP prefix entry? A BGP Adj-RIB-In instance? I think you mean if you don't have *ANY* **FIB** entry for the source address. If I peer with two large providers on the same router and both have prefix D.1 behind them and advertise the prefix to me, it's likely that only one of those two paths is going to make it into the BGP Loc-RIB (and subsequently, the IP RIB, then the FIB). If I use ANY FIB entry as proof that it's a valid source, then that only addresses RFC1918ish space and only suggests that I first need to generate an invalid BGP route for the prefix, then spoof the packets. This doesn't fix spoofing with global IP addresses. If I use only entries that occur in the RIB and associate them with the receiving interface, and receive a packet with an SA of D.1 from the peer whose path wasn't installed in the BGP Loc-RIB, then I'll drop it. (And there's nothing broken with this configuration -- it's why we have routers with 1 million BGP paths but only 150K routes/FIB entries, as I'm sure you know.) If you're going to do source address validation then you need to associate all potential valid paths for a given prefix with the associated ingress interface, else it's mostly useless.
Yes, but if I continue in my ideal situation of people (mostly) filtering their bgp customers, they won't announce the 1918 space, or similar. Even the large peers filter out each other so they don't pick up 1918 announcements. Plus people use Rob's Secure IOS Template to drop extraneous bgp announcements for unregistered/unassigned space (from IANA). I'm not purporting this as a solution to all problems on the internet, but if one walks before one runs, this is a reasonable step in the correct direction. Or at least a nice bandaid (duct tape?) to help keep the network in a bit more sensible shape. And if everyone did it, it would help with the original problem/statistics posted about how much 1918 space was hitting one specific root server. I am interested in hearing other solutions to the problem, including extra validations such as the above, but those aren't available today, and what I'm suggesting is in the 12.0S and 12.1E IOS images and probably others. - Jared - -- Jared Mauch | pgp key available via finger from [EMAIL PROTECTED] clue++; | http://puck.nether.net/~jared/ My statements are only mine.
Re: Who does source address validation? (was Re: what's that smell?)
On Wed, 09 Oct 2002 23:05:59 BST, Stephen J. Wilcox said:
> On a related issue (pMTU) I recently discovered that using a link with MTU < 1500 breaks a massive chunk of the net - specifically mail and webservers who block all inbound icmp.. the servers assume 1500, send out the packets with DF
My personal pet peeve is the opposite - we'll try to use pMTU, some provider along the way sees fit to run it through a tunnel, so the MTU there is 1460 instead of 1500 - and the chuckleheads number the tunnel endpoints out of 1918 space - so the 'ICMP Frag Needed' gets tossed at our border routers, because we do both ingress and egress filtering. It's bad enough when all the interfaces on the offending unit are 1918-space, but it's really annoying when the critter has perfectly good non-1918 addresses it could use as the source... Argh... -- Valdis Kletnieks Computer Systems Senior Engineer Virginia Tech
Re: Who does source address validation? (was Re: what's that smell?)
[EMAIL PROTECTED] wrote:
> My personal pet peeve is the opposite - we'll try to use pMTU, some provider along the way sees fit to run it through a tunnel, so the MTU there is 1460 instead of 1500 - and the chuckleheads number the tunnel endpoints out of 1918 space - so the 'ICMP Frag Needed' gets tossed at our border routers, because we do both ingress and egress filtering.
That's not terribly hard to overcome - allow icmp unreachables (from any source) in your acl, then deny all traffic from RFC 1918 addresses, then the rest of the ACL. Combined with CAR (or CatOS QoS rate limiting) on icmps, you end up with all the functionality and almost none of the bogus traffic.
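An IOS-style rendering of that ordering might look like the following sketch. The ACL and interface names are invented, and the CAR rate numbers are only placeholders, not a recommendation.

```
! Let the PMTUD-critical ICMP through first, regardless of source,
! then drop RFC1918-sourced traffic, then the rest of the policy.
ip access-list extended BORDER-IN
 permit icmp any any unreachable
 deny   ip 10.0.0.0 0.255.255.255 any
 deny   ip 172.16.0.0 0.15.255.255 any
 deny   ip 192.168.0.0 0.0.255.255 any
 permit ip any any
!
! Rate-limit the ICMP just permitted so the exception can't be
! abused as a flood vector (CAR; numbers are placeholders).
access-list 199 permit icmp any any
interface Serial0/0
 ip access-group BORDER-IN in
 rate-limit input access-group 199 256000 8000 8000 conform-action transmit exceed-action drop
```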
RE: Who does source address validation? (was Re: what's that smell?)
IMHO, it's not too bad if you do it at your edges. Explicit permits for valid source addrs is a well-known defense against source spoofing which of course also addresses the RFC1918 leakage issue to some degree. It's not that hard to incorporate this into customer installation and support processes. A lot more difficult to manage at the borders. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Sean Donelan Sent: Tuesday, October 08, 2002 10:09 AM To: Joe Abley Cc: Kelly J. Cooper; [EMAIL PROTECTED] Subject: Who does source address validation? (was Re: what's that smell?) On Tue, 8 Oct 2002, Joe Abley wrote: What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network? I kind of assumed that people weren't doing it because they were lazy. I've checked the marketing stuff of several backbones, as far as I could tell only one makes the blanket statement about source address validation on their entire network. http://www.ipservices.att.com/backbone/techspecs.cfm ATT has also implemented security features directly into the backbone. IP Source Address Assurance is implemented at every customer point-of-entry to guard against hackers. ATT examines the source address of every inbound packet coming from customer connections to ensure it matches the IP address we expect to see on that packet. This means that the ATT IP Backbone is RFC2267-compliant. What backbones do 100% source address validation? And how much of it is real, and how much is marketing? On single-homed or few-homed stub networks its easy. But even a moderately complex transit network it becomes difficult. Yes, I know about uRPF-like stuff, but the router vendors are still tweaking it. If there is a magic solution, I would love to hear about it. Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the exceptions needed to do 100% source address validation. 
Heck, the phone network still has trouble getting the correct Caller-ID end-to-end.
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, Oct 08, 2002 at 11:09:10AM -0400, Sean Donelan wrote: If there is a magic solution, I would love to hear about it. To drop the rfc1918 space, there is a close-to-magic solution. Install this on all your internal, upstream, downstream interfaces (cisco router) [cef required]: ip verify unicast source reachable-via any This will drop all packets on the interface that do not have a way to return them in your routing table. Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the exceptions needed to do 100% source address validation. Juniper has a somewhat viable solution to the 100% source validation for bgp customers: they will consider non-best paths in their unicast-rpf check on the customer interface. This means that even if 35.0.0.0/8 is best returned via your peer instead of via the provider the packet came in on, as long as they are advertising the prefix to you, you will not drop the packet. Heck, the phone network still has trouble getting the correct Caller-ID end-to-end. Uh, this is because it costs another 1/2 a cent a minute (or more) to provision a caller-id capable trunk (long distance), and people just don't want to pay the extra money; it's cheaper to not identify oneself. (This is why most telemarketers don't generate caller-id, or if they can, they suppress it.) - jared -- Jared Mauch | pgp key available via finger from [EMAIL PROTECTED] clue++; | http://puck.nether.net/~jared/ My statements are only mine.
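The "reachable-via any" (loose-mode uRPF) check Jared describes can be sketched as follows. The prefixes are purely illustrative, and a real router performs this lookup against the hardware FIB rather than a Python list:

```python
import ipaddress

# Toy FIB: prefixes for which some return route exists. RFC1918 space is
# deliberately absent, so spoofed private sources fail the lookup.
# (Illustrative prefixes only.)
FIB = [ipaddress.ip_network(p) for p in ("35.0.0.0/8", "198.51.100.0/24")]

def loose_urpf_permit(src: str) -> bool:
    """'ip verify unicast source reachable-via any': forward the packet
    iff ANY FIB entry covers its source address."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in FIB)

print(loose_urpf_permit("35.1.2.3"))   # True: a return route exists
print(loose_urpf_permit("10.0.0.1"))   # False: no route back, so drop
```

Note that the check says nothing about *which* interface the route points out of; it only asks whether a return route exists at all.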
Re: Who does source address validation? (was Re: what's that smell?)
If there is a magic solution, I would love to hear about it. I strongly doubt any of the large providers perform dataplane source address validation on traffic from peers. Heck, I doubt any perform explicit route filtering on routes learned from peers at the control plane. Ideally, one would first employ some mechanism to generate *explicit* ingress BGP route filters. With BGP Route Refresh, the manual session reset (or bouncing of the route) that used to be the largest drawback is no longer necessary. From there, you could either use BGP's Adj-RIBs-In in some uRPFish thing, or employ the same set of BGP route filters for source address filters. Of course, then the lack of registry route object integrity, a secure update mechanism, etc., etc. comes into question. -danny
Re: Who does source address validation? (was Re: what's that smell?)
install this on all your internal, upstream, downstream interfaces (cisco router) [cef required]: ip verify unicast source reachable-via any This will drop all packets on the interface that do not have a way to return them in your routing table. Of course, this is the IP RIB and may not include all the potential paths in the BGP Adj-RIBs-In, right? As such, you've still got the potential for asymmetric routing to break things. Juniper has a somewhat viable solution to the 100% source validation for bgp customers: they will consider non-best paths in their unicast-rpf check on the customer interface. This means that even if 35.0.0.0/8 is best returned via your peer instead of via the provider the packet came in on, as long as they are advertising the prefix to you, you will not drop the packet. What's a bgp customer? Can they support 500K+ uRPF entries here? -danny
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, Oct 08, 2002 at 09:34:19AM -0600, Danny McPherson wrote: install this on all your internal, upstream, downstream interfaces (cisco router) [cef required]: ip verify unicast source reachable-via any This will drop all packets on the interface that do not have a way to return them in your routing table. Of course, this is the IP RIB and may not include all the potential paths in the BGP Adj-RIBs-In, right? As such, you've still got the potential for asymmetric routing to break things. No, the check is: if I have a path in the FIB back to this source, transmit, else drop. It does not validate that the source is reachable via that interface, just reachable at all. So as long as you aren't null-routing 1918 space in your network to drop packets destined for 1918 space, it will determine there is no route (back) and drop it. Juniper has a somewhat viable solution to the 100% source validation for bgp customers: they will consider non-best paths in their unicast-rpf check on the customer interface. This means that even if 35.0.0.0/8 is best returned via your peer instead of via the provider the packet came in on, as long as they are advertising the prefix to you, you will not drop the packet. What's a bgp customer? Can they support 500K+ uRPF entries here? I'm not sure what the hardware limitations on the Juniper router are with this unicast rpf. It was introduced recently (I think in 5.3?) and I personally have not done a significant amount of testing with it. I'm just offering it as general knowledge for those that aren't aware that Juniper has unicast rpf, and that it is somewhat different from the cisco per-interface model, as well as offering a different type of check that may address some people's design issues (this uses the BGP Adj-RIB-In info, not the cisco check I describe above). - jared -- Jared Mauch | pgp key available via finger from [EMAIL PROTECTED] clue++; | http://puck.nether.net/~jared/ My statements are only mine.
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 08 Oct 2002 11:09:10 EDT, Sean Donelan said: http://www.ipservices.att.com/backbone/techspecs.cfm AT&T has also implemented security features directly into the backbone. IP Source Address Assurance is implemented at every customer point-of-entry to guard against hackers. AT&T examines the source address of every inbound packet coming from customer connections to ensure it matches the IP address we expect to see on that packet. This means that the AT&T IP Backbone is RFC2267-compliant. Thank you, AT&T.
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, Oct 08, 2002 at 11:49:41AM -0400, Jared Mauch wrote: Of course, this is the IP RIB and may not include all the potential paths in the BGP Adj-RIBs-In, right? As such, you've still got the potential for asymmetric routing to break things. No, this is if i have a path in fib back to this source, transmit else drop; Unless I'm missing something, that's what he said; fib == loc-rib for the purposes of this discussion, and loc-rib is built from the various adj-ribs-in. That said, I'm curious to know how asymmetric routing can break this. As long as someone is sending (and you are installing) a prefix that includes the source address this check will pass. If you don't have a route back to the source at all, that isn't asymmetric routing, it's network partitioning, assuming the source is legitimate. --Jeff
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, Oct 08, 2002 at 12:09:56PM -0400, Jeff Aitken wrote: On Tue, Oct 08, 2002 at 11:49:41AM -0400, Jared Mauch wrote: Of course, this is the IP RIB and may not include all the potential paths in the BGP Adj-RIBs-In, right? As such, you've still got the potential for asymmetric routing to break things. No, this is if i have a path in fib back to this source, transmit else drop; Unless I'm missing something, that's what he said; fib == loc-rib for the purposes of this discussion, and loc-rib is built from the various adj-ribs-in. Correct, but it is not doing a check to see if it's returnable via the interface it came in on, just if it's returnable at all. As the fib/rib is built off of the adj-rib-in (minus filtering and local policy), and the check on the cisco validates against the CEF (fib) table on the linecard (or centralized CPU, in the case of non-[fully-]distributed platforms), I wanted to clarify the check that is performed. That said, I'm curious to know how asymmetric routing can break this. As long as someone is sending (and you are installing) a prefix that includes the source address this check will pass. If you don't have a route back to the source at all, that isn't asymmetric routing, it's network partitioning, assuming the source is legitimate. Exactly. If I can't reach you, I don't want to have my hosts or routers spend more time than is necessary dealing with your requests. - Jared -- Jared Mauch | pgp key available via finger from [EMAIL PROTECTED] clue++; | http://puck.nether.net/~jared/ My statements are only mine.
Re: Who does source address validation? (was Re: what's that smell?)
reachable-via any means you're only going to drop the packet if you don't have *ANY* route back to them. What's a route? An IP RIB instance? A BGP Loc-RIB instance? An IGP LSDB IP prefix entry? A BGP Adj-RIB-In instance? I think you mean if you don't have *ANY* **FIB** entry for the source address. If I peer with two large providers on the same router and both have prefix D.1 behind them and advertise the prefix to me, it's likely that only one of those two paths is going to make it into the BGP Loc-RIB (and subsequently, the IP RIB, then the FIB). If I use ANY FIB entry as proof that it's a valid source, then that only addresses RFC1918ish space, and it only means that an attacker first needs to generate an invalid BGP route for the prefix, then spoof the packets. This doesn't fix spoofing with global IP addresses. If I use only entries that occur in the RIB and associate them with the receiving interface, and receive a packet with an SA of D.1 from the peer whose path wasn't installed in the BGP Loc-RIB, then I'll drop it. (And there's nothing broken with this configuration -- it's why we have routers with 1 million BGP paths but only 150K routes/fib entries, as I'm sure you know.) If you're going to do source address validation then you need to associate all potential valid paths for a given prefix with the associated ingress interface, else it's mostly useless. -danny
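Danny's point — that a strict, per-interface check drops legitimate packets whenever only the best path makes it into the FIB — can be illustrated with a toy model (the prefix and interface names here are hypothetical):

```python
import ipaddress

# Toy FIB: both peers advertise 198.51.100.0/24, but only peer1's path won
# BGP best-path selection, so the FIB holds a single (prefix, interface) entry.
FIB = {ipaddress.ip_network("198.51.100.0/24"): "peer1"}

def loose_permit(src: str) -> bool:
    """Loose check: any FIB entry covering the source at all."""
    a = ipaddress.ip_address(src)
    return any(a in net for net in FIB)

def strict_permit(src: str, ingress: str) -> bool:
    """Strict check: the FIB entry must point back out the ingress interface."""
    a = ipaddress.ip_address(src)
    return any(a in net and ifc == ingress for net, ifc in FIB.items())

# A legitimate packet arriving from the peer whose path lost best-path:
print(loose_permit("198.51.100.7"))             # True: passes the loose check
print(strict_permit("198.51.100.7", "peer2"))   # False: dropped, though valid
```

The loose check never drops this packet but also passes globally-spoofed sources; the strict check catches spoofing but breaks under perfectly normal asymmetry, which is exactly the tradeoff being argued here.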
Re: Who does source address validation? (was Re: what's that smell?)
There are two separate issues:

1. Making sure packets with falsified source addresses don't leave your network. This can be done by having customer-specific filters on all customer-facing interfaces. (And on interfaces connecting to any type of hosts, in case those are compromised.) Or use the plain and simple version of uRPF, with just one caveat: when a BGP customer announces a route just for backup, they can't use this route for outbound packets either, until their other route disappears.

2. Making sure packets with falsified source addresses don't enter your network.

2a. Customers: see 1.

2b. Transit: can't be done. (Well, you could filter traffic with source addresses from peers that comes in over transit.)

2c. Peers: this is the part where straight uRPF doesn't work because of asymmetric routing. However, it is possible to make this work by making every border router always prefer its own external routes. This is easily accomplished on Cisco routers by setting a higher weight for eBGP sessions. No, it's not painless, and yes, it will break some weird stuff (one-way links, people legitimately sourcing packets but for strange reasons not announcing the accompanying routes), but don't tell me it can't be done.

The catch-22 is that if you refuse to peer with people who don't do type 1 filtering so you don't have to implement 2c, you end up with the garbage coming in over transit, where you can't filter it.
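The 2c trick can be modeled crudely: if each border router prefers its own eBGP-learned path (e.g. via a higher Cisco weight), the RPF interface matches the ingress interface, so strict uRPF stops breaking on asymmetry. Router, interface, and prefix names below are hypothetical:

```python
# Two border routers, A and B, both hear the same prefix from the same peer
# AS. Each router sees its own eBGP path plus an iBGP path via the other.

def chosen_interface(ebgp_ifc, ibgp_ifc, prefer_own_external):
    """Pick the forwarding interface installed for the prefix on this router."""
    if prefer_own_external and ebgp_ifc is not None:
        return ebgp_ifc          # e.g. higher weight on the eBGP session
    return ibgp_ifc or ebgp_ifc  # otherwise tie-breaking may pick the iBGP path

# Without the preference, router A may install the path via B, so a
# legitimate packet arriving on A's own peering interface fails strict uRPF:
print(chosen_interface("peerA", "ibgp-to-B", False) == "peerA")  # False
# With the preference, the RPF interface matches the ingress interface:
print(chosen_interface("peerA", "ibgp-to-B", True) == "peerA")   # True
```

This is only a sketch of the route-selection consequence, not of BGP best-path selection itself; the caveats Iljitsch lists (one-way links, unannounced sources) still apply.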
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, Oct 08, 2002 at 10:15:28AM -0600, Danny McPherson wrote: reachable-via any means you're only going to drop the packet if you don't have *ANY* route back to them. What's a route? An IP RIB instance? A BGP Loc-RIB instance? An IGP LSDB IP prefix entry? A BGP Adj-RIB-In instance? I think you mean if you don't have *ANY* **FIB** entry for the source address. If I peer with two large providers on the same router and both have prefix D.1 behind them and advertise the prefix to me, it's likely that only one of those two paths is going to make it into the BGP Loc-RIB (and subsequently, the IP RIB then FIB). If I use ANY FIB entry as proof that it's a valid source then that only addresses RFC1918ish space and only suggest that I first need to generate an invalid BGP route for the prefix, then spoof the packets. This doesn't fix spoofing with global IP addresses. If I use only entries that occur in the RIB and associate them with the receiving interface and receive a packet with an SA of D.1 from the peer whose path wasn't installed in the BGP Loc-RIB then I'll drop it. (And there's nothing broken with this configuration -- it's why we have routers with 1 million BGP paths but only 150K routes/fib entries, as I'm sure you know). If you're going to do source address validation then you need to associated all potential valid paths for a given prefix with the associated ingress interface, else it's mostly useless. Yes, but I continue in my ideal situation where people (mostly) filter their bgp customers, so they won't announce the 1918 space or similar. Even the large peers filter each other, so they don't pick up 1918 announcements. Plus people use Rob's Secure IOS Template to drop extraneous bgp announcements for unregistered/unassigned space (from IANA). I'm not purporting this as a solution to all problems on the internet, but if one walks before one runs, this is a reasonable step in the correct direction. Or at least a nice bandaid (duct tape?) 
to help keep the network in a bit more sensible shape. And if everyone did it, it would help with the original problem/statistics posted about how much 1918 space was hitting one specific root server. I am interested in hearing other solutions to the problem, including extra validations such as the above, but those aren't available today, and what I'm suggesting is in the 12.0S and 12.1E IOS images and probably others. - Jared -- Jared Mauch | pgp key available via finger from [EMAIL PROTECTED] clue++; | http://puck.nether.net/~jared/ My statements are only mine.
Re: Who does source address validation? (was Re: what's that smell?)
Yes, but if i continue in my ideal situation of people (mostly) filter their bgp customers, so they won't announce the 1918 space, or similar. even the large peers filter out each other so they don't pick up 1918 announcements. Plus people use Robs Secure IOS Template to drop extraneous bgp announcements for unregistered/unassigned space (from IANA). What you're doing makes plenty of sense; we agree on that. I just wanted to be sure folks understood it doesn't actually validate sources. -danny
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, Jared Mauch wrote: install this on all your internal, upstream, downstream interfaces (cisco router) [cef required]: ip verify unicast source reachable-via any This will drop all packets on the interface that do not have a way to return them in your routing table. Once again, which providers do this? If c.root-servers.net's provider did this, they wouldn't see any RFC1918 traffic because it would be dropped at their provider's border routers. If c.root-servers.net's provider's peers did this, again c.root-servers.net's provider wouldn't see the rfc1918 packets. So why doesn't c.root-servers.net's provider or its peers implement this simple solution? It's not a rhetorical question. If it was so simple, I assume they would have done it already. PSI wrote one of the original peering agreements that almost everyone else copied. If it was a concern, I imagine PSI could have included the requirement; most of their peers would have signed it 10 years ago. But they didn't.
Does AT&T? Yes
Does UUNET? ?
Does Cable & Wireless? ?
Does Level 3? ?
Does Qwest? ?
Does Genuity? ?
Does Sprint? ?
Re: Who does source address validation? (was Re: what's that smell?)
It stands to reason that if people started filtering RFC-1918 at their edge, we would see a noticeable amount of traffic go away. Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918-sourced packets. The percentage ranges scale with the size of the provider: smaller providers, less impact; larger providers, more impact. In addition to the bandwidth savings, there is also a support cost reduction, and together, I believe backbone providers can see this on the bottom line of their balance sheets. We have to start someplace. There is no magic answer for all cases. RFC-1918 is easy to admin and easy to deploy, in relative terms, compared to uRPF or similar methods. For large and small alike it can be a positive marketing tool, if properly implemented. john brown On Tue, Oct 08, 2002 at 11:09:10AM -0400, Sean Donelan wrote: On Tue, 8 Oct 2002, Joe Abley wrote: What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network? I kind of assumed that people weren't doing it because they were lazy. I've checked the marketing stuff of several backbones; as far as I could tell, only one makes a blanket statement about source address validation on their entire network. http://www.ipservices.att.com/backbone/techspecs.cfm AT&T has also implemented security features directly into the backbone. IP Source Address Assurance is implemented at every customer point-of-entry to guard against hackers. AT&T examines the source address of every inbound packet coming from customer connections to ensure it matches the IP address we expect to see on that packet. This means that the AT&T IP Backbone is RFC2267-compliant. What backbones do 100% source address validation? And how much of it is real, and how much is marketing? On single-homed or few-homed stub networks it's easy. But in even a moderately complex transit network it becomes difficult. 
Yes, I know about uRPF-like stuff, but the router vendors are still tweaking it. If there is a magic solution, I would love to hear about it. Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the exceptions needed to do 100% source address validation. Heck, the phone network still has trouble getting the correct Caller-ID end-to-end.
Re: Who does source address validation? (was Re: what's that smell?)
So why doesn't c.root-servers.net provider or its peers implement this simple solution? Its not a rhetorical question. If it was so simple, I assume they would have done it already. PSI wrote one of the original peering agreements that almost everyone else copied. If it was a concern, I imagine PSI could have included the requirement, most of their peers would have signed it 10 years ago. But they didn't. My guess would be inertia. It tends to take quite some time to get people off their butts to do something. It is also a feature which protects others more than it protects you, and there are serious psychological hurdles many providers (cf peering) have to doing anything which might benefit someone else more than it benefits them, even if it will benefit everyone over the long term. Was uRPF even available 10 years ago?
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, John M. Brown wrote: It stands to reason that if people started filtering RFC-1918 at their edge, we would see a noticeable amount of traffic go away. Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918-sourced packets. That is very hard to believe, unless you are referring to the load on the root nameservers. Since they obviously don't receive a reply, these resolvers will keep coming back. In addition to the bandwidth savings, there is also a support cost reduction, and together, I believe backbone providers can see this on the bottom line of their balance sheets. We have to start someplace. There is no magic answer for all cases. RFC-1918 is easy to admin and easy to deploy, in relative terms, compared to uRPF or similar methods. uRPF is easier: one configuration command per interface. A filter for RFC 1918 space is also one configuration command per interface, plus some commands to create the filter. For large and small alike it can be a positive marketing tool, if properly implemented. Sure. We can't be bothered to do proper filtering, but since we filter 0.39% of what we should, we are cool.
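For comparison, the RFC 1918 source filter being discussed amounts to three deny entries plus a default permit; a minimal sketch of the matching logic (pure illustration, not a router config):

```python
import ipaddress

# The three RFC 1918 private blocks a source filter needs to deny.
RFC1918 = [ipaddress.ip_network(p) for p in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def permit_source(src: str) -> bool:
    """Drop packets whose source falls in private space; permit the rest."""
    a = ipaddress.ip_address(src)
    return not any(a in net for net in RFC1918)

print(permit_source("192.168.1.5"))   # False: filtered at the edge
print(permit_source("203.0.113.9"))   # True: globally routed source passes
```

Unlike uRPF, this catches only private-space leakage; spoofed packets with globally routable sources sail straight through, which is the 0.39% jab above.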
Re: Who does source address validation? (was Re: what's that smell?)
Why is it hard to believe that a large amount of RFC-1918-sourced traffic is floating around the net? Root name servers are just one victim of this trash. DOS, DDOS, and other just plain stupid configurations contribute to the pile. My data is from various core servers and various clients of ours. We look at the ingress traffic and see these kinds of numbers. In the days of the Internet boom (growth period), people wanted to see traffic and capacity used up. It helped fuel the need for more fiber growth, and thus spending. Now that we are in more realistic times, providers need to save money and reduce costs. Costs can be reduced in several areas: 1. Egress filtering: don't let RFC-1918 packets out of your network. 2. Spoof filtering. 3. Better tools to mitigate DOS/DDOS attacks. The technology exists for, say, cable providers to reduce port scans and DOS-type attacks. If 1 and 2 are done, this will reduce complaint calls from non-customers, which reduces man-hour cycles. john brown On Tue, Oct 08, 2002 at 09:17:46PM +0200, Iljitsch van Beijnum wrote: On Tue, 8 Oct 2002, John M. Brown wrote: It stands to reason that if people started filtering RFC-1918 at their edge, we would see a noticeable amount of traffic go away. Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918-sourced packets. That is very hard to believe, unless you are referring to the load on the root nameservers. Since they obviously don't receive a reply, these resolvers will keep coming back. In addition to the bandwidth savings, there is also a support cost reduction, and together, I believe backbone providers can see this on the bottom line of their balance sheets. We have to start someplace. There is no magic answer for all cases. RFC-1918 is easy to admin and easy to deploy, in relative terms, compared to uRPF or similar methods. uRPF is easier: one configuration command per interface. 
A filter for RFC 1918 space is also one configuration command per interface, plus some commands to create the filter. For large and small alike it can be a positive marketing tool, if properly implemented. Sure. We can't be bothered to do proper filtering, but since we filter 0.39% of what we should, we are cool.
Re: Who does source address validation? (was Re: what's that smell?)
In addition to the bandwidth savings, there is also a support cost reduction and together, I believe backbone providers can see this on the bottom line of their balance sheets. If the backbone providers bill their customers for traffic, then filtering out those packets would let them bill less. Since their costs are fixed and the amount of billable traffic decreases, the break-even price per meg goes up, not down. They won't filter until it would be more expensive not to filter. Alex
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, John M. Brown wrote: Why is it hard to believe that a large amount of RFC-1918-sourced traffic is floating around the net? Because if 20% of all people generate this crap (which is a huge number) it must be 90% of their traffic to get to 18%. How can someone generate so much useless traffic and keep doing it, too? Root name servers are just one victim of this trash. DOS, DDOS and other just stupid configurations contribute to the pile. So only allow proper source addresses; that's the first step towards getting rid of DoS. Costs can be reduced in several areas: 1. Egress filtering, don't let RFC-1918 packets out of your network. I'm not convinced this is (in general) a substantial amount of traffic. 2. Spoof filtering. 3. Better tools to mitigate DOS/DDOS attacks. The technology exists for say, cable providers to reduce port scans and DOS type attacks. I would happily kick anyone doing anything that is conclusively abusive off the net. But access providers aren't going to do this because it costs them money. Being a good netizen doesn't do them any good. I'm reminded of the two guys walking over the Serengeti who spot a lion. One guy bends down to tie his shoe laces, and the other says: what are you doing, you can't outrun a lion! The first guy says: I don't have to, as long as I can outrun you. People aren't in any hurry to protect the common good; they just want to keep one step ahead of those who get in trouble for not doing enough. If 1 and 2 are done, this will reduce complaint calls from non-customers, which reduces man-hour cycles. Don't count on it. Some people start calling when they're pinged.
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002 [EMAIL PROTECTED] wrote: They won't filter until it would be more expensive not to filter. Gross/willful negligence lawsuits? I'm sure one of these days a large corporation like ebay/m$/etc will be annoyed enough at backbone providers spoof-DOSing them to file a lawsuit. Then it will suddenly become more expensive not to filter. I'm rather surprised such lawsuits haven't already happened. -Dan -- [-] Omae no subete no kichi wa ore no mono da. [-]
RE: Who does source address validation? (was Re: what's that smell?)
2. Spoof filtering. 3. Better tools to mitigate DOS/DDOS attacks. The technology exists for say, cable providers to reduce port scans and DOS type attacks. I would happily kick anyone doing anything that is conclusively abusive off the net. But access providers aren't going to do this because it costs them money. Being a good netizen doesn't do them any good. I'm reminded of the two guys walking over the Serengeti, and they spot a lion. One guy bends down to tie his shoe laces, and the other says: what are you doing, you can't outrun a lion! The first guy says: I don't have to, as long as I can outrun you. People aren't in any hurry to protect the common good, they just want to keep one step ahead of those who get in trouble for not doing enough. I guess you are describing the result of the bean counters' vision of an Ideal World colliding with the engineer's concept of poor technical practice. I can't buy the above reasoning, though, for two reasons. First, I just don't think there are bean counters clueful enough to sit around calculating return-on-investment (or lack thereof) on source- address filtering. And insofar as that is true, it is a mighty good thing, as it prolongs the time when engineering practice is still within the purview of engineers. Second, I think there are still enough people around who remember how Agis was hounded out of business for being spam-friendly. Nobody wants the same thing to happen to them, and to avoid it, will avoid even the perception of irresponsible operation.
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 08 Oct 2002 22:06:12 +0200, Iljitsch van Beijnum said: Because if 20% of all people generate this crap (which is a huge number) it must be 90% of their traffic to get at 18%. How can someone generate so much useless traffic and keep doing it, too? How much you want to bet that *all* the internal backbone traffic from these sites is pouring out into the Internet, and they've had to upgrade from a T1 to a DS3 and are looking at a OC3, and the service provider is keeping their mouth shut because they can just catch an OC3's worth of packets and drop most of them on the floor (because they don't have a route to the 1918 destination address - only the random stuff with actual valid destinations like a root nameserver gets forwarded). Oh, and since 90% of their traffic is dropped on the floor, they can provision an OC3 to the customer and still only need to provision a DS3 upstream. If 20% of your customers do this, you can just label it cash cow.. ;) If you thought there was disincentive for people selling transit to filter, this is even worse... ;) -- Valdis Kletnieks Computer Systems Senior Engineer Virginia Tech
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002 [EMAIL PROTECTED] wrote: Because if 20% of all people generate this crap (which is a huge number) it must be 90% of their traffic to get at 18%. How can someone generate so much useless traffic and keep doing it, too? How much you want to bet that *all* the internal backbone traffic from these sites is pouring out into the Internet, and they've had to upgrade from a T1 to a DS3 and are looking at a OC3, and the service provider is keeping their mouth shut because they can just catch an OC3's worth of packets and drop most of them on the floor Ok, but how do you generate megabits worth of traffic for which there is no return traffic? At some level, someone or something must be trying to do something _really hard_ but keep failing every time. It just doesn't make sense.
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, Iljitsch van Beijnum wrote: Ok, but how do you generate megabits worth of traffic for which there is no return traffic? spammers... smurfers... attackers... At some level, someone or something must be trying to do something _really hard_ but keep failing every time. spammers... smurfers... attackers... It just doesn't make sense. Yes, it doesn't make sense to not filter. -Dan -- [-] Omae no subete no kichi wa ore no mono da. [-]
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 08 Oct 2002 22:57:42 +0200, Iljitsch van Beijnum said: Ok, but how do you generate megabits worth of traffic for which there is no return traffic? At some level, someone or something must be trying to do something _really hard_ but keep failing every time. It just doesn't make sense. Imagine if you will the following config:

                 +------+  DMZ 10.1.1/24  +-----+  internal 192.68.1/22
(pipe to ISP) ===|router|-----------------| NAT |---
                 +------+                 +-----+

Now give the router a default route to the ISP - and then screw the NAT config up so 192.68.1 packets show up on the DMZ. Or have something catch a broken RIP announcement.. or any number of stupid things. Whoosh, instant money for the ISP.. ;) Last April (2001), while worrying about the NTP buffer overflow, we ran a trace to see where NTP packets were going. In a 10 minute span, we caught no less than 6 packets looking for an address that had been a stratum-2 server - 11 years previously. They've probably generated megabits of data for so long that they don't even realize there's a problem. The perpetrators have retired or moved on, and the incumbent admins don't see anything anomalous since it's always been that way. Remember - the sort of admin that's not clued enough to get his NAT to behave is probably the sort that wouldn't know how to run a network monitor on his outbound pipe either. Lots of unclued admins out there... -- Valdis Kletnieks Computer Systems Senior Engineer Virginia Tech
Re: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, Sean Donelan wrote:

> On Tue, 8 Oct 2002, Jared Mauch wrote:
> > Install this on all your internal, upstream, and downstream interfaces
> > (Cisco router, CEF required):
> >
> >   ip verify unicast source reachable-via any
> >
> > This will drop all packets on the interface that do not have a way to
> > return them in your routing table.
>
> Once again, which providers do this? If c.root-servers.net's provider did
> this, they wouldn't see any RFC1918 traffic, because it would be dropped
> at their provider's border routers. If c.root-servers.net's provider's
> peer did this, again c.root-servers.net's provider wouldn't see the
> RFC1918 packets. So why doesn't c.root-servers.net's provider or its
> peers implement this simple solution? It's not a rhetorical question. If
> it were so simple, I assume they would have done it already.
>
> PSI wrote one of the original peering agreements that almost everyone
> else copied. If it had been a concern, I imagine PSI could have included
> the requirement; most of their peers would have signed it 10 years ago.
> But they didn't.

If you do it on ingress from customers, then this is probably a good thing
and makes your network compliant with RFC1918. But you need to accept that
the Internet isn't RFC1918-compliant, in the same way that we implement
hacks in all kinds of applications to enable compatibility with other
non-RFC-compliant implementations. Try running an RFC822-compliant mail
server, as an example, and see how many Microsoft users complain they
can't send email!

Not all IP packets require a return; indeed, only TCP requires it. It is
quite possible to send data over the Internet via UDP or ICMP with RFC1918
source addresses and for there to be no issue. Examples of this might be
ICMP fragments or UDP syslog, which, although by RFC1918 they shouldn't
carry these source addresses, might; if you block these on major backbone
routes you may break something. So I guess you may argue: block inbound
TCP from RFC1918 sources, but with ICMP and UDP you start to break things.
Perhaps that is why large providers don't do this on backbone links.

Steve

> Does AT&T? Yes
> Does UUNET? ?
> Does Cable & Wireless? ?
> Does Level 3? ?
> Does Qwest? ?
> Does Genuity? ?
> Does Sprint? ?
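The Cisco knob quoted in that exchange is loose-mode unicast RPF: a packet is forwarded only if *some* route back to its source exists, regardless of which interface it arrived on. A rough sketch of that check in Python — the route table and addresses below are invented for illustration, not taken from the thread:

```python
from ipaddress import ip_address, ip_network

# Hypothetical default-free routing table: prefixes we know a path back to.
# Loose-mode uRPF asks only "is there *any* route to this source?", not
# "did the packet arrive on the interface that route points at" (strict mode).
ROUTES = [
    ip_network("192.0.2.0/24"),
    ip_network("198.51.100.0/24"),
    ip_network("203.0.113.0/24"),
]

def loose_urpf_permits(src: str) -> bool:
    """True if the source matches some route, i.e. a reply is routable."""
    addr = ip_address(src)
    return any(addr in net for net in ROUTES)

# An RFC1918 source has no entry in a default-free table, so it is dropped:
print(loose_urpf_permits("10.1.2.3"))    # False -> packet dropped
print(loose_urpf_permits("192.0.2.55"))  # True  -> packet forwarded
```

This also shows why the objection in the thread holds: loose mode drops RFC1918-sourced packets only where no covering route exists — a network carrying RFC1918 routes internally would still pass them.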
Re: Who does source address validation? (was Re: what's that smell?)
At 10:34 PM 10/8/02 +0100, Stephen J. Wilcox wrote:

> Not all IP packets require a return; indeed, only TCP requires it. It is
> quite possible to send data over the Internet via UDP or ICMP with
> RFC1918 source addresses and for there to be no issue. Examples of this
> might be ICMP fragments or UDP syslog, which, although by RFC1918 they
> shouldn't carry these source addresses, might; if you block these on
> major backbone routes you may break something.

No. Filtering RFC1918 doesn't break anything. It merely shows you what was
already broken and you didn't know it. If you have a box that is putting
an RFC1918 source address in its packets destined for external nets, and
it doesn't get NAT'd, your net config is broken.

...Barb
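The "it doesn't get NAT'd" failure mode is the crux here: an egress NAT is what should rewrite RFC1918 sources before packets leave the site. A toy sketch of that rewrite step — the packet representation, NAT pool address, and helper names are all made up for illustration:

```python
from ipaddress import ip_address, ip_network

# The three RFC1918 private blocks.
RFC1918 = [
    ip_network("10.0.0.0/8"),
    ip_network("172.16.0.0/12"),
    ip_network("192.168.0.0/16"),
]

PUBLIC_SRC = "203.0.113.1"  # hypothetical public NAT pool address

def nat_source(packet: dict) -> dict:
    """Rewrite an RFC1918 source to the public NAT address on egress.
    A packet that skips this step leaks a private source to the net,
    and any reply to it has nowhere to go."""
    src = ip_address(packet["src"])
    if any(src in net for net in RFC1918):
        return {**packet, "src": PUBLIC_SRC}
    return packet

leaky = {"src": "192.168.1.10", "dst": "192.0.2.1"}
print(nat_source(leaky)["src"])  # 203.0.113.1 -> replies are now routable
```

Any RFC1918-sourced packet seen on an external link is, by this logic, one that escaped the rewrite — which is Barb's point that the filter only exposes an already-broken config.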
Re: Who does source address validation? (was Re: what's that smell?)
> > Why is it hard to believe that a large amount of RFC1918-sourced
> > traffic is floating around the net?
>
> Because if 20% of all people generate this crap (which is a huge number),
> it must be 90% of their traffic to get to 18%. How can someone generate
> so much useless traffic, and keep doing it, too?

funny question from someone who reads this mailing list :-)
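The 20%/90% skepticism is a one-line back-of-the-envelope calculation, under the simplifying assumption that every host sends the same total volume (the 18% figure is the fraction of observed traffic the thread treats as RFC1918-sourced):

```python
# If 20% of hosts leak RFC1918-sourced packets, what fraction of *their*
# traffic must be bogus for 18% of all observed traffic to be bogus?
# Assumes every host sends the same total volume.
leaking_hosts = 0.20   # fraction of hosts that leak
overall_bogus = 0.18   # fraction of all traffic that is RFC1918-sourced
share_of_their_traffic = overall_bogus / leaking_hosts
print(round(share_of_their_traffic, 2))  # 0.9 -> 90% of what those hosts send
```

Hence the incredulity: for the observed numbers to hold, the leaking minority would have to be sending almost nothing *but* unroutable traffic.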
Re: Who does source address validation? (was Re: what's that smell?)
> I believe the RFC states SHALL NOT propagate these out to the global net.

SHOULD NOT != SHALL NOT

On Tue, Oct 08, 2002 at 10:34:51PM +0100, Stephen J. Wilcox wrote:
> [full quote of Stephen J. Wilcox's message above snipped]