Re: Interesting Point of view - Russian police and RIPE accused of aiding RBN
2009/11/6 Jeffrey Lyon jeffrey.l...@blacklotus.net:

> The primary issue is that we receive a fair deal of customers who end up with wide-scale DDoS attacks, followed by an offer for protection to move to your network. In almost every case the attacks cease once the customer has agreed to pay this protection fee. Every one of these attacks was nearly identical in signature.

By the way, Jeffrey, we can provide reports on HTTP floods, because our system builds its signatures from HTTP traffic dumps like:

===
IP: 88.246.76.65, last receiving time: 2009-10-25T23:07:37+03:00, many identical requests (length 198):
GET / HTTP/1.1
Accept: */*
Accept-language: en-us
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1
Host: [censored]
Connection: Keep-Alive
===

Using this info we can map botnets, learn different attacks and, in collaboration with ISPs, find the C&Cs of new botnets. And what are your accusations of identical signatures based on, when simple Staminus resellers (like you) do not have access to their signatures database?

Kanak
Akrino Abuse Team
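The signature matching Kanak describes can be illustrated with a toy sketch. This is not their actual system; the grouping key here (request line plus sorted header names plus total length) is just one plausible choice, and the function names are made up for illustration:

```python
from collections import defaultdict


def signature(request: str) -> tuple:
    """Reduce a raw HTTP request to a crude flood signature:
    (request line, sorted header names, total length)."""
    lines = [l.strip() for l in request.strip().splitlines() if l.strip()]
    request_line = lines[0]
    header_names = tuple(sorted(l.split(":", 1)[0].lower()
                                for l in lines[1:] if ":" in l))
    return (request_line, header_names, len(request))


def group_by_signature(requests):
    """Bucket (source_ip, raw_request) pairs by signature; a bucket with
    many distinct source IPs all sending identical requests is a flood
    candidate, and its members sketch out a botnet."""
    groups = defaultdict(set)
    for src_ip, req in requests:
        groups[signature(req)].add(src_ip)
    return groups
```

Fed a capture of (IP, request) pairs, the largest buckets are the "many identical requests" cases shown in the dump above.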
Failover how much complexity will it add?
Hi,

I was recently brought onto a project where some failover is desired, but I think that the number of connections provisioned is excessive. I'm also hoping to get some guidance on how well I can get the failover to actually work.

Currently 4 x 100 Mb/s Internet connections have been provisioned. The first is to be used for general Internet access out of the organisation; it also terminates VPNs from remote sites belonging to the organisation and serves some publicly accessible servers (routed DMZ and translated IPs). The second connection is to be used for a separate system which has a site-to-site VPN to a third-party support vendor. Connections 3 and 4 are currently thought of as providing backups for 1 and 2. Both connections are firewalled by a Juniper SSG of some description.

Now, I couldn't get any good answers as to why connections 1 and 2 need to be separate. I think the idea was to make sure that there was enough bandwidth for the third-party support VPN. I feel that I can consolidate this onto one connection and just use rate limiting to reserve some portion of the bandwidth, and this should be fine. If I do that, I can make a case for having just one backup Internet connection. However, I'm still concerned about failover and reliability issues. So my questions regarding this are:

- Should I make sure that the backup Internet connection is from a separate provider?
- How can I achieve a failover which doesn't require me to change all the remote VPN endpoints in case of a failover? It's possible to configure failover VPNs on the Junipers, which should take care of this, but how do I take care of the DMZ hosts and external translation?
- In fact, I think I'm asking: what are my options for failover between one Internet connection and the other?

I'm hoping to figure out whether adding an extra Internet connection actually gives us that much, and in fact whether it justifies the complexity and spend.
Many thanks for your comments.

Adel
RE: Failover how much complexity will it add?
-----Original Message-----
From: a...@baklawasecrets.com [mailto:a...@baklawasecrets.com]
Sent: Sunday, November 08, 2009 4:52 AM
To: nanog@nanog.org
Subject: Failover how much complexity will it add?

> [project background snipped; see the original message above]
> So my questions regarding this are:
> - Should I make sure that the backup Internet connection is from a separate provider?

Yes, yes, yes, yes, a thousand times yes. Depending on the criticality of Internet connectivity, you should also aim to have your redundant connections coming from a completely separate direction. For example, fiber from Level 3 coming from the north in a dedicated conduit, and fiber from Verizon coming in a dedicated conduit from the south of the building. Why?
Put simply, we had construction crews ignore the painted lines and dig up our conduit a few years back. At that point we had four bonded T1s from a single carrier. That was a long couple of days...

Carrier diversity is not a bad thing; spend some time shopping for an additional provider. Make sure they operate their own network for the last mile, and also make sure they don't piggyback off the same network your main carrier does anywhere locally. Comcast Ethernet, Verizon and Cogent make great secondary connections when you need high availability. You don't need your secondary to have 99.999% uptime; 97% is usually good enough if it's on a separate network. I wouldn't stray from the big names for your primary connections either.

> How can I achieve a failover which doesn't require me to change all the remote VPN endpoints in case of a failover? It's possible to configure failover VPNs on the Junipers, which should take care of this, but how do I take care of the DMZ hosts and external translation?

From recent experience with the Juniper SSG, the VPN functions, put nicely, suck. VPN failover is in there, but we had issues with the tunnel staying active for extended periods of time. Also, depending on whether you do a route-based or a policy-based VPN, it becomes that much more of a headache. We used two SSG550 devices as a proof of concept, and the one thing which annoyed me to no end was the complete and total crap options within the VPN configuration. When I typically set up a VPN, I use a SonicWall NSA or E-class device (yes, I know, hiss boo) or an ASA. Saying that the Juniper was lacking is a complete understatement. I personally would completely avoid even attempting VPN failover within a Juniper device. I will say they are rock solid for generic firewall functionality, though; just try to keep the config simple or they turn into giant slow dogs.

> In fact, I think I'm asking: what are my options for failover between one Internet connection and the other?
Considering you have 4x 100 Mbit lines, have you looked at BGP? Even if you drop line 2 and its associated backup, you have 2x 100 Mbit lines; and if you have three unique carriers with 100 Mbit from each of them, that makes BGP very appealing. I think this would be an ideal situation for a BGP setup using a couple of small routers. You could probably get away with something as small as a Cisco 3825 for each connection (purely for redundancy). If the Cisco name scares you, Juniper routers are great as well. Don't forget Vyatta!

If you do BGP, you have one VPN to configure and one tunnel to configure, and there is no VPN failover configuration. Hopefully you are not pushing more than one subnet across the VPN; otherwise you end up doing a route-based VPN instead of a policy-based VPN, and you will be significantly happier. That's a Juniper headache for another day, however.
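For concreteness, the two-upstream BGP setup suggested above might look something like the following IOS-style sketch. The ASNs and addresses are documentation/example placeholders, not anything from this thread, and a real deployment would add inbound and outbound prefix filters:

```
router bgp 64512                          ! placeholder ASN; use your assigned ASN
 network 203.0.113.0 mask 255.255.255.0   ! the /24 you announce to both upstreams
 neighbor 192.0.2.1 remote-as 64496       ! session to ISP A (example values)
 neighbor 192.0.2.1 description ISP-A-uplink
 neighbor 198.51.100.1 remote-as 64511    ! session to ISP B (example values)
 neighbor 198.51.100.1 description ISP-B-uplink
```

If either session drops, that upstream withdraws your /24 and the other path carries all traffic, which is the painless failover being described.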
Re: Failover how much complexity will it add?
a...@baklawasecrets.com wrote:

> Hi, Now I couldn't get any good answers as to why Internet connections 1 and 2 need to be separate. I think the idea was to make sure that there was enough bandwidth for the third party support VPN. I feel that I can consolidate this into one connection and just use rate limiting to reserve some portion of the bandwidth on the connection and this should be fine. Now if I was to do this then I can make a case for just having one backup Internet connection. However I'm still concerned about failover and reliability issues. So my questions regarding this are:

I wouldn't jump to any conclusions that everything will work properly if you are terminating multiple connections directly on the SSG, what with egress likely being different than ingress, even if you are using the same IP range (BGP) on all the links. You could really be asking for trouble if you are planning on using a different ISP-provided IP range on each connection for each purpose. Front it all with routers that can policy route, whether or not you also use BGP.

Joe
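The "front it all with routers that can policy route" advice can be sketched in IOS-style syntax. The addresses and names below are made up for illustration; the point is that traffic sourced from one ISP's address range is forced back out that ISP's link, so replies leave the way they came in:

```
ip access-list extended FROM-ISP-A-SPACE
 permit ip 192.0.2.0 0.0.0.255 any        ! hosts numbered out of ISP A's range
!
route-map RETURN-PATH permit 10
 match ip address FROM-ISP-A-SPACE
 set ip next-hop 203.0.113.1              ! ISP A's gateway
route-map RETURN-PATH permit 20
 set ip next-hop 198.51.100.1             ! everything else exits via ISP B
!
interface GigabitEthernet0/0              ! inside-facing interface
 ip policy route-map RETURN-PATH
```

Without something like this, a packet arriving on ISP A's link can be answered out ISP B's link and get dropped by ISP B's source-address filters, which is exactly the asymmetric-path trouble Joe warns about.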
Re: Human Factors and Accident reduction/mitigation
Owen,

> We could learn a lot about this from aviation. Nowhere in human history has more research, care, training, and discipline been applied to accident prevention, mitigation, and analysis as in aviation. A few examples: [snipped]

Others later in this thread duly noted a definite relationship of costs associated, which are clearly worth it given the particular application of these methods. However, I assert this is warranted because of the specific public trust that commercial aviation must be given. Additionally, this form of professional or industry standard isn't unique in the world; you can find (albeit small) parallels in most states' PE certification tracks and the like. In the case of the big-I Internet, I assert we can't (yet) successfully argue that it's deserving of similar public trust. In short, I'm arguing that the big-I Internet deserves special-pleading status in these sorts of instrument-record-improve strawmen, and that we shouldn't apply similar concepts or regulation.

(Robert B. then responded):

> All, The real problem is the same human factors we have in aviation, which cause most accidents. Look at the list below and replace the word "pilot" with "network engineer" or "support tech" or "programmer" or whatever... and think about all the problems where something didn't work out right. It's because someone circumvented the rules, processes, and cross-checks put in place to prevent the problem in the first place. Nothing can be made idiot-proof because idiots are so creative.

I'd like to suggest we also swap "bug" for "software defect" or "hardware defect"; perhaps if operators started talking about problems like engineers, we'd get more global buy-in for a process-based solution. I certainly like the idea of improving the state of affairs where possible, especially in the operator-to-device direction (i.e., fat-fingering an ACL, prefix list, community list, etc.).
When people make mistakes, it seems very wise to accurately record the entrance criteria, the results of their actions, and ways to avoid the mistake, and then share that with all operators (like at NANOG meetings!). The part I don't like is being ultimately responsible for, or having to design around, a class of systemic problems which are entirely outside an operator's sphere of control. What curve must we shift to get routers with hardware and software that are a) fast, b) reliable and c) cheap, in the hope that the only problems left to solve are indeed human ones?

-Tk
Re: Human Factors and Accident reduction/mitigation
Anton Kapela wrote:

> What curve must we shift to get routers with hardware and software that are a) fast, b) reliable and c) cheap, in the hope that the only problems left to solve are indeed human ones?

Fast, reliable, cheap: pick any two. No, you can't have all three. The fastest (best) and most reliable *anything* can't also be the cheapest, because someone will quickly seize the market opportunity to make one that is lower quality (slower) or less reliable and sell it for a lower price.

jc
Re: Failover how much complexity will it add?
a...@baklawasecrets.com wrote:

> [project background snipped; see the original message above]
> So my questions regarding this are:
> - Should I make sure that the backup Internet connection is from a separate provider?
> - How can I achieve a failover which doesn't require me to change all the remote VPN endpoints in case of a failover? It's possible to configure failover VPNs on the Junipers, which should take care of this, but how do I take care of the DMZ hosts and external translation?
> - In fact, I think I'm asking: what are my options for failover between one Internet connection and the other?

Forget all of that and just multihome to two separate providers with BGP.
Also make sure that, of the providers you choose, one is not a customer of the other. Instant, painless redundancy. Having multiple circuits to one provider *will not* back anything up if that provider has an outage, as the circuits are 99.999% likely to be part of the same larger circuit and certainly share the same infrastructure at the provider.

> I'm hoping to figure out whether adding an extra Internet connection actually gives us that much, in fact whether it justifies the complexity and spend.

Only if you calculate the cost (money, time, angry customers, etc.) of an outage to be greater than the cost of additional connectivity.

~Seth
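The outage-cost comparison above can be made concrete with a toy back-of-the-envelope model. The dollar figure and availability numbers below are made up for illustration, and the model naively assumes downtime cost is uniform across the year:

```python
def expected_outage_cost(availability: float, hourly_cost: float,
                         hours_per_year: float = 8760.0) -> float:
    """Expected yearly downtime cost for a connection with the given
    availability fraction (e.g. 0.999 = 'three nines')."""
    return (1.0 - availability) * hours_per_year * hourly_cost


# Hypothetical numbers: downtime costs $1,000/hour.
single = expected_outage_cost(0.999, 1000.0)    # ~8.76 h/yr down -> ~$8,760
dual = expected_outage_cost(0.99999, 1000.0)    # ~5 min/yr down -> ~$88
```

If the second, independently routed connection costs less per year than the difference between those two numbers, the redundancy pays for itself; that is the calculation Seth is pointing at.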
Re: Failover how much complexity will it add?
On 2009-11-08-10:23:41, Blake Pfankuch bpfank...@cpgreeley.com wrote:

> Make sure they operate their own network for last mile [...] I wouldn't stray from the big names for your primary connections either.

Because ownership of the provider/subsidiary delivering the last mile means one hand is talking to the other, and you're going to get good service and reliability as a result? And big names never have any peering-related spats and always deliver the best possible end-user experience, right? :-)

(Some good points further on, though it's important we don't lead the OP down the wrong path or leave them with a false sense of security there...)

-a
Re: Failover how much complexity will it add?
Thanks for all your comments, guys. With regard to BGP, I did think about placing two BGP routers in front of the SSGs. However, my limited understanding makes me think that if I had two BGP connections from different providers I would still have issues. I guess that if my primary Internet connection goes down, I lose connectivity to all the publicly addressed devices on that connection: DMZ hosts and so on. I would be interested to hear how this can be avoided, if at all, or whether I have to use the same provider. I should add that we currently have two SSGs provisioned in HA mode. Also, is terminating BGP on the SSG an option? I really like the flexibility of route-based VPNs with addressable tun interfaces.

Thanks,
Adel

On Sun 3:47 PM, Joe Maimon jmai...@ttec.com sent:

> I wouldn't jump to any conclusions that everything will work properly if you are terminating multiple connections directly on the SSG, what with egress likely being different than ingress, even if you are using the same IP range (BGP) on all the links. You could really be asking for trouble if you are planning on using a different ISP-provided IP range on each connection for each purpose. Front it all with routers that can policy route, whether or not you also use BGP.
>
> Joe
RE: Failover how much complexity will it add?
Seth Mattinen [se...@rollernet.us] said:

> Forget all of that and just multihome to two separate providers with BGP

--Assuming that you're advertising PI space, or can work around that appropriately with your providers, I agree: that's the ideal situation.

> Having multiple circuits to one provider *will not* back anything up if that provider has an outage as they are 99.999% likely to be part of the same larger circuit

--True; if you don't specify otherwise when you're ordering, why would they make the effort? Comments made in some of the other responses in this thread are also valid even with a single service provider: diverse entry points into your facility, diverse upstream circuit routing, and homing to different POPs, which may mean backhauling your secondary circuit away from your local POP and taking a hit for the higher latency on that second link.

The moral of this is that whether you're using one provider or more than one, state your diversity requirements clearly up front, and then stay involved and make sure that what's presented to you is _actually_ diverse (newsflash: even the best-intentioned people sometimes make mistakes, especially when there's a handoff to a different last-mile provider who may not have been clear on the requirement).

Of course, all of this is potentially wasted effort if the data center you're providing connectivity for does not also maintain the same kind of diversity itself in terms of power, connectivity, architecture, etc.

> and certainly share the same infrastructure at the provider.

--If you enter a single provider's network at diverse points, then at least that local infrastructure isn't the same. But by the same measure, if that provider has a major BGP issue, for example, then yeah: they're both screwed...
...in which case we loop back to the dual-provider scenario you mentioned in the first place. :)

Ultimately, choosing the appropriate solution boils down to what level of service unavailability one can tolerate in the first place, and putting a business value on that impact. From that one can derive the technical options, then go cap in hand with a business case to the poor soul paying the bill. ;-)

j.
Re: Failover how much complexity will it add?
a...@baklawasecrets.com wrote:

> So I guess that if my primary Internet goes down I lose connectivity to all the publicly addressed devices on that connection. Like dmz hosts and so on. I would be interested to hear how this can be avoided if at all or do I have to use the same provider.

No, you will announce the same IP addresses over both (a minimum of a /24, which you can easily obtain from one upstream just by saying "I want to multihome", if you don't already have a /24). That's the whole point of multihoming. If cost is an issue you can use just one BGP-speaking router. If you multihome, there is no "primary" in the sense you're thinking.

~Seth
Re: Failover how much complexity will it add?
On Sun, 08 Nov 2009 08:23:41 MST, Blake Pfankuch said:

> I wouldn't stray from the big names for your primary connections either.

This is, of course, dependent on the OP's location and budget. I know when we were getting our NLR connection set up, there was a fair amount of "You want 40G worth of DWDM *where*?" involved, and the resulting topology was... complicated. At least at one time, there were places where our provider was running our link across lambdas of a subsidiary of ours, which were going across physical fiber owned by the provider... turtles all the way down. ;)
Re: Interesting Point of view - Russian police and RIPE accused of aiding RBN
Kanak,

We're not a Staminus reseller. Please do your homework: http://webtrace.info/asn/32421. I'm not going to hold court on whether or not you or your resellers are DDoSing competitors' customers; I was merely stating my opinion. The reader can draw their own conclusion. I think your network is blackhat; you say it's not. I say your entire network has minimal legitimate traffic; you say you have a diverse customer base. The way I see it right now:

- You're an anonymous BVI company with no physical location.
- This Computerworld article is referring to Akrino: http://www.computerworld.com/s/article/9063418/Russian_hosting_network_running_a_protection_racket_researcher_says. I was consulted on this article before it went to print, and I'll put my reputation on that.
- All of the sites on Akrino around early 2008 were on NEAVE LIMITED until shut down by uplink Eltel. They all came back up under Akrino, uplinked to Anders (AS39792).
- 91.202.60.0/22 has one actual company with legitimate, commercially necessary traffic (I will provide a full report if you want to push the issue), yet is responsible for hundreds of malware infections over the past 6 months (see again http://google.com/safebrowsing/diagnostic?site=AS:44571).
  - The aforementioned company (solidtrustpay.com) was a Black Lotus customer and had received several days of multi-Gbps DDoS that subsided only once the customer agreed to use your network.
    - Post-DDoS, the customer's server began receiving SSH connections from some former Soviet country (I forget which offhand) trying to debug a reverse proxy (not sure if you/they realize that we filter your announcements). In the real world, DDoS does not stop just hours before the gaining host goes to set up a proxy.
- The attacks you claim to be filtering would not be possible unless your connection to AS39792 is 10GE or they're doing the filters for you.
- The above has occurred at least three times with Akrino, and zero times with better-known, respected providers.
- A handful of respected net ops have contacted me off-list to confirm much of this data and provide additional evidence.

Again, these are merely *opinions* and form the foundation of why I believe Akrino is a blackhat network. Perhaps if you didn't have blackhat resellers you wouldn't have this reputation? Maybe you should reconsider who you allow to resell your network? I don't know for certain, but you need to clean up your network so you don't end up like Atrivo. Clean up now and everyone wins.

Jeff

On Sun, Nov 8, 2009 at 5:27 AM, noc acrino noc.akr...@gmail.com wrote:

> [Kanak's message, quoted in full at the top of this thread, snipped]

--
Jeffrey Lyon, Leadership Team
jeffrey.l...@blacklotus.net | http://www.blacklotus.net
Black Lotus Communications of The IRC Company, Inc.
Platinum sponsor of HostingCon 2010. Come to Austin, TX on July 19-21 to find out how to protect your booty.
Re: Failover how much complexity will it add?
Thanks Seth and James, things are getting a lot clearer. The BGP multihoming solution sounds like exactly what I want. I have more questions. :-)

- I suppose I would get my allocation from RIPE, as I am UK based? Do I also need to apply for an AS number?
- As the IP block is mine, it is ISP-independent, i.e. I can take it with me when I decide to use two completely different ISPs? Is obtaining this IP block what is referred to as PI space?
- Of course, internally I can split the /24 up however I want: a /28 for the untrust range and maybe a routed DMZ block, etc.?
- Assuming I apply for an IP block and AS number, what's involved and how long does it take to get these babies?
- I know the SSG550s have BGP capabilities. As I have two of these in HA mode, does it make sense to do the BGP on these, or should I get dedicated BGP routers?
- Fixing the internal routing policy so traffic is directed at the active BGP connection: what's involved here, preferring one BGP link over the other?

Thanks again. I obviously need to do some reading of my own, but all the suggestions so far have been very valuable and definitely seem to be pointing in some fruitful directions.

Adel

On Sun 6:31 PM, James Hess mysi...@gmail.com sent:

> You assign multi-homed IP address space to your publicly addressed devices, which is not specific to either ISP. You announce to both ISPs, and you accept some routes from both ISPs. You get multi-homed IPs either by having an existing ARIN allocation, or by getting a /22 from ARIN (a special allocation is available for multi-homing), or by asking ISP A or ISP B for a /24 for multihoming.
> If Link A fails, the BGP session eventually times out and dies: ISP A's BGP routers withdraw the routes, and the IP addresses are then associated only with provider B. And you design your internal routing policy to direct traffic within your network to the router with an active BGP session. Link A's failure is _not_ a total non-event, but a 3-5 minute partial disruption, while the BGP session times out and updates occur in other people's routers, is minimal compared to a 3-day outage if serious repairs to upstream fiber are required.
>
> --
> -J
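One common way to express "prefer link A while it's up", per the internal routing policy James describes, is BGP local-preference on routes learned from each upstream. An IOS-style sketch with hypothetical neighbor addresses (higher local-preference wins; when ISP A's session dies, its routes and their preference disappear with it, so ISP B takes over without any manual switchover):

```
route-map FROM-ISP-A permit 10
 set local-preference 200        ! routes via ISP A preferred while its session is up
!
route-map FROM-ISP-B permit 10
 set local-preference 100        ! ISP B's routes become best only when A's vanish
!
router bgp 64512                 ! placeholder ASN
 neighbor 192.0.2.1 route-map FROM-ISP-A in
 neighbor 198.51.100.1 route-map FROM-ISP-B in
```

This answers the "preferring one BGP link over the other" question: the preference lives in routing policy, and failover follows automatically from the session timing out.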
Re: Failover how much complexity will it add?
Hi Adel,

There are companies like Packet Exchange (www.packetexchange.net), whom I have personally used, who will do all of the legwork for you, such as applying for the ASN, address space and transit agreements, and getting the tail connections directly to your building. You just need to pay them and buy the equipment (which they can also provide). Probably easier in the long run.

NOTE: I am not an employee or paid affiliate of Packet Exchange. I have used them for services and am promoting them due to my own good experiences with their services.

Regards,

Ken

2009/11/8 a...@baklawasecrets.com:

> [Adel's "Thanks Seth and James" message and James Hess's reply, quoted in full above, snipped]
Re: Failover how much complexity will it add?
Hi,

Thanks for the info on UKNOF. I've started a thread there with regard to RIPE, obtaining AS numbers and so on, as this is, I guess, quite UK-specific.

Adel

On Sun 8:40 PM, Arnold Nipper arn...@nipper.de wrote:

> Hi Adel,
>
> On 08.11.2009 21:24 Ken Gilmour wrote:
> > There are companies like packet exchange (www.packetexchange.net)
>
> I could also comment on PacketExchange, but I do not. If you get more UK-specific now, you may perhaps want to post to UKNOF (http://lists.uknof.org.uk/cgi-bin/mailman/listinfo/uknof/) as well. For _independent_ consultancy you may want to have a look at Netsumo (http://www.netsumo.com/). Ask for Andy Davidson.
>
> Best regards,
> Arnold
>
> --
> Arnold Nipper / nIPper consulting, Sandhausen, Germany
> email: arn...@nipper.de  phone: +49 6224 9259 299
> mobile: +49 172 2650958  fax: +49 6224 9259 333
Re: Failover how much complexity will it add?
Don't think I sent the below to the list, so resending: Thanks Seth and James, Things are getting a lot clearer. The BGP multihoming solution sounds like exactly what I want. I have more questions :-) Now I suppose I would get my allocation from RIPE as I am UK based? Do I also need to apply for an AS number? As the IP block is mine, it is ISP independent, i.e. I can take it with me when I decide to use two completely different ISPs? Is obtaining this IP block what is referred to as PI space? Of course internally I split the /24 up however I want - /28 for untrust range and maybe a routed DMZ block etc.? Assuming I apply for an IP block and AS number, what's involved and how long does it take to get these babies? I know the SSG550's have BGP capabilities. As I have two of these in HA mode, does it make sense to do the BGP on these, or should I get dedicated BGP routers? Fixing the internal routing policy so traffic is directed at the active BGP connection. What's involved here, preferring one BGP link over the other? Thanks again, I obviously need to do some reading of my own, but all the suggestions so far have been very valuable and definitely seem to be pointing in some fruitful directions. Adel On Sun 6:31 PM , James Hess mysi...@gmail.com wrote: On Sun, Nov 8, 2009 at 11:34 AM, wrote: [..] connections from different providers I would still have issues. So I guess that if my primary Internet goes down I lose connectivity to all the publicly addressed devices on that connection, like DMZ hosts and so on. I would be interested to hear how this can be avoided, if at all, or do I have to use the same provider. You assign multi-homed IP address space to your publicly addressed devices, which are not specific to either ISP. You announce to both ISPs, and you accept some routes from both ISPs.
You get multi-homed IPs, either by having an existing ARIN allocation, or getting a /22 from ARIN (special allocation available for multi-homing), or ask for a /24 from ISP A or ISP B for multihoming. If Link A fails, the BGP session eventually times out and dies: ISP A's BGP routers withdraw the routes, the IP addresses are then associated only with provider B. And you design your internal routing policy to direct traffic within your network to the router with an active BGP session. Link A's failure is _not_ a total non-event, but a 3-5 minute partial disruption, while the BGP session times out and updates occur in other people's routers, is minimal compared to a 3 day outage, if serious repairs to upstream fiber are required. -- -J
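James's 3-5 minute figure falls straight out of the standard BGP timers. A back-of-envelope sketch of that window (the 60/180 second values are the common defaults, not anything confirmed for the routers in this thread, and the propagation allowance is a guess):

```python
# Sketch: worst-case window before a silently failed BGP session is torn
# down, using the common default timers (60 s keepalive, 180 s hold time).
# Values are illustrative; real timers are negotiated per session.

KEEPALIVE = 60   # seconds between keepalive messages
HOLD_TIME = 180  # peer declares the session dead if no keepalive arrives

def detection_window(hold_time=HOLD_TIME):
    # The hold timer resets on each keepalive, so a silent link failure
    # goes unnoticed until the hold timer expires on the peer.
    return hold_time

def disruption_estimate(hold_time=HOLD_TIME, propagation=60):
    # Add a rough (hypothetical) allowance for the withdrawals to
    # propagate through other people's routers before traffic shifts
    # entirely to Link B.
    return hold_time + propagation

print(disruption_estimate() / 60)  # ~4 minutes, i.e. the 3-5 minute window above
```

With BFD or fast external fallover the detection window shrinks dramatically, but the defaults explain the figure quoted in this thread.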
Re: Failover how much complexity will it add?
Hi, Ok thanks for clearing that up. I'm getting some good feedback on applying for PI and ASN through RIPE LIRs over on the UKNOF so I think I have a handle on this. With regards to BGP and using separate BGP routers. I am announcing my PI space to my upstreams, but I don't need to carry a full Internet routing table, correct? So I can get away with some lightweight BGP routers, not being an ISP, if that makes sense? Adel On Sun 9:26 PM , Ken Gilmour ken.gilm...@gmail.com wrote: Hey, Yes you apply to RIPE for your allocation. You should ask them for a /20 since it's the same price for that as a /24 if you can justify it (at least with LACNIC where I now get my allocations)... You will also need to apply for an ASN Correct - the block belongs to you and as long as you contact the transit provider from the address listed in WHOIS then you should be able to set up a new agreement easily. Yes the block is PI space (provider independent) It can take up to 1 month to get your assignments. I would recommend getting some different routers for this. I use OpenBSD in some of my locations which is extremely easy to work with. I also have some old NS-208 devices running ScreenOS for internal BGP in one other location. I would not recommend using any router with less than 1GB of RAM for BGP. In HA mode you can connect the two tails, one to each SSG (if they are in active/active mode) and announce it that way (check out anycast), we also do this :). The way BGP works is that both connections are active at the same time; there is no primary and backup. If one goes down you just have one less to receive traffic over and more traffic on the other, but unless you stop announcing from one connection traffic will go over both. Regards, Ken 2009/11/8 : Don't think I sent the below to the list, so resending: Thanks Seth and James, Things are getting a lot clearer. The BGP multihoming solution sounds like exactly what I want.
I have more questions :-) Now I suppose I would get my allocation from RIPE as I am UK based? Do I also need to apply for an AS number? As the IP block is mine, it is ISP independent, i.e. I can take it with me when I decide to use two completely different ISPs? Is obtaining this IP block what is referred to as PI space? Of course internally I split the /24 up however I want - /28 for untrust range and maybe a routed DMZ block etc.? Assuming I apply for an IP block and AS number, what's involved and how long does it take to get these babies? I know the SSG550's have BGP capabilities. As I have two of these in HA mode, does it make sense to do the BGP on these, or should I get dedicated BGP routers? Fixing the internal routing policy so traffic is directed at the active BGP connection. What's involved here, preferring one BGP link over the other? Thanks again, I obviously need to do some reading of my own, but all the suggestions so far have been very valuable and definitely seem to be pointing in some fruitful directions. Adel On Sun 6:31 PM , James Hess wrote: On Sun, Nov 8, 2009 at 11:34 AM, wrote: [..] connections from different providers I would still have issues. So I guess that if my primary Internet goes down I lose connectivity to all the publicly addressed devices on that connection, like DMZ hosts and so on. I would be interested to hear how this can be avoided, if at all, or do I have to use the same provider. You assign multi-homed IP address space to your publicly addressed devices, which are not specific to either ISP. You announce to both ISPs, and you accept some routes from both ISPs. You get multi-homed IPs, either by having an existing ARIN allocation, or getting a /22 from ARIN (special allocation available for multi-homing), or ask for a /24 from ISP A or ISP B for multihoming. If Link A fails, the BGP session eventually times out and dies: ISP A's BGP routers withdraw the routes, the IP addresses are then associated only with provider B.
And you design your internal routing policy to direct traffic within your network to the router with an active BGP session. Link A's failure is _not_ a total non-event, but a 3-5 minute partial disruption, while the BGP session times out and updates occur in other people's routers, is minimal compared to a 3 day outage, if serious repairs to upstream fiber are required. -- -J
Re: Failover how much complexity will it add?
a...@baklawasecrets.com wrote: Hi, Thanks for the info on UKNOF. I've started a thread there with regards to RIPE and obtaining AS numbers and so on, as this is I guess quite UK specific. You will need an AS number to multihome, regardless of where you get your addresses from. In ARIN land the minimum for a multihomed end-site is /22, so if I were to do this here, I would ask one of the upstreams for a /24. I don't know the first thing about RIPE policy. ~Seth
Re: Failover how much complexity will it add?
a...@baklawasecrets.com wrote: Hi, Ok thanks for clearing that up. I'm getting some good feedback on applying for PI and ASN through Ripe LIRs over on the UKNOF so I think I have a handle on this. With regards to BGP and using separate BGP routers. I am announcing my PI space to my upstreams, but I don't need to carry a full Internet routing table, correct? So I can get away with some lightweight BGP routers not being an ISP if that makes sense? Most will give you three choices: full routes, partial routes (internal, their customers) with default, and default only. If you can't swing full routes then I would go for partial routes as it will at least send traffic for each ISP and their customers directly to them rather than randomly over the other link. It all depends on what you're going to use as your BGP speaking platform. ~Seth
Re: Failover how much complexity will it add?
I think partial routes make perfect sense: traffic for customers connected to each of my upstreams should go out of the correct BGP link, as long as it's up! Now I need to start thinking of BGP router choices; I'm sure I have a plethora of choices :-( On Sun 10:01 PM , Seth Mattinen se...@rollernet.us wrote: a...@baklawasecrets.com wrote: Hi, Ok thanks for clearing that up. I'm getting some good feedback on applying for PI and ASN through Ripe LIRs over on the UKNOF so I think I have a handle on this. With regards to BGP and using separate BGP routers. I am announcing my PI space to my upstreams, but I don't need to carry a full Internet routing table, correct? So I can get away with some lightweight BGP routers not being an ISP if that makes sense? Most will give you three choices: full routes, partial routes (internal, their customers) with default, and default only. If you can't swing full routes then I would go for partial routes as it will at least send traffic for each ISP and their customers directly to them rather than randomly over the other link. It all depends on what you're going to use as your BGP speaking platform. ~Seth
Re: Congress may require ISPs to block fraud sites H.R.3817
In message 75cb24520911060747x3556e01tbb80be8c9e0d5...@mail.gmail.com, Christopher Morrow writes: On Thu, Nov 5, 2009 at 5:56 PM, valdis.kletni...@vt.edu wrote: On Thu, 05 Nov 2009 16:40:09 CST, Bryan King said: Did I miss a thread on this? Has anyone looked at this yet? `(2) INTERNET SERVICE PROVIDERS- Any Internet service provider that, on or through a system or network controlled or operated by the Internet service provider, transmits, routes, provides connections for, or stores any material containing any misrepresentation of the kind prohibited in paragraph (1) shall be liable for any damages caused thereby, including damages suffered by SIPC, if the Internet service provider-- routes sounds the most dangerous part there. Does this mean that if we have a BGP peering session with somebody, we need to filter it? Fortunately, there's the conditions: `(A) has actual knowledge that the material contains a misrepresentation of the kind prohibited in paragraph (1), or `(B) in the absence of actual knowledge, is aware of facts or circumstances from which it is apparent that the material contains a misrepresentation of the kind prohibited in paragraph (1), and upon obtaining such knowledge or awareness, fails to act expeditiously to remove, or disable access to, the material. So the big players that just provide bandwidth to the smaller players are mostly off the hook - AS701 has no reason to be aware that some website in Tortuga is in violation (which raises an interesting point - what if the site *is* offshore?) mail to: ab...@uu.net Subject: Fraud through your network Hi! someone in tortuga on ip address 1.2.3.4 which I accessed through your network is fraudulently claiming to be the state-bank-of-elbonia. Just thought you should know! Also, I think that HR3817 expects you'll now stop this from happening! -concerned-internet-user oops, now they have actual knowledge...
I suppose this is a good reason though to: vi /etc/aliases - abuse: /dev/null There are still plenty of ways to inform a company. Ring up the support line. Registered mail. I suspect a court would see the practice of sending abuse@ to /dev/null in a very poor light, especially once the court learns that this is the standard address. A consumer should be able to reasonably assume that the message was delivered. If you bounce then they should be aware that it didn't get through and they can take other steps to inform you. so, is this bill helping? or hurting? :( And the immediate upstreams will fail to obtain knowledge or awareness of their customer's actions, the same way they always have. Move along, nothing to see.. ;) to my mind this is the exact same set of problems that the PA state anti-CP law brought forth... -chris -- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
Re: Pros and Cons of Cloud Computing in dealing with DDoS
On Sun, 8 Nov 2009, Dobbins, Roland wrote: if the discussion hasn't shifted from that of DDoS to EDoS, it should. All DDoS is 'EDoS' - it's a distinction without a difference, IMHO. DDoS costs opex, can cost direct revenue, can induce capex spends - it's all about economics at bottom, always has been, or nobody would care in the first place. And look at click-fraud attacks in which the miscreants either a) are committing fraud by causing botnets to make fake clicks so that they can be paid for same or b) wish to exhaust a rival's advertising budget when he's paying per-impression. Plain old packet-flooding DDoSes can cost victims/unwitting sources big money in transit costs, can cost SPs in transit and/or violating peering agreements, etc. There's no need or justification for a separate term; Chris Hoff bounced 'EDoS' around earlier this year, and the same arguments apply. The so-called EDoS is easy to solve. Just re-negotiate your contract with the cloud service provider to exclude that traffic from your bill. After all, if the cloud security provider's security is great, they shouldn't have a problem giving their customers credit for those problems which the cloud solves. No more E problems for the customer; the DoS risk is shifted to the service provider. But now the service provider still needs to solve the same problem. Oh, the cloud service provider won't negotiate, won't give you unlimited service credits, wants to charge extra for that protection, doesn't want to make promises it will work, and so on :-) The same unsolved problems from the 1970's mainframe/timesharing era still haven't been solved with virtualization/cloud/etc.
Re: Pros and Cons of Cloud Computing in dealing with DDoS
Sean Donelan wrote: Oh, the cloud service provider won't negotiate, won't give you unlimited service credits, want to charge extra for that protection, don't want to make promises it will work, and so on :-) The same unsolved problems from the 1970's mainframe/timesharing era still haven't been solved with virtualization/cloud/etc. I'm sorry, you must not have received the memo on how cloud computing is the bee's knees and solves all those silly problems that only affect non-cloud services. We'll get you a copy. ~Seth
Re: Failover how much complexity will it add?
So if my requirements are as follows: - BGP router capable of holding full Internet routing table. (whether I go for partial or full, I think I want something with full capability). - Capable of pushing 100meg plus of mixed traffic. What are my options? I want to exclude openbsd, or linux with quagga. Probably looking at Cisco or Juniper products, but interested in any other alternatives people suggest. I realise this is quite a broad question, but hoping this will provide a starting point. Oh and if I have missed any specs I should have included above, please let me know. Thanks Adel On Sun 10:18 PM , Seth Mattinen se...@rollernet.us wrote: a...@baklawasecrets.com wrote: I think partial routes makes perfect sense, makes sense that traffic for customers who are connected to each of my upstreams should go out of the correct BGP link as long as they are up! Now I need to start thinking of BGP router choices, sure I have a plethora of choices :-( Personally I'll always go for full routes if the router has enough memory (software based) or TCAM space (hardware based). Cheaper to do on software platforms though. An entry level Cisco 2811 can take full tables from multiple upstreams with 768MB RAM or even 512. It won't push 100 meg of mixed traffic though. ~Seth
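A rough sizing check behind those memory figures - purely illustrative arithmetic, since per-route overhead varies a lot by platform and the bytes-per-path number here is a made-up ballpark:

```python
# Back-of-envelope RIB memory estimate. The ~250k route figure matches
# the 2009-era table size mentioned later in this thread; bytes-per-path
# is a hypothetical ballpark, as real overhead varies by platform.
routes = 250_000
paths_per_route = 2      # two upstreams sending full tables, as in this design
bytes_per_path = 200     # assumed per-path overhead (prefix, attributes, pointers)

mib = routes * paths_per_route * bytes_per_path / 2**20
print(round(mib))  # ~95 MiB for the RIB alone, before the OS and other tables
```

Under these assumptions two full tables fit comfortably in a 512-768MB box, which is consistent with Seth's experience, though forwarding 100Mb/s of mixed traffic is a separate (CPU or TCAM) question.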
What DNS Is Not
Thought-provoking article by Paul Vixie: http://queue.acm.org/detail.cfm?id=1647302 -- Alex Balashov - Principal Evariste Systems Web : http://www.evaristesys.com/ Tel : (+1) (678) 954-0670 Direct : (+1) (678) 954-0671
RE: Failover how much complexity will it add?
From: a...@baklawasecrets.com [a...@baklawasecrets.com] - BGP router capable of holding full Internet routing table. (whether I go for partial or full, I think I want something with full capability). --Capable of holding _2_ full internet routing tables if you are looking for diversity. (just being picky ;-) j.
Re: Failover how much complexity will it add?
Are there any problems with quagga+BSD/Linux that you know of, or something like that? Or in your scenario is a cisco/juniper box a requirement? I'm asking this because I'm always running BGP with upstream providers using quagga on BSD and everything has been fine so far. -- From: a...@baklawasecrets.com Sent: Sunday, November 08, 2009 8:39 PM To: nanog@nanog.org Subject: Re: Failover how much complexity will it add? So if my requirements are as follows: - BGP router capable of holding full Internet routing table. (whether I go for partial or full, I think I want something with full capability). - Capable of pushing 100meg plus of mixed traffic. What are my options? I want to exclude openbsd, or linux with quagga. Probably looking at Cisco or Juniper products, but interested in any other alternatives people suggest. I realise this is quite a broad question, but hoping this will provide a starting point. Oh and if I have missed any specs I should have included above, please let me know. Thanks Adel
Re: Failover how much complexity will it add?
Basically the organisation that I'm working for will not have the skills in house to support a Linux or BSD box. They will have trouble supporting the BGP configuration as it is, and I don't think they will be happy with me if I leave them with a Linux box when they don't have Linux/Unix resource internally. At least with a Cisco or Juniper they are familiar with IOS and it won't be too foreign to them. On Sun 11:30 PM , Renato Frederick freder...@dahype.org wrote: There are any problems with quagga+BSD/Linux that you know or something like that? Or in your scenario a cisco/juniper box is a requirement? I'm asking this because I'm always running BGP with upstreams providers using quagga on BSD and everything is fine until now. -- From: Sent: Sunday, November 08, 2009 8:39 PM To: Subject: Re: Failover how much complexity will it add? So if my requirements are as follows: - BGP router capable of holding full Internet routing table. (whether I go for partial or full, I think I want something with full capability). - Capable of pushing 100meg plus of mixed traffic. What are my options? I want to exclude openbsd, or linux with quagga. Probably looking at Cisco or Juniper products, but interested in any other alternatives people suggest. I realise this is quite a broad question, but hoping this will provide a starting point. Oh and if I have missed any specs I should have included above, please let me know. Thanks Adel
Re: What DNS Is Not
Alex Balashov wrote: Thought-provoking article by Paul Vixie: http://queue.acm.org/detail.cfm?id=1647302 I doubt Henry Ford would appreciate the Mustang. -Dave
Re: What DNS Is Not
Dave Temkin wrote: Alex Balashov wrote: Thought-provoking article by Paul Vixie: http://queue.acm.org/detail.cfm?id=1647302 I doubt Henry Ford would appreciate the Mustang. I don't think that is a very accurate analogy, and in any case, the argument is not that we should immediately cease all the things we do with DNS today. DNS is one of the more centralised systems of the Internet at large; it works because of its reliance on intermediate caching and end-to-end accuracy. It seems to me the claim is more that DNS was not designed to handle these uses, and that if this is what we want to do, perhaps something should supplant DNS, or alternate methods should be used. For example, perhaps in the case of CDNs geographic optimisation should be in the province of routing (e.g. anycast) and not DNS? -- Alex -- Alex Balashov - Principal Evariste Systems Web : http://www.evaristesys.com/ Tel : (+1) (678) 954-0670 Direct : (+1) (678) 954-0671
Re: What DNS Is Not
Alex Balashov wrote: For example, perhaps in the case of CDNs geographic optimisation should be in the province of routing (e.g. anycast) and not DNS? -- Alex In most cases it already is. He completely fails to address the concept of Anycast DNS and assumes people are using statically mapped resolvers. He also assumes that DNS is some great expense and that by not allowing tons of caching we're taking money out of peoples' wallets. This is just not true with the exception of very few companies whose job it is to answer DNS requests. -Dave
Re: What DNS Is Not
On Nov 8, 2009, at 7:06 PM, Dave Temkin wrote: Alex Balashov wrote: For example, perhaps in the case of CDNs geographic optimisation should be in the province of routing (e.g. anycast) and not DNS? -- Alex In most cases it already is. He completely fails to address the concept of Anycast DNS and assumes people are using statically mapped resolvers. He also assumes that DNS is some great expense and that by not allowing tons of caching we're taking money out of peoples' wallets. This is just not true with the exception of very few companies whose job it is to answer DNS requests. This myth (that Paul mentions, not to suggest Dave T's comment is a myth) was debunked years ago: DNS Performance and the Effectiveness of Caching Jaeyeon Jung, Emil Sit, Hari Balakrishnan, and Robert Morris http://pdos.csail.mit.edu/papers/dns:ton.pdf Basically: Caching of NS records is important, particularly higher up in the hierarchy. Caching of A records is drastically less important - and, not mentioned in the article, the cost imposed by low-TTL A records is shared mostly by the client and the DNS provider, not some third party infrastructure. From the paper: Our trace-driven simulations yield two findings. First, reducing the TTLs of A records to as low as a few hundred seconds has little adverse effect on hit rates. Second, little benefit is obtained from sharing a forwarding DNS cache among more than 10 or 20 clients. This is consistent with the heavy-tailed nature of access to names. This suggests that the performance of DNS is not as dependent on aggressive caching as is commonly believed, and that the widespread use of dynamic low-TTL A-record bindings should not degrade DNS performance. The reasons for the scalability of DNS are due less to the hierarchical design of its name space or good A-record caching than seems to be widely believed; rather, the cacheability of NS records efficiently partition the name space and avoid overloading any single name server in the Internet. 
-Dave
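The paper's trace-driven finding - that lowering A-record TTLs to a few hundred seconds barely hurts hit rates - can be illustrated with a toy cache simulation. This uses a synthetic, heavy-tailed lookup trace (one popular name plus one-off names), not the MIT/KAIST traces from the paper:

```python
# Toy shared-DNS-cache simulation: replay a lookup trace and measure the
# hit rate as a function of A-record TTL. The trace is synthetic and the
# numbers are illustrative only.

def hit_rate(trace, ttl):
    cache = {}  # name -> expiry time of the cached answer
    hits = 0
    for t, name in trace:
        if cache.get(name, -1) > t:
            hits += 1
        cache[name] = t + ttl  # refresh (or insert) the binding
    return hits / len(trace)

# One lookup per second for 10 minutes: a popular name repeated every
# 10 s, interleaved with one-off names (the heavy tail of DNS access).
trace = []
for t in range(600):
    trace.append((t, "popular.example" if t % 10 == 0 else f"oneoff{t}.example"))

for ttl in (5, 300, 3600):
    print(ttl, round(hit_rate(trace, ttl), 2))
# Hit rate collapses only when TTL drops below the re-access interval;
# 300 s and 3600 s TTLs perform identically on this trace.
```

The one-off names never hit regardless of TTL, which is the paper's point: the tail dominates, so aggressive A-record caching buys less than commonly believed.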
Re: What DNS Is Not
DNS is NOT always defined by Paul... :) --bill On Sun, Nov 08, 2009 at 05:39:47PM -0500, Alex Balashov wrote: Thought-provoking article by Paul Vixie: http://queue.acm.org/detail.cfm?id=1647302 -- Alex Balashov - Principal Evariste Systems Web : http://www.evaristesys.com/ Tel : (+1) (678) 954-0670 Direct : (+1) (678) 954-0671
Re: What DNS Is Not
On Nov 8, 2009, at 7:30 PM, bmann...@vacation.karoshi.com wrote: On Sun, Nov 08, 2009 at 07:17:16PM -0500, David Andersen wrote: Our trace-driven simulations yield two findings. First, reducing the --- -Dave a simulation is driven from a mathematical model, not real world constructions. Hi, Bill - The paper is worth reading. The paper also presents the results of trace-driven simulations that explore the effect of varying TTLs and varying degrees of cache sharing on DNS cache hit rates. emphasis on *trace-driven*. Now, you can argue whether or not their traces are representative (whatever that means) -- they used client DNS and TCP connection traces from MIT and KAIST, so it definitely has a .edu bias, iff there is a bias in DNS traffic for universities vs. the real world, but to the extent that their traces represent what other groups of users might see, their evaluation seems accurate. -Dave
Re: What DNS Is Not
On Sun, Nov 8, 2009 at 6:06 PM, Dave Temkin dav...@gmail.com wrote: In most cases it already is. He completely fails to address the concept of Anycast DNS and assumes people are using statically mapped resolvers. He also assumes that DNS is some great expense and that by not allowing tons of caching we're taking money out of peoples' wallets. This is just not true with the exception of very few companies whose job it is to answer DNS requests. I don't know why Paul is so concerned, just think how many F root mirrors it helps him sell to unsuspecting saps. The Henry Ford analogy was amazingly apt, imagine ol' Henry coming back and claiming that automatic transmissions were a misuse of the automobile. Drive Slow ('cause someone left the door open at the old folks home)
Re: What DNS Is Not
On Nov 8, 2009, at 7:46 PM, bmann...@vacation.karoshi.com wrote: The paper also presents the results of trace-driven simulations that explore the effect of varying TTLs and varying degrees of cache sharing on DNS cache hit rates. I'm not debating the traces - I wonder about the simulation model. (and yes, I've read the paper) I'm happy to chat about this offline if it bores people, but I'm curious what you're wondering about. The method was pretty simple: - Record the TCP SYN/FIN packets and the DNS packets - For every SYN, figure out what name the computer had resolved to open a connection to this IP address - From the TTL of the DNS, figure out whether finding that binding would have required a DNS lookup There are some obvious potential sources of error - most particularly, name-based HTTP virtual hosting may break some of the assumptions in this - but I'd guess that with a somewhat smaller trace, not too much error is introduced by clients going to different name-based vhosts on the same IP address within a small amount of time. There are certainly some, but I'd be surprised if it was more than a %age of the accesses. Are there other methodological concerns? I'd also point out for this discussion two studies that looked at how accurately one can geo-map clients based on the IP address of their chosen DNS resolver. There are obviously potential pitfalls here (e.g., someone who travels and still uses their home resolver). In 2002: Z. M. Mao, C. D. Cranor, F. Douglis, and M. Rabinovich. A Precise and Efficient Evaluation of the Proximity between Web Clients and their Local DNS Servers. In Proc. USENIX Annual Technical Conference, Berkeley, CA, June 2002. Bottom line: It's ok but not great. We conclude that DNS is good for very coarse-grained server selection, since 64% of the associations belong to the same Autonomous System.
DNS is less useful for finer-grained server selection, since only 16% of the client and local DNS associations are in the same network-aware cluster [13] (based on BGP routing information from a wide set of routers) We did a wardriving study in Pittsburgh recently where we found that, of the access points we could connect to, 99% of them used their ISP's provided DNS server. Pretty good if your target is residential users: http://www.cs.cmu.edu/~dga/papers/han-imc2008-abstract.html (it's a small part of the paper in section 4.3). -Dave
Re: What DNS Is Not
Alex Balashov wrote: For example, perhaps in the case of CDNs geographic optimisation should be in the province of routing (e.g. anycast) and not DNS? -- Alex In most cases it already is. He completely fails to address the concept of Anycast DNS and assumes people are using statically mapped resolvers. I'm not sure that's a correct assumption. He also assumes that DNS is some great expense and that by not allowing tons of caching we're taking money out of peoples' wallets. This is just not true with the exception of very few companies whose job it is to answer DNS requests. It's kind of the same sort of thing that led to what is commonly called the Kaminsky vulnerability; the fact that it was predicted years before continues to be ignored. The reason that's relevant is because the resource consumption argument in question is the same one; in the last ten years, bandwidth, CPU, and memory resources have all moved by greater than an order of magnitude in a favorable direction for DNS operators. Paul's argument is best considered on an idealistic basis. For example, with the CDN stuff, people who muck with DNS should absolutely be aware of what Paul is saying; that does not mean that there aren't equally valid reasons to treat DNS in a different manner. The technical problems related to CDN-style use of DNS lookups are pretty well known and understood. The resource consumption issues are trivialized with the advent of high speed Internet, cheaper resources, etc. It doesn't make it idealistically *right*, but it means it is really much less damaging than ten or fifteen years ago. To classify NXDOMAIN mapping and CDN stupid DNS tricks in the same class of DNS lies is probably damaging to any debate. The former is evil for breaking a lot of things, the latter is only handing out varied answers for questions one should have the answer to.
It's the difference between being authorized to answer and just handing out answers that Paul objects to, and being unauthorized to answer and handing out answers that many people object to. My opinion is that it'd be better for Paul to avoid technical arguments that were weak even in the '90s to support his position. As it stands, people read outdated technical bits and say well, we know better, which trivializes the remaining technical and idealistic bits. That's damaging, because Paul's dead on about a lot of things. DNS is essentially the wrong level at which to be doing my web browser could not find X mapping; it'd be better to build this into web browsers instead. But that's a discussion and a half. :-) ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again. - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
Re: What DNS Is Not
On Sun, 8 Nov 2009, Alex Balashov wrote: For example, perhaps in the case of CDNs geographic optimisation should be in the province of routing (e.g. anycast) and not DNS? Well my first answer to that would be that GSLB scales down a lot further than anycast. And my first question would be what the load on the global routing system would be if a couple of thousand (say) extra sites started using anycast for their content? Each would have their own AS (perhaps reused from elsewhere in the company) and a small network or two. Routes would be added and withdrawn regularly and various stupid BGP tricks attempted with communities and prefixes. I heard some anti-spam people use DNS to distribute big databases of information. I bet Vixie would have nasty things to say to the guy who first thought that up. -- Simon Lyall | Very Busy | Web: http://www.darkmere.gen.nz/ To stay awake all night adds a day to your life - Stilgar | eMT.
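The anti-spam use Simon mentions is the DNSBL pattern: the database is keyed by reversed IP octets under a list zone, so lookups ride on ordinary DNS delegation and caching. A sketch of just the query-name construction (the zone name is a placeholder, and no network I/O is done):

```python
# Sketch: how a DNSBL encodes an IPv4 reputation lookup as a DNS query
# name. "dnsbl.example" is a hypothetical zone, not a real list.

def dnsbl_query_name(ip, zone="dnsbl.example"):
    octets = ip.split(".")
    # Octets are reversed so that delegation can follow the address
    # hierarchy, mirroring in-addr.arpa reverse DNS.
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("192.0.2.99"))  # 99.2.0.192.dnsbl.example
```

A listed address typically answers with an A record in 127.0.0.0/8; an unlisted one returns NXDOMAIN, so the DNS cache absorbs repeat queries either way.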
Re: What DNS Is Not
On 2009-11-09, at 10:35, Simon Lyall wrote: And my first question would be what would the load on the global routing system if a couple of thousand (say) extra sites started using anycast for their content? Are you asking what the impact would be of a couple of thousand extra routes in the current full table of ~250,000? That sounds like noise to me (the number, not your question :-) Joe
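Joe's "noise" characterization is easy to check with the round numbers from the thread itself (both figures are the ones cited above, not measurements):

```python
extra_routes = 2_000    # Simon's hypothetical anycast content sites
full_table = 250_000    # Joe's approximate size of the current full table

growth = extra_routes / full_table
print(f"{growth:.1%} table growth")  # 0.8% table growth
```

Under one percent of the table, i.e. well inside the ordinary month-to-month churn of the DFZ, though this says nothing about the update/withdrawal load those routes would generate, which was the other half of Simon's question.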
Re: What DNS Is Not
DNS is NOT always defined by Paul... :) I agree, Bill, but Paul is right on the money about how the DNS is being misused and abused to create more smoke and mirrors in the domain name biz. I really find it annoying that some ISPs (several large ones among them) are still tampering with DNS responses just to put a few more coins in their coffers from click-through advertising. What I'm really afraid of is that all the buzz and $$ from the domain biz will create strong resistance to any effort to develop a real directory service or a better scheme for resource naming and location. BTW, simulations != real world. Cheers Jorge
Re: Congress may require ISPs to block fraud sites H.R.3817
If you're a consumer broadband provider, and you use a DNS blackhole list so that any of your subscribers who tries to reach bigbank1.fakebanks.example.com gets redirected to fakebankwebsitelist.sipc.gov, you might be able to claim that you complied with the law, though the law's aggressive enough that it could be argued otherwise. If you're a transit ISP providing upstream bandwidth to the broadband provider, and some packets are addressed to 1.1.1.257, which is the IP address of a hosting site in Elbonia that carries bigbank1.fakebanks.example.com and innocent.bystander.example.com, the fact that the broadband ISP was using a DNS blackhole list doesn't protect you, because you're still routing packets to 1.1.0.0/16. You could set up a /32 route to send that traffic to null0, censoring innocent.bystander.example.com, or you could get fancy and route it to some squid proxy that cleans up the traffic. But of course the phisher could be using fast-flux, so 5 minutes later that trick no longer works, and by tomorrow the 100,000 phishing websites on the list have added 1,000,000 routes to your peering routers... Not pleasant, but you don't really have much alternative. -- Thanks; Bill Note that this isn't my regular email account - It's still experimental so far. And Google probably logs and indexes everything you send it.
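Bill's route-explosion figure is back-of-the-envelope but easy to reproduce. A minimal sketch (the ten-addresses-per-day flux rate is an assumption chosen here to match the message's 1,000,000 figure, not a measured number):

```python
def blackhole_route_growth(listed_sites, ips_per_site_per_day):
    """Rough count of /32 null routes accumulated in a day if every
    fast-flux IP rotation forces a fresh host route onto the routers.

    Assumes the blackhole list tracks each site's current address and
    old host routes aren't garbage-collected within the day.
    """
    return listed_sites * ips_per_site_per_day

# 100,000 listed phishing sites, each fluxing through ~10 addresses
# a day (assumed rate), yields the million host routes Bill describes.
print(blackhole_route_growth(100_000, 10))  # 1000000
```

That's four times the size of the entire global table cited elsewhere in this digest, pushed as /32 churn onto peering routers, which is why DNS-layer blocking, for all its faults, looks attractive compared to IP-layer blocking for a transit network.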
[NANOG-announce] Communications Committee members
Kris Foster and Michael K. Smith have been chosen to fill two year terms on the Communications Committee (formerly known as the Mailing List Committee.) They join Randy Epstein and Tim Yocum, who are starting the second year of their terms, and Sue Joiner, who is the Merit appointee to the CC. The Steering Committee would also like to thank Simon Lyall for his service on the MLC. His diligence and attention to refining and documenting the committee's processes was greatly appreciated. For the SC, Steve Feldman (chair) ___ NANOG-announce mailing list nanog-annou...@nanog.org http://mailman.nanog.org/mailman/listinfo/nanog-announce
Re: What DNS Is Not
On Sun, Nov 8, 2009 at 9:35 PM, David Conrad d...@virtualized.org wrote: On Nov 8, 2009, at 4:59 PM, David Andersen wrote: Z. M. Mao, C. D. Cranor, F. Douglis, and M. Rabinovich. A Precise and Efficient Evaluation of the Proximity between Web Clients and their Local DNS Servers. In Proc. USENIX Annual Technical Conference, Berkeley, CA, June 2002. Given that paper is 7 years old and the Internet has changed a bit since 2002 (and the DNS looks to change somewhat drastically in the relatively near future), it might be dangerous to rely too much on their results. This might be an interesting area of additional research... Well, the marketing folks have sure taken advantage of it. It would be nice to see the technology folks... not just lie there and take it. - ferg -- Fergie, a.k.a. Paul Ferguson Engineering Architecture for the Internet fergdawgster(at)gmail.com ferg's tech blog: http://fergdawg.blogspot.com/