as-set members
hello

i have an as-set that has some members which are other as-sets. can i exclude some members from my as-set's members?

as-set: me
members: as-set-1, as-set-2, as-set-3

as-set-3 has some members that i want to exclude; let's say as-set-xxx is a member of as-set-3. is there something like

members: as-set-1, as-set-2, as-set-3 and not as-set-xxx

? thanks
Re: as-set members
Hi Bogdan,

If you are on Cisco, you can accomplish this using the attribute-map argument to the as-set statement. On Juniper, this is fairly easy to accomplish with routing policy (learning regex will make your life easier). HTH.

Stefan (sorry for the top post, I'm on my mobile...)

----- Reply message -----
From: Bogdan shos...@shoshon.ro
Date: Sat, Apr 2, 2011 7:32 am
Subject: as-set members
To: nanog@nanog.org
Re: as-set members
hi

i am using cisco and RtConfig.

On 02.04.2011 15:47, Stefan Fouant wrote:
> If you are on Cisco, you can accomplish this using the attribute-map argument to the as-set statement. On Juniper, this is fairly easy to accomplish with routing policy.
Re: as-set members
On 02/04/2011 12:32, Bogdan wrote:
> as-set-3 has some members that i want to exclude; let's say as-set-xxx is a member of as-set-3. is there something like members: as-set-1, as-set-2, as-set-3 and not as-set-xxx?

No, you can't do this in an as-set definition. What you can do is specify it in your routing policy definition in your aut-num object. So you could say, for example:

import: from AS65234 accept AS-ME and not AS-SET-XXX

Nick
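To put Nick's example filter in context, here is a sketch of a complete aut-num object in RPSL (per RFC 2622). AS65000, the as-name, descr, and maintainer are placeholders for illustration, not real registry objects; AS65234 and the set names are the examples used in this thread:

```rpsl
aut-num:    AS65000              # hypothetical local ASN
as-name:    EXAMPLE-AS
descr:      Example network
import:     from AS65234 accept AS-ME AND NOT AS-SET-XXX
export:     to AS65234 announce AS-ME
mnt-by:     MAINT-EXAMPLE       # placeholder maintainer
source:     RIPE                # or whichever IRR database holds the object
```

Tools such as RtConfig evaluate the import/export policy expressions, so the exclusion happens at policy-evaluation time even though the as-set object itself cannot express "members minus X".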
Re: as-set members
On 02.04.2011 19:41, Nick Hilliard wrote:
> No, you can't do this in an as-set definition. What you can do is specify it in your routing policy definition in your aut-num object.

got it, thanks
Re: Embratel or GVT contact
Hello,

Regarding Embratel contacts, you can use: b...@embratel.net.br and netad...@embratel.net.br.

Regards,
Alessandro Martins

On Fri, Apr 1, 2011 at 21:14, Mark Wall ospfisi...@gmail.com wrote:
> Is there anyone from GVT or Embratel with a clue willing to help me with a BGP route issue we are seeing? Thanks
State of QoS peering in Nanog
Folks,

The Canadian telecommunications regulator, the CRTC, has just launched a public notice with possible worldwide implications IMHO, Telecom Notice of Consultation CRTC 2011-206:

http://www.crtc.gc.ca/eng/archive/2011/2011-206.htm

I think this is the very first regulatory inquiry into IP-to-IP interconnection for PSTN local interconnection.

One of the postulates that I intend to defend is that in the PSTN today, in addition to interconnecting for the purpose of exchanging voice calls, it is possible to LOCALLY (at the Local Interconnection Region, roughly a US LATA) interconnect with guaranteed QoS for ISDN video conferencing. In other words, there is more to PSTN interconnection than support of the G.711 CODEC; other CODECs are supported, such as H.320.

This brings me to a point: why should we lose this important feature of the PSTN, support for multiple CODECs, as we carelessly reduce IP-to-IP interconnection to G.711 only? Video conferencing on the Internet, particularly at high resolution, is not a reality today, to say the least, let alone guessing what the future will hold. Why not consider HD audio?

Therefore:

A) I want to capture all instances where this issue has been addressed worldwide.

B) I also want to understand what is going on, insofar as enabling guaranteed QoS peering across BGP-4 interconnections in the NANOG community.

C) I also want to understand whether there is inter-service-provider RSVP or other per-session QoS establishment protocols.

I call upon the NANOG community to consider this proceeding as very important and to contribute to this thread. I will try to provide a forum for discussing this outside of NANOG when required.

Regards,
-=Francois=-
Re: Ping - APAC Region
On Tue, Mar 29, 2011 at 11:17 AM, Matthew Palmer mpal...@hezmatt.org wrote:
> On Tue, Mar 29, 2011 at 06:33:07PM +0100, Robert Lusby wrote:
>> Looking at hosting some servers in Hong Kong, to serve the APAC region. Our client is worried that this may slow things down in their Australia region, and are wondering whether hosting the servers in an Australian data-centre would be a better option. Does anyone have any statistics on this?
>
> No formal statistics, just a lot of experience. You may be unsurprised to learn that serving into Australia from outside Australia is slower than serving from within Australia. That being said, there's a fair bit less distance for the light to travel from Hong Kong or anywhere in the region than from the US.

The bulk of Australia's population density is on the eastern coast near Sydney, and the *only* fiber path going anywhere near Asia from Sydney does so via Guam. The light path traveled from Sydney to Guam to La Union (PH) to Hong Kong isn't appreciably shorter than the light path from Sydney to Hawaii to the US, and the US path is covered by roughly 6x as many fiber runs as the Guam pathway, making it somewhat cheaper to get onto. So you might as well host on the west coast of the US as in Hong Kong.
If I look at average data for the past five years between Sydney and Hong Kong, San Jose, Singapore, and Los Angeles, on average it's better to serve Sydney from Los Angeles than from Hong Kong or Singapore:

mpetach@netops:/home/mrtg/public_html/performance> ~/tmp/avgperf.pl AUE HKI
total daily data files read: 1559
AUE to HKI latency (min/avg/max): 134.216/173.273/1052.158
mpetach@netops:/home/mrtg/public_html/performance> ~/tmp/avgperf.pl AUE SJC
total daily data files read: 1558
AUE to SJC latency (min/avg/max): 149.829/176.674/308.637
mpetach@netops:/home/mrtg/public_html/performance> ~/tmp/avgperf.pl AUE SG1
total daily data files read: 1558
AUE to SG1 latency (min/avg/max): 101.871/204.485/999
mpetach@netops:/home/mrtg/public_html/performance> ~/tmp/avgperf.pl AUE LAX
total daily data files read: 931
AUE to LAX latency (min/avg/max): 157.603/166.720/999

> That is predicated on having good direct links, which is eye-wateringly expensive if you're used to US data costs (data going from China to Australia via San Jose... aaargh). Then again, hosting within Australia is similarly expensive, so splitting your presence isn't going to help you any from a cost PoV.

It's not really a matter of eye-wateringly expensive, so much as simple basic existence: there's no direct Sydney to southern Asia fiber at the moment; the best you can do is hop through Papua New Guinea to Guam, and then back across into southern Asia (or overshoot up to Japan, and then bounce your way back down from there).

Matt
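The light-path argument can be sanity-checked with a back-of-the-envelope calculation. This is a rough sketch: the path lengths below are illustrative guesses, not measured cable routes, and light in fiber propagates at roughly 2/3 of c, i.e. about 200 km per millisecond:

```python
# Back-of-the-envelope one-way fiber latency, assuming light propagates in
# glass at roughly 2/3 c (about 200 km per millisecond), and ignoring
# router, serialization, and terrestrial-backhaul delay.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def one_way_ms(path_km):
    """Propagation delay in milliseconds over a fiber path of path_km km."""
    return path_km / SPEED_IN_FIBER_KM_PER_MS

# Hypothetical path lengths in km (illustrative guesses, not measured routes):
paths = {
    "SYD -> Guam -> La Union -> HKG": 9500,
    "SYD -> HNL -> LAX": 12000,
}

for name, km in paths.items():
    rtt = 2 * one_way_ms(km)
    print(f"{name}: ~{one_way_ms(km):.0f} ms one-way, ~{rtt:.0f} ms RTT")
```

The propagation floors for the two routes come out in the same rough range, which is the point Matt is making: measured averages (like the 166-204 ms RTT figures above) are higher because real cable routes are longer than great-circle distance and equipment/queuing delay adds up, but the Guam detour erases most of Hong Kong's geographic advantage.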
Re: State of QoS peering in Nanog
In a message written on Sat, Apr 02, 2011 at 04:00:30PM -0400, Francois Menard wrote:
> One of the postulates that I intend to defend, is that in the PSTN today, in addition to interconnecting for the purpose of exchanging voice calls, it is possible to LOCALLY (at the Local Interconnection Region, roughly a US LATA) interconnect with guaranteed QoS for ISDN video conferencing.

The PSTN features fixed, known bandwidth. QoS isn't really the right term. When I nail up a BRI, I know I have 128kb of bandwidth, never more, never less. There is no function on that channel similar to IP QoS.

When talking about IP QoS, people like to talk about guaranteed or reserved bandwidth for particular applications. The reality, though, is that's not how IP QoS works. IP QoS is really about identifying which traffic can be thrown away first in the face of congestion. Guaranteeing 128kb for a video call really means making sure all other traffic is thrown away first, in the face of congestion.

> In other words, there is more to PSTN interconnection than the support of the G.711 CODEC. Other CODECs are supported, such as H.320.

IP networks can't tell the difference between G.711, H.320, and the SMTP packets used to deliver this e-mail. IP networks know nothing about CODECs, and operate entirely on IP address and port information.

> B) I also want to understand what is going on, insofar as enabling guaranteed QoS peering across BGP-4 interconnections in the Nanog community.

You're looking at the wrong point in the network. In my experience, full peering circuits are very much the exception, not the rule. While almost all the exceptions hit NANOG and are the subject of fun and lively discussion, the reality is they are rare. When there is no congestion, there is no reason to drop a packet.
A QoS policy would go unused; or, if you want to look at it from the other direction, everything has 100% bandwidth across that link.

In an IP network, the bandwidth constraints are almost always across an administrative boundary. This means in the majority of cases across transit circuits, not peering. 80-90% of the packet loss in the network happens at the end-user access port, inbound or outbound. Another 5-10% occurs where regional or non-transit-free providers buy transit. Lastly, 3-5% occurs where there are geographic or geopolitical issues (oceans to cross, country borders with restrictive governments to cross).

Basically, you could mandate QoS on every peering link in the Internet and I suspect 99% of the end users would never notice any change. If you want to advocate for useful changes that give end users a better network experience, you need to focus your efforts in three areas:

1) Fight bufferbloat.
   http://en.wikipedia.org/wiki/Bufferbloat
   http://arstechnica.com/tech-policy/news/2011/01/understanding-bufferbloat-and-the-network-buffer-arms-race.ars
   http://www.bufferbloat.net/

2) Get access ISPs to offer QoS on customer access ports, ideally in some user-configurable way.

3) Get ISPs who purchase transit further up the line to implement QoS with their transit provider for their customers' traffic, if they are going to run those links at full capacity.

-- 
Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
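Leo's framing (QoS decides which traffic is thrown away first, not which traffic is "guaranteed") can be sketched as a toy strict-priority scheduler. Class names and numbers here are invented for illustration:

```python
def drain(link_capacity, offered):
    """Serve traffic classes in strict priority order; drop whatever exceeds
    link capacity. `offered` maps class name -> units offered this interval,
    ordered highest priority first. Returns (served, dropped) per class."""
    remaining = link_capacity
    served, dropped = {}, {}
    for cls, units in offered.items():
        s = min(units, remaining)     # high priority is served first...
        served[cls] = s
        dropped[cls] = units - s      # ...so lower classes absorb the loss
        remaining -= s
    return served, dropped

# A 100-unit link under congestion: voice is "guaranteed" only in the sense
# that best-effort traffic is thrown away first.
served, dropped = drain(100, {"voice": 20, "best-effort": 120})
print(served)   # {'voice': 20, 'best-effort': 80}
print(dropped)  # {'voice': 0, 'best-effort': 40}
```

On an uncongested link (`offered` totals under capacity), `dropped` is zero for every class, which is Leo's point about peering circuits: with no congestion, the policy goes unused.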
Re: State of QoS peering in Nanog
On Sat, Apr 2, 2011 at 5:56 PM, Leo Bicknell bickn...@ufp.org wrote:
> The PSTN features fixed, known bandwidth. QoS isn't really the right term. When I nail up a BRI, I know I have 128kb of bandwidth, never more, never less. There is no function on that channel similar to IP QoS.

The PSTN also has exactly one unidirectional flow per access port. This is not true of IP networks, where an end-user access port may have dozens of flows going at once for common web browsing, and perhaps hundreds of flows when using P2P file-sharing applications, etc. The lifetime of these flows may be several hours (streaming movie) or under a second (web browser). Where the PSTN has channels between two access ports (which might be packetized within the backbone) and a relatively complex control plane for establishing flows, the IP network has little or no knowledge of flows; and if it does have any knowledge of them, it's not because a control plane exists to establish them, it's because punting from the data plane to the control plane allows flow state to be established for things like NAT.

> Basically, you could mandate QoS on every peering link in the Internet and I suspect 99% of the end users would never notice any change.

I don't agree with this. IMO all DDoS traffic would suddenly be marked into the highest-priority forwarding class that doesn't have an absurdly low policer for the DDoS source's access port, and as a result, DDoS would more easily cripple the network, either by hitting policers on the higher-priority traffic and killing streaming movies/VoIP/etc., or, in the absence of policers, by more easily causing significant packet loss to best-effort traffic. I think end users would notice, because their ISP would suddenly grind to a halt any time a clever DDoS was directed their way. We will no sooner see a practical solution to this than we will one for large-scale multicast in backbone and subscriber access networks.
The limitations are similar: to be effective, you need a lot more state for multicast. For a truly good QoS implementation, you need a lot more hardware counters and policers (more state). If you don't have this, all your QoS setup will do, deployed across a large Internet subscriber access network, is work a little better under ideal conditions, and probably a lot worse when subjected to malicious traffic.

> 2) Get access ISPs to offer QoS on customer access ports, ideally in some user configurable way.

I do agree that QoS should be available to end users across access links, but I don't agree with pushing it further towards the core unless per-subscriber policers are available beyond those on access routers. Otherwise, all someone has to do to be mean to Netflix is send a short-term, high-volume DoS attack that looks like Netflix traffic towards an end-user IP, which would interrupt movie-viewing for a potentially larger number of users, or at least as many end users as the same DoS would in the absence of any QoS. The case of per-subscriber policers pushed further towards the ISP core fares better.

-- 
Jeff S Wheeler j...@inconcepts.biz
Sr Network Operator / Innovative Network Concepts
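Jeff's policer concern can be sketched numerically: once attack traffic is marked into the same forwarding class as legitimate priority traffic, both share the class policer, and the legitimate flows see the loss. The rates here are arbitrary illustrative units, not measurements:

```python
def police(offered, policer_rate):
    """Steady-state view of a per-class policer: admit up to policer_rate
    units of the priority class, drop the excess. (A real policer would be
    a token bucket; this is the long-run average behavior.)"""
    admitted = min(offered, policer_rate)
    return admitted, offered - admitted

LEGIT, POLICER, ATTACK = 20, 30, 500

# Legitimate priority traffic alone fits comfortably under the policer:
print(police(LEGIT, POLICER))  # (20, 0)

# A DDoS marked into the same class shares the policer with legit flows.
# Drops fall on the class as a whole, so legit traffic gets only its
# proportional share of what is admitted:
admitted, dropped = police(LEGIT + ATTACK, POLICER)
legit_delivered = admitted * LEGIT / (LEGIT + ATTACK)
print(admitted, dropped, round(legit_delivered, 2))
```

With these numbers, only about 1 unit of the 20 units of legitimate priority traffic survives the policer, i.e. the "protected" class fares worse than best effort would have. Per-subscriber (rather than per-class) policers avoid this sharing, which is why Jeff says that case fares better.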
Re: HIJACKED: 159.223.0.0/16 -- WTF? Does anybody care?
I may regret wading into this one...

Regarding posting from a Gmail account: I'm also posting from a non-work account, for two reasons. One, our company policy is to tag an annoying legal disclaimer onto every outbound message, and two, I don't want anything I say on this list to come back on the company I work for. I'm not authorized to speak for them, so I won't.

When it comes to abuse complaints, we investigate and act to protect our customers and our network when we determine that abuse is indeed happening. Only after we deal with the immediate threat do we contact our customer to let them know. Although there are cases of intentional abuse, the majority of the time the customer has no idea what we're talking about. They have to get their tech people or an outside support company to look into the problem, and then they call us back when they have it fixed. Sometimes we work directly with their tech people to help them identify the source.

We would NEVER out the customer to the public, even if we felt the abuse was intentional. My CEO and our lawyers would blow a gasket if we were to potentially libel a customer. There have been plenty of times when I was every bit as frustrated as some of the people on this list, but to start naming names without proof? Won't happen.

Jason

On 4/1/2011 11:31 AM, Atticus wrote:
> Please note, I'm not arguing against fixing the problem. I just think we should show each other some professional respect, and use some manners.
Re: Ping - APAC Region
Also remember, you would be serving Australia only from Australia. If I'm not mistaken, the Australian backbone is more or less volume-charged...

http://www.aarnet.edu.au/services/aarnet-charging.aspx

"AARNet3 charges are different for Shareholders (Members) and for Non-Shareholders (Associates and Affiliates). Billing: On Net and Off Net subscriptions are calculated in October each year, and invoices must be delivered soon after to allow sufficient time for customers to pay in advance for the following calendar year. For those invoices not paid in full and in advance, On Net and Off Net Subscriptions, and Access Charges are invoiced by quarter and in advance. All Usage charges, including Excess Traffic, are invoiced retrospectively after each quarter."

On 4/3/11 9:40, Matthew Petach mpet...@netflight.com wrote:
> Given that the bulk of the population density in Australia is on the eastern coast near Sydney ... you might as well host on the west coast of the US as in Hong Kong.
Re: Ping - APAC Region
On 03/04/2011, at 8:42 AM, Franck Martin wrote:
> Also remember, you would be serving Australia only from Australia. If I'm not mistaken, the Australian backbone is more or less volume-charged...

AARNet is the Academic and Research Network; it's not THE backbone. (Note: in previous incarnations many years ago it was.)

Australia is an island approximately the same size as the continental USA, but with only about 22M people. It's not really on the way to anywhere, so the submarine capacity is pretty much limited to what is needed to serve Australia. There exist various submarine cables which go north to Guam and beyond (AJC/PPC1) and east from Sydney (SCCN, Endeavour), as well as SMW3 from Perth to Singapore. SMW3 is a great path into Singapore, except it's old and capacity is limited. Another cable is meant to be built on that path - many people have tried; let's hope the next attempt will work. Connecting these, we have really only four sets of land-based networks (Telstra, Optus, AAPT, NextGen - not all of these have complete coverage and/or rely on others for redundancy). We're very like Canada in some ways - a small population along an edge (for Canada it's the US border; we're along the southern and eastern coasts).

Various providers have capacity on different sets of cables. It's difficult to generalise as, for instance, some providers use the cable into Asia to give business customers good connectivity but don't extend that to residential customers. The kinds of connectivity at the end of those cables vary as well.

If you want to get content into Australia, then generally, to get the best delivery:

a) Put it on the West Coast of the USA - LA or San Jose - everyone has good connectivity to those places. Look for places you can easily get content into AS4637, AS7473/7474, AS4826 and AS4739. AS4648 for NZ and some of AU as well. (AS4739 will peer with you there :-) (*)

b) Deliver it domestically in Australia, in Sydney. Equinix Sydney is a good place to start.
You can get domestic transit there as well as good peering to most providers. It's also close to the large population centres on the East Coast (SYD, MEL).

c) Failing that - try Japan first, then Hong Kong, then Singapore. But you will need to combine this with a) or b) to give good connectivity to all providers.

Consider various acceleration options like CDNs - especially LLNW, AKAM and EdgeCast, who all have delivery capability in AU already.

If anyone has any specific AU questions then I'm happy to try and answer off list. (I work for AS4739 and am responsible for peering and transit, so I have a reasonable interest in delivery of content to customers in AU - we're keen to have GOOD connectivity.)

(*) AS4637 has AS1221 behind it; AS7473 has AS7474 (their customers are in AS4804) - together they have around 50% of the market in terms of traffic delivered to the AU market. Tools like peeringdb.com and bgp.he.net will tell you how everyone's connected.

MMC
-- 
Matthew Moyle-Croft
Peering Manager and Team Lead - Commercial and DSLAMs
Internode/Agile
Level 5, 150 Grenfell Street, Adelaide, SA 5000 Australia
Email: m...@internode.com.au  Web: http://www.on.net/
Re: State of QoS peering in Nanog
In a message written on Sat, Apr 02, 2011 at 07:00:52PM -0400, Jeff Wheeler wrote:
> I don't agree with this. IMO all DDoS traffic would suddenly be marked into the highest-priority forwarding class that doesn't have an absurdly low policer for the DDoS source's access port, and as a result, DDoS would more easily cripple the network, either by hitting policers on the higher-priority traffic and killing streaming movies/VoIP/etc., or, in the absence of policers, by more easily causing significant packet loss to best-effort traffic.

Agree in part, and disagree in part. No doubt DDoS programs will try to masquerade as high-priority traffic. This will create a new set of problems, and require some new solutions. Let's separate the problem into two parts.

The first is best-effort traffic. Provided the QoS policy only prioritizes a fraction of the bandwidth (20% to maybe 40%), the impact of a DDoS that came in prioritized would only be a few percentage points worse than a standard DDoS. Today it takes about 10x link speed to make a link completely unusable (although YMMV, and it depends a lot on your traffic mix and definition of unusable). With a 25% priority queue and the DDoS hitting it, that may drop to 8x. I think it is statistically interesting, but also relatively minor.

The second problem is what happens to priority traffic. You are correct that if DDoS traffic can come in prioritized, then you only need to fill the priority queue 2x-4x to generate issues (as streaming traffic is more sensitive), assuming traffic over the limit is not dropped but rather allowed best effort. This is likely a lower threshold than filling the entire link 5x-10x, and thus easier for the attacker. But it also only affects priority-queue traffic. I realize I'm making a value judgment, but many customers under DDoS would find things vastly improved if their video conferencing went down but everything else continued to work (if slowly), compared to today, when everything goes down.
In closing, I want to push folks back to the bufferbloat issue, though. More than once I've been asked to configure QoS on the network to support VoIP, video conferencing or the like. These things were deployed and failed to work properly. I went into the network and _reduced_ the buffer sizes, and _increased_ packet drops. Magically these applications worked fine, with no QoS. Video conferencing can tolerate a 1% packet drop, but can't tolerate a 4-second buffer delay. Many people today who want QoS are actually suffering from bufferbloat. :(

This is very hard to explain; while people on NANOG might get it, 99% of the network engineers in the world think minimizing packet loss is the goal. It is very much an uphill battle to make them understand that higher packet loss often _increases_ end-user performance on full links.

-- 
Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
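Leo's bufferbloat observation can be illustrated with a toy FIFO tail-drop queue: on a persistently overloaded link, a large buffer converts the overload into seconds of standing delay, while a small buffer converts it into a modestly higher drop rate. The rates and buffer sizes below are invented for illustration:

```python
def simulate(arrival_rate, service_rate, buffer_pkts, seconds):
    """One-second ticks: add arrivals, tail-drop anything past the buffer
    limit, then drain at link speed. Returns (standing queueing delay in
    seconds, overall drop fraction)."""
    q = dropped = arrived = max_q = 0
    for _ in range(seconds):
        arrived += arrival_rate
        q += arrival_rate
        if q > buffer_pkts:            # tail drop: buffer is full
            dropped += q - buffer_pkts
            q = buffer_pkts
        q = max(0, q - service_rate)   # drain at link speed
        max_q = max(max_q, q)
    return max_q / service_rate, dropped / arrived

# Link 5% overloaded: 105 pkt/s offered, 100 pkt/s served, for 100 s.
# Bloated buffer (4 seconds' worth): barely 2% loss, but every packet
# eventually sits behind a 3-second standing queue.
print(simulate(105, 100, 400, 100))

# Small buffer: a couple of percentage points more loss, but queueing
# delay stays around 200 ms, so interactive traffic keeps working.
print(simulate(105, 100, 120, 100))
```

This is the trade Leo describes: the big-buffer run "minimizes packet loss" and breaks video conferencing with delay; the small-buffer run drops more packets and the applications work.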