[j-nsp] Inhibiting external announcements of routes for which a larger announcement exists
Hi, apologies for the bad subject line - couldn't think of a way to condense my question into one line. Let me explain what I'm trying to do: I've got 87.238.32/19 allocated from the RIPE NCC, and I intend to split it between our existing Norwegian site and our up-and-coming Swedish one. Most likely I will leave 87.238.32/20 and 87.238.48/21 for Norway, and have 87.238.56/21 for Sweden. We'll have BGP speakers with transit providers and IX connections in both countries.

When everything is working fine I'd like to announce the /19 in both places, as the link between the sites should be high-speed enough to handle the traffic. However, should the link between the two sites fail, I'd like to immediately stop announcing the /19 and instead start announcing the Norwegian /20+/21 in Norway and the Swedish /21 in Sweden, so that traffic destined for Norway won't enter my AS in Sweden and vice versa. I don't expect the backup link between the sites to be fast enough to support that kind of traffic.

It should be simple enough to accomplish this by creating aggregate routes for the /20 and the /21s on the routers in their respective countries, plus a /19 in both places that needs all the /20s and /21s as active contributing routes. However, that means that in the normal situation I'll announce the /19 _and_ the longer prefixes at the same time, and I'd rather not pollute the global routing table with superfluous prefixes unless necessary (i.e. when the link between the countries goes down). I want a setup that inhibits the announcement of the /20 and /21s to external neighbours if (and only if) the /19 is also announced to them at the same time. Is that possible?

Regards,
--
Tore Anderson
___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
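One way to approach this on Junos (a sketch, not verified config: it assumes a Junos release that supports policy conditions with if-route-exists, and the condition/policy names are made up) is to reject the more-specific aggregates on eBGP export whenever the /19 aggregate is active in inet.0. When the inter-site link dies and the /19 loses its contributing routes, the condition evaluates false and the more-specifics get announced instead:

```
policy-options {
    /* true only while the /19 aggregate is active in inet.0 */
    condition v19-active {
        if-route-exists {
            87.238.32.0/19;
            table inet.0;
        }
    }
    policy-statement ebgp-export {
        /* suppress the /20 and /21s while the /19 is up */
        term suppress-specifics {
            from {
                route-filter 87.238.32.0/19 longer;
                condition v19-active;
            }
            then reject;
        }
        term announce-aggregates {
            from protocol aggregate;
            then accept;
        }
    }
}
```

Applied with something like `set protocols bgp group ebgp-peers export ebgp-export`. Note that `longer` matches only prefixes strictly more specific than the /19, so the /19 itself falls through to the announce-aggregates term.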
Re: [j-nsp] LDP/RSVP interop
Hey Richard,

I had raised 101569 for the bypass bouncing after a bandwidth-related resignal, and was told by DE this was expected behavior. At the time the explanation made sense: if a bypass m is protecting LSP n, and LSP n is torn down, for any reason and in any manner (make-before-break or not), then the bypass shares the same fate, as for some time there is no LSP n for bypass m to protect.

I also see the ~60 second delay to resignal the bypass, and also felt that was per design, as we want to make sure the new LSP is up/stable before we go through the bypass bother. Perhaps an ER can be filed if you feel this is unacceptable.

Regards

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Richard A Steenbergen
Sent: Sunday, September 28, 2008 6:54 PM
To: Mark Tinka
Cc: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] LDP/RSVP interop

On Mon, Sep 29, 2008 at 07:34:20AM +0800, Mark Tinka wrote:
> On Monday 29 September 2008 04:08:49 David Ball wrote:
> > You can certainly use both LDP and RSVP on the Juniper box with no problems. RSVP routes in inet.3 are preferred over LDP routes, so LDP can actually act as a bit of a backup in case you had a massive RSVP failure. I'm doing this in my network. Can't speak for the Cisco.
>
> Inter-op for LDP and RSVP between C and J works fine. Do take some time to look through these threads on the subject, though:
>
> http://puck.nether.net/pipermail/juniper-nsp/2008-June/010549.html
> http://puck.nether.net/pipermail/juniper-nsp/2008-May/010395.html
> http://puck.nether.net/pipermail/juniper-nsp/2008-April/010247.html

On the Cisco side, note that it is generally recommended to tunnel LDP within RSVP, particularly if you're deploying l3vpn's. This is done by configuring 'mpls ip' on the RSVP tunnel interface.

Another one to add to the list of things you should consider when working with mixed Juniper/Cisco MPLS is:

set protocols isis|ospf traffic-engineering ignore-lsp-metrics

For an LSP with the head on Juniper and tail on Cisco, this works around Cisco's inability to set an IGP cost of 0 on its loopback interface. If you ever wondered why the #$%^ your LSP cost was 1 higher than your IGP cost, now you know. :)

On the subject of Juniper MPLS, has anyone else noticed that every time you resignal an LSP, even with make-before-break, any associated bypass LSPs are also torn down and take around 50 secs to come back up? I specifically noticed this when auto-bandwidth runs and updates the bw reservation in RSVP, which triggers a resignaling. If you have quick auto-bw timers, this leaves a fairly substantial amount of time during which a noticeable percentage of your LSPs are not protected, which I consider a bad thing. Since in the case above your path isn't actually changing (the only reason you're resignaling is a bw update), and the bypass LSP is 0 bandwidth anyway, shouldn't it be possible to detect this and not tear down the bypass? The 50 sec delay seems a little excessive too, and I can't find a cause or knob to speed it up.

--
Richard A Steenbergen [EMAIL PROTECTED] http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
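For reference, the set command quoted above corresponds to this stanza (IS-IS shown; the equivalent knob exists under ospf):

```
protocols {
    isis {
        traffic-engineering {
            /* exclude RSVP LSP metrics from the IGP's SPF calculation,
               avoiding the off-by-one cost described above */
            ignore-lsp-metrics;
        }
    }
}
```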
Re: [j-nsp] LDP/RSVP interop
On Mon, Sep 29, 2008 at 08:41:42AM -0700, Harry Reynolds wrote:
> Hey Richard,
>
> I had raised 101569 for the bypass bouncing after bandwidth related resignal, and was told by DE this was expected behavior. At the time the explanation made sense. If a bypass m is protecting lsp n, and lsp n is torn down, for any reason and in any manner (make before break or not), then the bypass shares the same fate, as for some time there is no lsp n for bypass m to protect.

I agree it is the expected behavior as it is currently designed; I just think there is a possible optimization which can avoid tearing down the bypass LSP unnecessarily. Clearly it depends why you needed to tear down the LSP in the first place, since an error, preemption, or path change is a different animal from an auto-bw adjustment. And a pox on whoever designed it so you need to resignal to adjust the bw resv. :)

I don't see why it wouldn't be safe to say: if you are the head of the tunnel, and you've just done a make-before-break resignal to update a bandwidth reservation, and you've seen no change in the forwarding path on the new LSP, then avoid tearing down the bypass.

> I also see the ~60 second delay to resignal the bypass, and also felt that was per design as we want to make sure the new lsp is up/stable before we go through the bypass bother. Perhaps an ER can be filed if you feel this is unacceptable.

I know a few other people who have been hitting this too, so perhaps an ER to speed up the bypass resignal is in order. Yes, I know the big telco answer is to just run absurdly long auto-bw timers so you don't see this issue very often, and hope that overflow detection will catch any sudden changes in traffic before too much $%^* sticks to the fan, but I've had great luck with aggressive auto-bw intervals aside from this issue.
Even if you aren't running particularly aggressive timers, if you have a lot of LSPs it seems pretty likely that you are going to have some non-deterministic but sizable percentage of paths unprotected at any given moment.

--
Richard A Steenbergen [EMAIL PROTECTED] http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
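For anyone wanting to reproduce the aggressive-timer scenario being discussed, a minimal auto-bandwidth sketch (LSP name, address, and values are invented; each adjustment triggers the make-before-break resignal, and hence the bypass teardown, described above):

```
protocols {
    mpls {
        statistics {
            file auto-bw.stats size 1m;
            interval 300;                /* sample LSP traffic every 5 minutes */
            auto-bandwidth;
        }
        label-switched-path example-lsp {
            to 10.255.0.2;
            auto-bandwidth {
                adjust-interval 900;     /* aggressive: adjust as often as every 15 min */
                adjust-threshold 10;     /* only resignal on a change of 10% or more */
                minimum-bandwidth 1m;
                maximum-bandwidth 1g;
            }
        }
    }
}
```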
[j-nsp] subscriber access on MX
Hi,

Does anyone know how to activate (apply) RADIUS authentication for subscriber management on an MX node? I have subscribers configured for dynamic access through an external DHCP server. For some reason, I'm getting the DHCP address without first being authenticated on the MX through RADIUS. I'm monitoring my RADIUS server and no requests for authentication are coming in at all. It looks like the dynamic AAA needs to be applied somewhere, but I'm not sure where. The documentation (subscriber access) mentions a 'logical-systems' hierarchy, but this hierarchy does not exist on Junos 9.2.

Here is my config:

# these are dynamic-profiles that should be active on the access interfaces
dynamic-profiles {
    basic-profile {
        interfaces {
            $junos-interface-ifd-name {
                unit $junos-underlying-interface-unit;
            }
        }
    }
}
# these two are the access interfaces
interfaces {
    ge-0/0/0 {
        vlan-tagging;
        unit 1 {
            vlan-id 1;
            family inet {
                unnumbered-address lo0.0 preferred-source-address 1.1.1.1;
            }
        }
        unit 2 {
            vlan-id 2;
            family inet {
                unnumbered-address lo0.0 preferred-source-address 1.1.1.1;
            }
        }
    }
}
# this is the dhcp-relay config and it works fine; I'm getting an IP address assigned
forwarding-options {
    dhcp-relay {
        server-group {
            test {
                10.0.0.100;
            }
        }
        group test1 {
            active-server-group test;
            interface ge-0/0/0.1;
            interface ge-0/0/0.2;
        }
    }
}
# this is my RADIUS profile
access {
    radius-server {
        114.0.1.10 secret $9$4DZGi.PQ/9pTz9pB1rl4aZUk.; ## SECRET-DATA
    }
    profile subs {
        authentication-order radius;
        radius {
            authentication-server 114.0.1.10;
        }
    }
}

This is how I think it should be applied:

access-profile subs;

Thanks,
Marlon
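One thing worth checking (offered as a guess, not a verified answer - the exact hierarchy varies by Junos release, and this is adapted from later subscriber-access documentation): DHCP clients are only handed to RADIUS if the dhcp-relay configuration itself requests authentication, in addition to the access profile being applied. Something along these lines:

```
access-profile subs;
forwarding-options {
    dhcp-relay {
        /* without an authentication stanza, the relay never consults RADIUS */
        authentication {
            username-include {
                mac-address;
            }
        }
        server-group {
            test {
                10.0.0.100;
            }
        }
        group test1 {
            active-server-group test;
            dynamic-profile basic-profile;
            interface ge-0/0/0.1;
            interface ge-0/0/0.2;
        }
    }
}
```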
[j-nsp] Fwd: subscriber access on MX
-- Forwarded message --
From: Marlon Duksa [EMAIL PROTECTED]
Date: Mon, Sep 29, 2008 at 2:58 PM
Subject: Re: [j-nsp] subscriber access on MX
To: Christopher Hartley [EMAIL PROTECTED]

Hmm, in the case below you have the authenticator hierarchy under dot1x, but I can't find anything similar in my case - something that would tell DHCP clients to be authenticated via RADIUS. I have the RADIUS server and profile under the access hierarchy, but I don't know how to apply this to my dynamic profiles. In the config below, where is the connection between the profile 'subs' (where I defined the RADIUS server) and my DHCP clients coming in on the access interfaces?

access {
    radius-server {
        114.0.1.10 secret $9$4DZGi.PQ/9pTz9pB1rl4aZUk.; ## SECRET-DATA
    }
    profile subs {
        authentication-order radius;
        radius {
            authentication-server 114.0.1.10;
        }
    }
}
access-profile subs;
forwarding-options {
    dhcp-relay {
        server-group {
            test {
                10.0.0.100;
            }
        }
        group test1 {
            active-server-group test;
            dynamic-profile basic-profile;
            interface ge-0/0/0.1;
            interface ge-0/0/0.2;
        }
    }
}
dynamic-profiles {
    basic-profile {
        interfaces {
            $junos-interface-ifd-name {
                unit $junos-underlying-interface-unit;
            }
        }
    }
}

On Mon, Sep 29, 2008 at 1:07 PM, Christopher Hartley [EMAIL PROTECTED] wrote:

How about something like the following? Note that this is for an EX, but it should be the same. I enabled system authentication-order radius so as to test prior to enabling for an authenticator. EAP will pick your authentication mechanism; I'm using EAP-MD5...

system {
    ...
    authentication-order [ radius password ];
    ...
    radius-server {
        REMOVED {
            secret REMOVED; ## SECRET-DATA
            source-address REMOVED;
        }
    }
    ...
}

[EMAIL PROTECTED] show configuration protocols dot1x
traceoptions {
    file dot1x-trace world-readable; # for debugging if necessary...
}
authenticator {
    authentication-profile-name rad1;
    interface {
        ge-0/0/0.0 {
            supplicant single-secure;
            retries 5;
            no-reauthentication;
            server-timeout 30;
            maximum-requests 10;
            guest-vlan guest1;
        }
    }
}

I look forward to seeing your resolution.

Marlon Duksa [EMAIL PROTECTED] 09/29/08 3:54 PM:

> Hi, Does anyone know how to activate (apply) RADIUS authentication for subscriber management on an MX node? I have subscribers configured for dynamic access through an external DHCP server. For some reason, I'm getting the DHCP address without first being authenticated on the MX through RADIUS. I'm monitoring my RADIUS server and no requests for authentication are coming in at all. It looks like the dynamic AAA needs to be applied somewhere, but I'm not sure where. The documentation (subscriber access) mentions a 'logical-systems' hierarchy, but this hierarchy does not exist on Junos 9.2.
>
> [...]
>
> Thanks,
> Marlon
[j-nsp] best RE-333 OS
What's the best RE-333 version to run? Latest 9.x or 8.x, or what? I deal with some RE-333s on 7.5R1.12 now and it's fine, so maybe this is just a stupid question.
[j-nsp] BFD?
Hi,

I am reading about BFD (Bidirectional Forwarding Detection). I am confused: some routing protocols already have their own keepalive mechanism, so why do they still need to be reinforced with BFD?

Thanks,
Fitter
Re: [j-nsp] BFD?
On Tuesday 30 September 2008 09:25:14 Fitter wrote:
> I am reading about BFD (Bidirectional Forwarding Detection). I am confused: some routing protocols already have their own keepalive mechanism, so why do they still need to be reinforced with BFD?

Because most routing protocols generally employ intervals measured in seconds - this can be too long for failure detection, much less convergence. BFD provides failure detection in the sub-second range (more precisely, milliseconds). If properly configured with the intended client routing protocol, convergence can be hastened.

Cheers,
Mark.
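As a concrete illustration (interface name and timers are arbitrary), enabling BFD for an OSPF neighbor on Junos is a per-interface bfd-liveness-detection stanza; with these values a dead neighbor is detected in roughly 900 ms rather than waiting out OSPF's default 40-second dead interval:

```
protocols {
    ospf {
        area 0.0.0.0 {
            interface ge-0/0/1.0 {
                bfd-liveness-detection {
                    minimum-interval 300;   /* milliseconds between BFD packets */
                    multiplier 3;           /* declare down after 3 missed packets */
                }
            }
        }
    }
}
```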
Re: [j-nsp] BFD?
And also: what if your routing protocol has 100 neighbors? What will you do if you want to change the hello timer on all of them?

From: [EMAIL PROTECTED]
To: juniper-nsp@puck.nether.net
Date: Tue, 30 Sep 2008 09:34:15 +0800
CC: [EMAIL PROTECTED]
Subject: Re: [j-nsp] BFD?

> Because most routing protocols generally employ intervals measured in seconds - this can be too long for failure detection, much less convergence. BFD provides failure detection in the sub-second range (more precisely, milliseconds). If properly configured with the intended client routing protocol, convergence can be hastened.
>
> Cheers,
> Mark.
Re: [j-nsp] BFD?
apply-groups comes in handy for this sort of thing, if I understand your example.

David

2008/9/29 zhouyifeng [EMAIL PROTECTED]:
> And also: what if your routing protocol has 100 neighbors? What will you do if you want to change the hello timer on all of them?
>
> [...]
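A sketch of the apply-groups idea (group name and timer values are invented): define the BFD timers once in a group using wildcards, then apply the group, so changing the interval in one place covers every BGP neighbor:

```
groups {
    bfd-defaults {
        protocols {
            bgp {
                group <*> {
                    neighbor <*> {
                        bfd-liveness-detection {
                            minimum-interval 300;   /* edit here once; all neighbors inherit */
                            multiplier 3;
                        }
                    }
                }
            }
        }
    }
}
apply-groups bfd-defaults;
```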