Re: [c-nsp] ASR 1000 series replacement
On Sat, 16 Dec 2023 at 18:38, Charles Sprickman via cisco-nsp wrote:
>
> > There are hundreds of GRE tunnels.
>
> I have nothing to offer, and I'm mostly out of the ISP game, but I am so
> curious what the use-case is here, especially the "BGP to each CPE". I
> understand this might be private info, but I'm just so very curious. The
> BGP part is where I'm stumped...

Why would you need a hub-and-spoke topology? There are many use cases. I've used it for two things:

- Mobile backup termination
- OOB termination

In both cases with BGP, because I had two hubs for redundancy. But BGP would be needed in the first case anyhow, as the customer delivers the IPs. It also helps in the second case, even without redundancy, by simplifying configuration and keeping the hub configuration static: no touching the hub when adding or removing spokes, thanks to BGP listen/allow.

This is essentially what got rebranded as SDN, but it existed long before; it's just pragmatic and specific.

--
++ytti
___
cisco-nsp mailing list cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
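[Editor's note: the "no touching the hub" point relies on BGP dynamic neighbors. A minimal IOS-XE-style sketch of what that might look like on the hub; the AS numbers, peer-group name, and tunnel address range here are hypothetical, not from the thread:]

```
router bgp 65000
 ! Accept inbound BGP sessions from any spoke sourced in the
 ! tunnel address range; no per-spoke neighbor statements needed.
 bgp listen range 10.255.0.0/16 peer-group SPOKES
 ! Optionally cap the number of dynamic sessions the hub will accept.
 bgp listen limit 2000
 neighbor SPOKES peer-group
 neighbor SPOKES remote-as 65001
```

With this, adding or removing a spoke only changes the spoke's own configuration; the hub passively accepts any session from the listen range.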
Re: [c-nsp] ASR 1000 series replacement
> On Dec 16, 2023, at 4:16 AM, Dragan Jovicic via cisco-nsp wrote:
>
> Greeting,
> We have a somewhat unusual scenario with thousands of CPE devices, each
> using a cellular interface and a GRE tunnel to connect to a hub router,
> currently an ASR 1001-X.
> The hub router deploys NHRP map multicast with GRE tunnels and a BGP
> session to each CPE device, each tunnel in a different customer VRF
> connected to the MPLS core network.
> There are hundreds of GRE tunnels.

I have nothing to offer, and I'm mostly out of the ISP game, but I am so curious what the use-case is here, especially the "BGP to each CPE". I understand this might be private info, but I'm just so very curious. The BGP part is where I'm stumped...

Charles
Re: [c-nsp] ASR 1000 series replacement
Hi,

That's great, because we had the same chassis in mind. The peculiarity comes from the way the CPEs are configured: routing, NAT between VRFs, a one-tunnel-per-CPE limit, and some other things.

Anyway, awesome - thank you.

BR

On Sat, Dec 16, 2023 at 10:35 AM Tarko Tikan via cisco-nsp <cisco-nsp@puck.nether.net> wrote:
> hey,
>
> > We have a somewhat unusual scenario with thousands of CPE devices,
> > each using a cellular interface and a GRE tunnel to connect to a hub
> > router, currently an ASR 1001-X. The hub router deploys NHRP map
> > multicast with GRE tunnels and a BGP session to each CPE device, each
> > tunnel in a different customer VRF connected to the MPLS core network.
> > There are hundreds of GRE tunnels.
>
> Not really so unusual in an SP environment.
>
> > What would be a logical replacement for the hub router, considering
> > expansion and redundancy? We tried a pair of stacked Cisco 9500s, and
> > they performed worse than expected.
>
> The cat8500 family (the non-L models). Forget the confusing naming;
> this is actually next-gen QFP and should have been called asr1k+.
>
> > One solution we have is another router with the same addressing
> > scheme, relying on routing to migrate tunnels to this new router in
> > the event of a failure of the original hub.
>
> Anycast works, and this is what we did for exactly the scenario you
> described earlier. But we found that we'd like it to be more hitless,
> so we are now deploying dual tunnels from every CPE to C8500-12X
> headends.
>
> --
> tarko
Re: [c-nsp] ASR 1000 series replacement
hey,

> We have a somewhat unusual scenario with thousands of CPE devices, each
> using a cellular interface and a GRE tunnel to connect to a hub router,
> currently an ASR 1001-X. The hub router deploys NHRP map multicast with
> GRE tunnels and a BGP session to each CPE device, each tunnel in a
> different customer VRF connected to the MPLS core network. There are
> hundreds of GRE tunnels.

Not really so unusual in an SP environment.

> What would be a logical replacement for the hub router, considering
> expansion and redundancy? We tried a pair of stacked Cisco 9500s, and
> they performed worse than expected.

The cat8500 family (the non-L models). Forget the confusing naming; this is actually next-gen QFP and should have been called asr1k+.

> One solution we have is another router with the same addressing scheme,
> relying on routing to migrate tunnels to this new router in the event
> of a failure of the original hub.

Anycast works, and this is what we did for exactly the scenario you described earlier. But we found that we'd like it to be more hitless, so we are now deploying dual tunnels from every CPE to C8500-12X headends.

--
tarko
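[Editor's note: the anycast approach mentioned here amounts to both hubs advertising the same tunnel endpoint address into the IGP, so the CPEs keep targeting one fixed IP while routing decides which hub terminates the tunnels. A hedged sketch; the loopback number, documentation-range address, and OSPF process are hypothetical:]

```
! Configured identically on both the primary and the backup hub.
interface Loopback100
 description Anycast GRE tunnel endpoint shared by both hubs
 ip address 192.0.2.1 255.255.255.255
!
router ospf 1
 ! Advertise the shared endpoint; IGP metrics (or withdrawing the
 ! route on failure) steer all spoke tunnels to the surviving hub.
 network 192.0.2.1 0.0.0.0 area 0
```

The trade-off Tarko notes is that failover via IGP reconvergence is not hitless, which is what motivates dual tunnels from each CPE instead.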
[c-nsp] ASR 1000 series replacement
Greeting,

We have a somewhat unusual scenario with thousands of CPE devices, each using a cellular interface and a GRE tunnel to connect to a hub router, currently an ASR 1001-X. The hub router deploys NHRP map multicast with GRE tunnels and a BGP session to each CPE device, each tunnel in a different customer VRF connected to the MPLS core network. There are hundreds of GRE tunnels.

What would be a logical replacement for the hub router, considering expansion and redundancy? We tried a pair of stacked Cisco 9500s, and they performed worse than expected.

One solution we have is another router with the same addressing scheme, relying on routing to migrate the tunnels to the new router in the event of a failure of the original hub. We can't modify the CPE configuration (the tunnels must target the same IP address).

Any suggestions?

Thanks
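[Editor's note: for readers unfamiliar with the setup described above, a hub terminating many spokes on one multipoint GRE interface with dynamic NHRP registration typically looks something like the following IOS-XE-style sketch. The tunnel number, VRF name, addresses, and NHRP network-id are hypothetical, and the thread's per-customer-VRF detail would mean one such tunnel per VRF:]

```
interface Tunnel100
 description mGRE hub for spokes in customer VRF CUST-A (hypothetical)
 vrf forwarding CUST-A
 ip address 10.255.0.1 255.255.0.0
 ! One multipoint interface terminates all spokes in this VRF.
 tunnel mode gre multipoint
 tunnel source Loopback0
 ip nhrp network-id 100
 ! Learn spoke multicast mappings dynamically as they register,
 ! so routing protocol traffic reaches each registered spoke.
 ip nhrp map multicast dynamic
```

Spokes register with the hub over NHRP, so the hub needs no per-spoke tunnel configuration; combined with BGP dynamic neighbors, spoke churn never touches the hub.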