Re: MX204 Virtual Chassis Setup
On 8/26/23 00:54, Tom Beecher wrote:

> It would, sure. Instead of storing a single prefix/next-hop with flags
> in memory, you now have to store every prefix/next-hop that you are
> announcing as well.

Indeed. But it has been worth it. The load balancing from PE-to-PE has
been fantastic, especially when coupled with BGP Multipath. No more
messing about with LOCAL_PREF for multi-homed customers, and it works
just as well with different (but equal-length) AS_PATHs.

Mark.
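[For readers curious what this combination looks like on the wire, here is a minimal Junos-style sketch of the Add-Paths and Multipath knobs being discussed. The group names are hypothetical and the statement set is heavily trimmed; treat it as an illustration of the relevant configuration hierarchy, not a drop-in config. -ed]

```
protocols {
    bgp {
        group ibgp-rr-clients {            /* hypothetical group name */
            family inet {
                unicast {
                    add-path {
                        receive;
                        send {
                            path-count 4;  /* advertise up to 4 paths per prefix */
                        }
                    }
                }
            }
        }
        group pe-ibgp {                    /* hypothetical group name */
            multipath {
                multiple-as;               /* balance across paths whose AS_PATHs
                                              differ but are equal in length */
            }
        }
    }
}
```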
Re: MX204 Virtual Chassis Setup
> On MX480 16GB RE's running two full BGP feeds but hundreds of customer
> sessions, Add-Paths really eats into RAM.

It would, sure. Instead of storing a single prefix/next-hop with flags
in memory, you now have to store every prefix/next-hop that you are
announcing as well.

On Fri, Aug 25, 2023 at 5:39 PM Mark Tinka wrote:

> On 8/25/23 19:16, Tom Beecher wrote:
>
> > In my experience and testing with them, you have a decent bit of
> > headroom past the published RIB/FIB limits before they'll fall over.
>
> They are holding up pretty well for us, mainly because we do a lot more
> BGP on MX480's than on MX204's. We use the MX204's mainly for peering
> and CDN gateways. Where we use them for edge customers, it's a handful
> of BGP sessions.
>
> On MX480 16GB RE's running two full BGP feeds but hundreds of customer
> sessions, Add-Paths really eats into RAM. We've had to upgrade some of
> the busier routers from 16GB to 64GB RE's, especially on later versions
> of code where ROV can also bite into memory on boxes carrying lots of
> BGP sessions.
>
> Mark.
Re: MX204 Virtual Chassis Setup
No VC here, unsure if it works, but yes, we like them and deploy them in
pairs for Metro-E (CE) and CBH, for VLANs carried over MPLS pseudowires.
Reliable for us.

Aaron

> On Aug 25, 2023, at 4:40 PM, Mark Tinka wrote:
>
>> On 8/25/23 19:16, Tom Beecher wrote:
>>
>> In my experience and testing with them, you have a decent bit of
>> headroom past the published RIB/FIB limits before they'll fall over.
>
> They are holding up pretty well for us, mainly because we do a lot more
> BGP on MX480's than on MX204's. We use the MX204's mainly for peering
> and CDN gateways. Where we use them for edge customers, it's a handful
> of BGP sessions.
>
> On MX480 16GB RE's running two full BGP feeds but hundreds of customer
> sessions, Add-Paths really eats into RAM. We've had to upgrade some of
> the busier routers from 16GB to 64GB RE's, especially on later versions
> of code where ROV can also bite into memory on boxes carrying lots of
> BGP sessions.
>
> Mark.
Re: MX204 Virtual Chassis Setup
On 8/25/23 19:16, Tom Beecher wrote:

> In my experience and testing with them, you have a decent bit of
> headroom past the published RIB/FIB limits before they'll fall over.

They are holding up pretty well for us, mainly because we do a lot more
BGP on MX480's than on MX204's. We use the MX204's mainly for peering
and CDN gateways. Where we use them for edge customers, it's a handful
of BGP sessions.

On MX480 16GB RE's running two full BGP feeds but hundreds of customer
sessions, Add-Paths really eats into RAM. We've had to upgrade some of
the busier routers from 16GB to 64GB RE's, especially on later versions
of code where ROV can also bite into memory on boxes carrying lots of
BGP sessions.

Mark.
starlink deluge test and 33 (-2) engine static fire
I know it is a bit off topic for NANOG, but this test was very exciting
today:

https://twitter.com/SpaceX/status/1695158759717474379

Happy Friday! While they have filed for a launch license for August
31st, it is impossible for me to believe that date! It was also
difficult to believe the deluge ("bidet") system would actually work.

I look forward to a launch vehicle capable of putting up the next
generation of Starlink sats, which are estimated to have 4x the capacity
of the old, and further improvements on their WiFi, backbone and
satellite-switching technologies.

--
Podcast: https://www.youtube.com/watch?v=bxmoBr4cBKg
Dave Täht CSO, LibreQos
Weekly Global IPv4 Routing Table Report
This is an automated weekly mailing describing the state of the Global
IPv4 Routing Table as seen from APNIC's router in Japan.

The posting is sent to APOPS, NANOG, AfNOG, SANOG, PacNOG, SAFNOG,
UKNOF, TZNOG, MENOG, BJNOG, SDNOG, CMNOG, LACNOG and the RIPE Routing
WG. Daily listings are sent to bgp-st...@lists.apnic.net. For historical
data, please see https://thyme.apnic.net. If you have any comments
please contact Philip Smith.

IPv4 Routing Table Report 04:00 +10GMT Sat 26 Aug, 2023

BGP Table (Global) as seen in Japan.

Report Website: https://thyme.apnic.net
Detailed Analysis: https://thyme.apnic.net/current/

Analysis Summary

BGP routing table entries examined: 928003
Prefixes after maximum aggregation (per Origin AS): 352092
Deaggregation factor: 2.64
Unique aggregates announced (without unneeded subnets): 452650
Total ASes present in the Internet Routing Table: 74761
Prefixes per ASN: 12.41
Origin-only ASes present in the Internet Routing Table: 64180
Origin ASes announcing only one prefix: 26379
Transit ASes present in the Internet Routing Table: 10581
Transit-only ASes present in the Internet Routing Table: 457
Average AS path length visible in the Internet Routing Table: 4.2
Max AS path length visible: 70
Max AS path prepend of ASN (263725): 64
Prefixes from unregistered ASNs in the Routing Table: 995
Number of instances of unregistered ASNs: 997
Number of 32-bit ASNs allocated by the RIRs: 42501
Number of 32-bit ASNs visible in the Routing Table: 35005
Prefixes from 32-bit ASNs in the Routing Table: 174578
Number of bogon 32-bit ASNs visible in the Routing Table: 29
Special use prefixes present in the Routing Table: 1
Prefixes being announced from unallocated address space: 550
Number of addresses announced to Internet: 3056713472
Equivalent to 182 /8s, 49 /16s and 191 /24s
Percentage of available address space announced: 82.6
Percentage of allocated address space announced: 82.6
Percentage of available address space allocated: 100.0
Percentage of address space in use by end-sites: 99.5
Total number of prefixes smaller than registry allocations: 309148

APNIC Region Analysis Summary

Prefixes being announced by APNIC Region ASes: 246403
Total APNIC prefixes after maximum aggregation: 70479
APNIC Deaggregation factor: 3.50
Prefixes being announced from the APNIC address blocks: 240069
Unique aggregates announced from the APNIC address blocks: 98774
APNIC Region origin ASes present in the Internet Routing Table: 13598
APNIC Prefixes per ASN: 17.65
APNIC Region origin ASes announcing only one prefix: 4033
APNIC Region transit ASes present in the Internet Routing Table: 1802
Average APNIC Region AS path length visible: 4.4
Max APNIC Region AS path length visible: 25
Number of APNIC region 32-bit ASNs visible in the Routing Table: 8921
Number of APNIC addresses announced to Internet: 773327744
Equivalent to 46 /8s, 24 /16s and 11 /24s

APNIC AS Blocks: 4608-4864, 7467-7722, 9216-10239, 17408-18431
(pre-ERX allocations) 23552-24575, 37888-38911, 45056-46079,
55296-56319, 58368-59391, 63488-64098, 64297-64395, 131072-153913

APNIC Address Blocks: 1/8, 14/8, 27/8, 36/8, 39/8, 42/8, 43/8, 49/8,
58/8, 59/8, 60/8, 61/8, 101/8, 103/8, 106/8, 110/8, 111/8, 112/8,
113/8, 114/8, 115/8, 116/8, 117/8, 118/8, 119/8, 120/8, 121/8, 122/8,
123/8, 124/8, 125/8, 126/8, 133/8, 150/8, 153/8, 163/8, 171/8, 175/8,
180/8, 182/8, 183/8, 202/8, 203/8, 210/8, 211/8, 218/8, 219/8, 220/8,
221/8, 222/8, 223/8,

ARIN Region Analysis Summary

Prefixes being announced by ARIN Region ASes: 271744
Total ARIN prefixes after maximum aggregation: 123766
ARIN Deaggregation factor: 2.20
Prefixes being announced from the ARIN address blocks: 273939
Unique aggregates announced from the ARIN address blocks: 131069
ARIN Region origin ASes present in the Internet Routing Table: 19105
ARIN Prefixes per ASN:
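[As a sanity check on the report's arithmetic, the deaggregation factor and the "/8s + /16s + /24s" equivalents can be reproduced from the raw figures above in a few lines of Python. This is a sketch assuming the report's rounding conventions. -ed]

```python
# Reproduce two derived figures from the routing table report:
# the deaggregation factor and the "/8s, /16s and /24s" equivalent
# of a raw address count.

def slash8_equivalent(addresses: int) -> tuple[int, int, int]:
    """Express an address count as whole /8s, /16s and /24s."""
    eights, rem = divmod(addresses, 2 ** 24)  # a /8 holds 2^24 addresses
    sixteens, rem = divmod(rem, 2 ** 16)      # a /16 holds 2^16
    twentyfours = rem // 2 ** 8               # a /24 holds 2^8
    return eights, sixteens, twentyfours

table_entries = 928_003      # BGP routing table entries examined
max_aggregated = 352_092     # prefixes after maximum aggregation
announced = 3_056_713_472    # addresses announced to Internet

print(round(table_entries / max_aggregated, 2))  # deaggregation factor: 2.64
print(slash8_equivalent(announced))              # (182, 49, 191)
```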
Re: MX204 Virtual Chassis Setup
> On another note, the potential issue we might run into is pressure on
> control plane memory on the MX204 for us that run BGP Add-Paths. You
> can always upgrade the RE on an MX240/480/960, but the MX204 is fixed
> (and last time I checked, fiddling with Juniper RE memory was generally
> frowned upon).

In my experience and testing with them, you have a decent bit of
headroom past the published RIB/FIB limits before they'll fall over.

On Fri, Aug 25, 2023 at 11:35 AM Mark Tinka wrote:

> On 8/23/23 17:14, Matt Erculiani wrote:
>
> > Does Fusion not make sense in this case? I've not had a ton of
> > experience with it, but it does well to add a crazy port count to an
> > otherwise very port limited device.
>
> In small edge PoP's, we attach an Arista 1U switch with tons of
> 1/10Gbps ports to an MX204 via 802.1Q. Works a treat. I've never been
> convinced by vendor-specific satellite systems :-).
>
> On another note, the potential issue we might run into is pressure on
> control plane memory on the MX204 for us that run BGP Add-Paths. You
> can always upgrade the RE on an MX240/480/960, but the MX204 is fixed
> (and last time I checked, fiddling with Juniper RE memory was generally
> frowned upon).
>
> Luckily, the MX10003 ships with 64GB of RAM, since it is now EoL.
>
> The MX304 ships with 128GB of RAM, so anybody running Add-Paths on
> that box won't have an issue there.
>
> Mark.
Re: Deployments of Provider Backbone Bridging (PBB)
Hi Etienne,

Those replies are accurate. There are still some large PBB deployments,
since once you deploy technologies it's hard to change. However, there
haven't really been new PBB deployments in many years now. Vendors are
also not developing the features to support it any more. I would
consider it a dead technology at this point.

There is L2 PBB, and there are flavors of PBB-VPLS and PBB-EVPN;
PBB-VPLS was more widely deployed over MPLS than any other "PBB"
technology.

Thanks,
Phil

From: NANOG on behalf of Etienne-Victor Depasquale via NANOG
Date: Friday, August 25, 2023 at 3:35 AM
To: NANOG
Subject: Re: Deployments of Provider Backbone Bridging (PBB)

I've had two private replies, both of which suggest that PBB has little
to no share in the overall pie of the aggregation technology space, nor
in the overall pie of the core technology space.

However, a third correspondent states that Bard (Google's "Chat-based
AI tool") claims that PBB is deployed by AT&T, Verizon, China Mobile,
Deutsche Telekom and Comcast. This correspondent warns that Bard "could
be hallucinating" :)

Any further data points/insight would be appreciated.

Cheers,
Etienne

On Wed, Aug 23, 2023 at 10:43 AM Etienne-Victor Depasquale
<ed...@ieee.org> wrote:

> Hello folks,
>
> Based on data I've gathered through quantitative and qualitative
> surveying, I can detect no application of Provider Backbone Bridging
> (MAC-in-MAC). Please bear with me while I clarify that I am not
> enquiring about Provider Bridging (QinQ).
>
> I would like to ask specifically about knowledge of deployments of
> PBB. If anyone would care to share data points, on- or off-list, I
> would love to know about them. I am open to anything on the subject
> of PBB's adoption that you are free to share with me. I am bound by
> GDPR and will anonymize any data that is not open for public
> disclosure.
>
> Thank you!
>
> Etienne
>
> --
> Ing. Etienne-Victor Depasquale

--
Ing. Etienne-Victor Depasquale
Re: MX204 Virtual Chassis Setup
On 8/23/23 17:14, Matt Erculiani wrote:

> Does Fusion not make sense in this case? I've not had a ton of
> experience with it, but it does well to add a crazy port count to an
> otherwise very port limited device.

In small edge PoP's, we attach an Arista 1U switch with tons of
1/10Gbps ports to an MX204 via 802.1Q. Works a treat. I've never been
convinced by vendor-specific satellite systems :-).

On another note, the potential issue we might run into is pressure on
control plane memory on the MX204 for us that run BGP Add-Paths. You
can always upgrade the RE on an MX240/480/960, but the MX204 is fixed
(and last time I checked, fiddling with Juniper RE memory was generally
frowned upon).

Luckily, the MX10003 ships with 64GB of RAM, since it is now EoL.

The MX304 ships with 128GB of RAM, so anybody running Add-Paths on that
box won't have an issue there.

Mark.
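[To make the Add-Paths memory pressure discussed in this thread concrete, here is a deliberately crude back-of-envelope sketch. The per-path byte cost is an invented round number, not a measured Junos figure; the only point is that stored-path count, and hence RIB memory, scales linearly with the number of paths kept per prefix. -ed]

```python
# Crude illustration of why Add-Paths eats RE memory: the RIB keeps
# every advertised path per prefix, not just the single best path.
# BYTES_PER_PATH is an assumed round number for illustration only.

FULL_TABLE_PREFIXES = 928_000   # roughly today's IPv4 table size
BYTES_PER_PATH = 250            # assumption, NOT a measured figure

def rib_bytes(prefixes: int, paths_per_prefix: int,
              bytes_per_path: int = BYTES_PER_PATH) -> int:
    """Approximate RIB memory as prefixes x paths x per-path cost."""
    return prefixes * paths_per_prefix * bytes_per_path

best_only = rib_bytes(FULL_TABLE_PREFIXES, 1)
add_path_4 = rib_bytes(FULL_TABLE_PREFIXES, 4)  # add-path send path-count 4

print(f"best-path only: {best_only / 2**20:.0f} MiB")
print(f"add-path (4):   {add_path_4 / 2**20:.0f} MiB")
```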
Re: Deployments of Provider Backbone Bridging (PBB)
On 8/25/23 09:41, Tarko Tikan wrote:

> AFAIK this reflects the reality very well. There are huge PBB
> deployments in very large networks, but the overall number of networks
> using PBB is very low. Even in those networks PBB is/will be phased
> out, so don't expect any new deployments. It is still well supported
> by the vendors who initially invested in PBB.

Most operators, especially of small-to-medium size, but even some larger
ones, will go from 802.1Q to Q-in-Q, and then to MPLS. MPLS end-to-end
is not as common as MPLS combined with 802.1Q or Q-in-Q, in my very
rough anecdotal experience. This is mostly due to cost control, as well
as a seemingly common preference to have a so-called Internet Gateway
(IGW) pinning all those pseudowires.

There have been rumblings about VXLAN as an IP-based underlay in lieu of
MPLS. I don't know how well it has scaled outside of the data centre,
i.e., in the Metro-E network. But the rate at which I hear about VXLAN
for Metro-E deployments is also the same rate at which I don't hear
about it. In other words, it appears to be neither here nor there.

Mark.
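[For readers less familiar with the 802.1Q-to-Q-in-Q step mentioned above: the difference on the wire is just an extra outer tag with a different TPID. A stdlib-only Python sketch of the two tag stacks follows; the VLAN IDs are arbitrary examples. -ed]

```python
import struct

# Build the Ethernet VLAN tag stack for plain 802.1Q vs Q-in-Q
# (802.1ad). TPID 0x8100 marks the customer tag (C-TAG); TPID 0x88A8
# marks the outer service tag (S-TAG) that Q-in-Q pushes in front.

def vlan_tag(tpid: int, vid: int, pcp: int = 0) -> bytes:
    """4-byte tag: 16-bit TPID followed by 16-bit TCI (PCP/DEI/VID)."""
    tci = (pcp << 13) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

dot1q = vlan_tag(0x8100, vid=10)                              # single-tagged
qinq = vlan_tag(0x88A8, vid=100) + vlan_tag(0x8100, vid=10)   # S-TAG + C-TAG

print(dot1q.hex())  # 8100000a
print(qinq.hex())   # 88a800648100000a
```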
Re: Deployments of Provider Backbone Bridging (PBB)
hey,

> I've had two private replies, both of which suggest that PBB has
> little to no share in the overall pie of the aggregation technology
> space, nor in the overall pie of the core technology space.
>
> However, a third correspondent states that Bard (Google's "Chat-based
> AI tool") claims that PBB is deployed by AT&T, Verizon, China Mobile,
> Deutsche Telekom and Comcast. This correspondent warns that Bard
> "could be hallucinating" :)

AFAIK this reflects the reality very well. There are huge PBB
deployments in very large networks, but the overall number of networks
using PBB is very low. Even in those networks PBB is/will be phased out,
so don't expect any new deployments. It is still well supported by the
vendors who initially invested in PBB.

--
tarko
Re: Deployments of Provider Backbone Bridging (PBB)
I've had two private replies, both of which suggest that PBB has little
to no share in the overall pie of the aggregation technology space, nor
in the overall pie of the core technology space.

However, a third correspondent states that Bard (Google's "Chat-based
AI tool") claims that PBB is deployed by AT&T, Verizon, China Mobile,
Deutsche Telekom and Comcast. This correspondent warns that Bard "could
be hallucinating" :)

Any further data points/insight would be appreciated.

Cheers,
Etienne

On Wed, Aug 23, 2023 at 10:43 AM Etienne-Victor Depasquale wrote:

> Hello folks,
>
> Based on data I've gathered through quantitative and qualitative
> surveying, I can detect no application of Provider Backbone Bridging
> (MAC-in-MAC). Please bear with me while I clarify that I am not
> enquiring about Provider Bridging (QinQ).
>
> I would like to ask specifically about knowledge of deployments of
> PBB. If anyone would care to share data points, on- or off-list, I
> would love to know about them. I am open to anything on the subject
> of PBB's adoption that you are free to share with me. I am bound by
> GDPR and will anonymize any data that is not open for public
> disclosure.
>
> Thank you!
>
> Etienne
>
> --
> Ing. Etienne-Victor Depasquale

--
Ing. Etienne-Victor Depasquale