Weekly Global IPv4 Routing Table Report
This is an automated weekly mailing describing the state of the Global
IPv4 Routing Table as seen from APNIC's router in Japan.

The posting is sent to APOPS, NANOG, AfNOG, SANOG, PacNOG, SAFNOG,
UKNOF, TZNOG, MENOG, BJNOG, SDNOG, CMNOG, LACNOG and the RIPE Routing WG.

Daily listings are sent to bgp-st...@lists.apnic.net.

For historical data, please see https://thyme.apnic.net.

If you have any comments please contact Philip Smith.

IPv4 Routing Table Report   04:00 +10GMT Sat 20 Apr, 2024

BGP Table (Global) as seen in Japan.

Report Website:     https://thyme.apnic.net
Detailed Analysis:  https://thyme.apnic.net/current/

Analysis Summary
----------------

BGP routing table entries examined:                              944392
    Prefixes after maximum aggregation (per Origin AS):          360904
    Deaggregation factor:                                          2.62
    Unique aggregates announced (without unneeded subnets):      460300
Total ASes present in the Internet Routing Table:                 75677
    Prefixes per ASN:                                             12.48
Origin-only ASes present in the Internet Routing Table:           64892
Origin ASes announcing only one prefix:                           26586
Transit ASes present in the Internet Routing Table:               10785
Transit-only ASes present in the Internet Routing Table:            517
Average AS path length visible in the Internet Routing Table:       4.3
Max AS path length visible:                                         121
Max AS path prepend of ASN (150315)                                 116
Prefixes from unregistered ASNs in the Routing Table:              1044
Number of instances of unregistered ASNs:                          1046
Number of 32-bit ASNs allocated by the RIRs:                      44204
Number of 32-bit ASNs visible in the Routing Table:               36243
Prefixes from 32-bit ASNs in the Routing Table:                  185454
Number of bogon 32-bit ASNs visible in the Routing Table:            16
Special use prefixes present in the Routing Table:                    1
Prefixes being announced from unallocated address space:            589
Number of addresses announced to Internet:                   3028224640
    Equivalent to 180 /8s, 127 /16s and 10 /24s
Percentage of available address space announced:                   81.8
Percentage of allocated address space announced:                   81.8
Percentage of available address space allocated:                  100.0
Percentage of address space in use by end-sites:                   99.6
Total number of prefixes smaller than registry allocations:      311370

APNIC Region Analysis Summary
-----------------------------

Prefixes being announced by APNIC Region ASes:                   250540
Total APNIC prefixes after maximum aggregation:                   73742
APNIC Deaggregation factor:                                        3.40
Prefixes being announced from the APNIC address blocks:          242704
Unique aggregates announced from the APNIC address blocks:       100466
APNIC Region origin ASes present in the Internet Routing Table:   14096
APNIC Prefixes per ASN:                                           17.22
APNIC Region origin ASes announcing only one prefix:               4271
APNIC Region transit ASes present in the Internet Routing Table:   1872
Average APNIC Region AS path length visible:                        4.4
Max APNIC Region AS path length visible:                            121
Number of APNIC region 32-bit ASNs visible in the Routing Table:   9479
Number of APNIC addresses announced to Internet:              761931136
    Equivalent to 45 /8s, 106 /16s and 37 /24s

APNIC AS Blocks        4608-4864, 7467-7722, 9216-10239, 17408-18431
    (pre-ERX allocations)  23552-24575, 37888-38911, 45056-46079,
       55296-56319, 58368-59391, 63488-64098, 64297-64395, 131072-153913
APNIC Address Blocks     1/8,  14/8,  27/8,  36/8,  39/8,  42/8,  43/8,
      49/8,  58/8,  59/8,  60/8,  61/8, 101/8, 103/8, 106/8, 110/8,
     111/8, 112/8, 113/8, 114/8, 115/8, 116/8, 117/8, 118/8, 119/8,
     120/8, 121/8, 122/8, 123/8, 124/8, 125/8, 126/8, 133/8, 150/8,
     153/8, 163/8, 171/8, 175/8, 180/8, 182/8, 183/8, 202/8, 203/8,
     210/8, 211/8, 218/8, 219/8, 220/8, 221/8, 222/8, 223/8

ARIN Region Analysis Summary
----------------------------

Prefixes being announced by ARIN Region ASes:                    275322
Total ARIN prefixes after maximum aggregation:                   124663
ARIN Deaggregation factor:                                         2.21
Prefixes being announced from the ARIN address blocks:           280196
Unique aggregates announced from the ARIN address blocks:        133382
ARIN Region origin ASes present in the Internet Routing Table:    19136
ARIN Prefixes per ASN:
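(Reader-added note, not part of the report tooling: the "Equivalent to
X /8s, Y /16s and Z /24s" lines are a straight decomposition of the
announced address count, and the deaggregation factor is simply table
entries divided by maximally-aggregated prefixes. A minimal sketch of
the arithmetic in Python:)

    # Express an IPv4 address count as whole /8s, /16s and /24s, the
    # way the APNIC report does, and compute the deaggregation factor.
    def slash_equivalent(addresses: int) -> str:
        eights, rest = divmod(addresses, 2 ** 24)  # a /8 holds 2^24 addresses
        sixteens, rest = divmod(rest, 2 ** 16)     # a /16 holds 2^16
        twentyfours = rest // 2 ** 8               # a /24 holds 2^8
        return f"{eights} /8s, {sixteens} /16s and {twentyfours} /24s"

    print(slash_equivalent(3028224640))  # -> 180 /8s, 127 /16s and 10 /24s
    print(slash_equivalent(761931136))   # -> 45 /8s, 106 /16s and 37 /24s

    # Deaggregation factor = entries / prefixes after maximum aggregation
    print(round(944392 / 360904, 2))     # -> 2.62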
Xfinity Engineer
My company is having many issues with Xfinity users using global
protection on the Xfinity network, not Comcast. Does anyone have a
contact email address or phone number I can use to reach a real person
or engineer who is not in support?

Thanks,
Jason

-- 
Sincerely,

Jason W Kuehl
Cell 920-419-8983
jason.w.ku...@gmail.com
Re: Whitebox Routers Beyond the Datasheet
Fri, Apr 12, 2024 at 08:03:49AM -0500, Mike Hammett:
> I'm looking at the suitability of whitebox routers for high-throughput,
> low port count, fast BGP performance applications. Power efficiency is
> important as well.
>
> What I've kind of come down to (based on little more than spec sheets)
> are the EdgeCore AGR400 and the UfiSpace S9600-30DX. They can both
> accommodate at least three directions of 400G for linking to other
> parts of my network and then have enough 100G or slower ports to
> connect to transit, peers, and customers as appropriate. Any other
> suggestions for platforms similar to those would be appreciated.

Most of the white boxes are the same, in my pov, with small variations.
And that is the whole idea.

I would choose the NOS you want first. There are several, but few I
would want in production. If it is a PoS or unmanageable, it does not
matter what the h/w capabilities are. Was it created by seasoned
engineers in Internet-scale routing? And, because each box will require
some software specific to it, though limited in scope, the NOS will
dictate which boxes are available to choose among.

Beyond the hardware capabilities, also consider with whom your NOS mfg
has the best working relationship. That will dictate their ability to
quickly resolve h/w-specific issues in their s/w, or even answer
h/w-specific questions for you.

Also consider what the h/w maintenance program is globally. Is it
important for you to have 4hr replacements in Hong Kong? That will
affect your decision greatly.

~1.5yr ago, it seemed like everyone was moving toward UfiSpace h/w,
away from EdgeCore. Ask others about the reliability of the specific
h/w you are considering.
Re: constant FEC errors juniper mpc10e 400g
On Fri, 19 Apr 2024 at 10:55, Mark Tinka wrote:

> FEC is amazing.
>
> At higher data rates (100G and 400G) for long and ultra long haul
> optical networks, SD-FEC (Soft Decision FEC) carries a higher overhead
> penalty compared to HD-FEC (Hard Decision FEC), but the net OSNR gain
> more than compensates for that, and makes it worth it to increase
> transmission distance without compromising throughput.

Of course there are limits to this, as FEC is hop-by-hop, so in
long-haul you'll know about circuit quality to the transponder, not
end-to-end. Unlike in WAN-PHY or OTN, where you know both.

Technically the optical transport could induce FEC errors on the client
handoff whenever there are FEC errors on any hop, so that consumers of
optical networks would not need access to the optical network to know
whether the path is end-to-end clean. Much like cut-through switching
can induce errors via some symbols to communicate that CRC errors
happened earlier, so the receiver doesn't have to worry about problems
on their end.

-- 
  ++ytti
Re: constant FEC errors juniper mpc10e 400g
On 4/19/24 08:01, Saku Ytti wrote:

> The frames in FEC are idle frames between actual ethernet frames. So
> you recall right: without FEC, you won't see this idle traffic.
>
> It's very very good, because now you actually know, before putting the
> circuit in production, whether the circuit works or not. Lots of people
> have processes to ping from router-to-router for N time, trying to
> determine circuit correctness before putting traffic on it, which looks
> absolutely childish compared to FEC, both in terms of how reliable the
> presumed outcome is and how long it takes to get to that presumed
> outcome.

FEC is amazing.

At higher data rates (100G and 400G) for long and ultra long haul
optical networks, SD-FEC (Soft Decision FEC) carries a higher overhead
penalty compared to HD-FEC (Hard Decision FEC), but the net OSNR gain
more than compensates for that, and makes it worth it to increase
transmission distance without compromising throughput.

Mark.
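(Reader-added note, not from the thread: the coherent line-side SD-FEC
Mark describes uses different, stronger codes, but the overhead-versus-
correction trade-off is easy to see even in the hard-decision RS codes
Ethernet itself mandates: RS(528,514) at 100GBASE-R versus RS(544,514)
at 400GBASE-R, both over 10-bit symbols. A minimal Python sketch:)

    # Overhead and correction capability of the Reed-Solomon FECs used
    # by Ethernet. RS(n, k) carries k data symbols per n-symbol codeword
    # and can correct up to (n - k) / 2 corrupted symbols.
    def rs_stats(n: int, k: int) -> tuple[float, int]:
        overhead_pct = (n - k) / k * 100  # parity bits relative to payload
        correctable = (n - k) // 2        # symbols fixable per codeword
        return overhead_pct, correctable

    for name, n, k in [("RS(528,514), 100GBASE-R", 528, 514),
                       ("RS(544,514), 400GBASE-R", 544, 514)]:
        oh, t = rs_stats(n, k)
        print(f"{name}: {oh:.2f}% overhead, corrects {t} symbols/codeword")

    # RS(528,514), 100GBASE-R: 2.72% overhead, corrects 7 symbols/codeword
    # RS(544,514), 400GBASE-R: 5.84% overhead, corrects 15 symbols/codeword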
Re: constant FEC errors juniper mpc10e 400g
On Thu, 18 Apr 2024 at 21:49, Aaron Gould wrote:

> Thanks. What "all the ethernet control frame juju" might you be
> referring to? I don't recall Ethernet, in and of itself, just sending
> stuff back and forth. Does anyone know if this FEC stuff I see
> occurring is actually contained in Ethernet Frames? If so, please send
> a link to show the ethernet frame structure as it pertains to this
> 400g fec stuff. If so, I'd really like to know the header format, etc.

The frames in FEC are idle frames between actual ethernet frames. So
you recall right: without FEC, you won't see this idle traffic.

It's very very good, because now you actually know, before putting the
circuit in production, whether the circuit works or not. Lots of people
have processes to ping from router-to-router for N time, trying to
determine circuit correctness before putting traffic on it, which looks
absolutely childish compared to FEC, both in terms of how reliable the
presumed outcome is and how long it takes to get to that presumed
outcome.

-- 
  ++ytti
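(Reader-added sketch of that qualification idea, assuming a
hypothetical poll_fec_counters() helper that returns cumulative
corrected/uncorrectable codeword counts for an interface; on real gear
these numbers come from the platform CLI or streaming telemetry, and
the threshold below is illustrative, not a standard:)

    import time

    # Hypothetical helper: returns cumulative (corrected_codewords,
    # uncorrectable_codewords) for an interface, e.g. scraped from the
    # router CLI or streamed via telemetry. Not a real library call.
    def poll_fec_counters(interface: str) -> tuple[int, int]:
        raise NotImplementedError("platform-specific")

    def circuit_looks_clean(interface: str, interval_s: int = 60,
                            max_corrected_per_s: float = 1e3) -> bool:
        """Judge a circuit from FEC stats instead of long ping soaks.

        Any uncorrectable codeword means FEC ran out of margin and
        frames were lost; a high corrected rate means thin margin
        (some baseline of corrected codewords is normal at 400G).
        """
        c0, u0 = poll_fec_counters(interface)
        time.sleep(interval_s)
        c1, u1 = poll_fec_counters(interface)

        if u1 > u0:               # bits actually got through corrupted
            return False
        corrected_rate = (c1 - c0) / interval_s
        return corrected_rate < max_corrected_per_s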