Re: constant FEC errors juniper mpc10e 400g

2024-04-22 Thread Mark Tinka




On 4/22/24 09:47, Vasilenko Eduard via NANOG wrote:


Assume that some carrier has 10k FBB subscribers in a particular municipality 
(without any hope of considerably increasing this number).
2Mbps is the current average per household in the busy hour, pretty uniform 
worldwide.
You could multiply it by 8/7 if you like to add wireless -> not much would 
change.
2*2*10GE (2*10GE on the ring in every direction) is 2 times what is needed to 
carry 10k subscribers.
The optical ring may connect fewer than 20 municipalities - this is very common.
Hence, the upgrade of old extremely cheap 10GE DWDM systems (for 40 lambdas) 
makes sense for some carriers.
It depends on the population density and the carrier market share.
10GE for the WAN side would not be dead in the next 5 years because 2Mbps per 
household would not grow very fast in the future - this logistic curve is close 
to a plateau.
PS: It is probably not the case for Africa where new subscribers are connected 
to the Internet at a fast rate.


Relative to how much Internet there is in Africa, there really 
aren't that many optical transport service providers. Some 
countries/cities/towns have more than they need, others have just one. 
But in general, you would say there is massive room for improvement if 
you surveyed the entire continent.


Typically, it will be the incumbents, alongside 2 or 3 competitives. In 
fact, in some African countries, only the incumbent may be large enough 
to run an optical backbone, with all the competitives leasing capacity 
from them.


It is not uncommon to find that the closest competitor to an incumbent for 
terrestrial services is the mobile network operator, purely because 
they have some excess capacity left over from having to build the 
backbone for their core business, mobile. And, they are flush with cash, 
so a loss-making terrestrial backhaul business can be covered by the 
month's sales in SIM cards.


Truly independent transport providers are few and far between because 
access to dark fibre is not easy (either its lack of availability, the 
incumbent refusing to sell it, or its high price). For the few 
independent transport providers that do spring up, they will focus on a 
limited set of hot routes, and because competition on those routes may 
be wanting, prices and capacity would not be terribly attractive.


So the bulk of Africa's Internet really relies on a handful of key 
African wholesale IP Transit providers putting great effort into 
extending their network as deep into cities as they can, and using their 
size to negotiate the best prices for terrestrial backhaul from the few 
optical network operators that the market has. Those providers then sell 
to the local and regional ISP's, who don't have to worry about running a 
backbone.


All this means is that for those operators that run an optical backbone, 
especially nationally, 10G carriers are very, very legacy. If they still 
have them, it'd be a spin-off of the main core to support some old SDH 
customers that are too comfortable to move to Ethernet.


Metro backhaul and last mile FNO's (fibre network operators) who have 
invested in extending access into homes and businesses are a different 
story, with most countries having a reasonable handful of options 
customers can choose from. Like national backhaul, there is plenty of 
room for improvement - in some markets more than others - but unlike 
national backhaul, not as constrained for choice or price.


Mark.


RE: constant FEC errors juniper mpc10e 400g

2024-04-22 Thread Vasilenko Eduard via NANOG
Assume that some carrier has 10k FBB subscribers in a particular municipality 
(without any hope of considerably increasing this number).
2Mbps is the current average per household in the busy hour, pretty uniform 
worldwide.
You could multiply it by 8/7 if you like to add wireless -> not much would 
change.
2*2*10GE (2*10GE on the ring in every direction) is 2 times what is needed to 
carry 10k subscribers.
The optical ring may connect fewer than 20 municipalities - this is very common.
Hence, the upgrade of old extremely cheap 10GE DWDM systems (for 40 lambdas) 
makes sense for some carriers.
It depends on the population density and the carrier market share.
10GE for the WAN side would not be dead in the next 5 years because 2Mbps per 
household would not grow very fast in the future - this logistic curve is close 
to a plateau.
PS: It is probably not the case for Africa where new subscribers are connected 
to the Internet at a fast rate.
Ed/
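
As a quick sanity check of the arithmetic above (a minimal sketch in Python; the 
subscriber count, busy-hour rate and ring size are the figures quoted in the post):

# Back-of-the-envelope check of the dimensioning described above.
subscribers = 10_000              # FBB subscribers in the municipality
busy_hour_mbps = 2                # average per household in the busy hour
wireless_factor = 8 / 7           # optional uplift if wireless is added

demand_gbps = subscribers * busy_hour_mbps / 1000
demand_with_wireless_gbps = demand_gbps * wireless_factor
ring_capacity_gbps = 2 * 2 * 10   # 2 x 10GE in each direction around the ring

print(f"Busy-hour demand:         {demand_gbps:.0f} Gbps")
print(f"  ... including wireless: {demand_with_wireless_gbps:.1f} Gbps")
print(f"Ring capacity (2*2*10GE): {ring_capacity_gbps} Gbps "
      f"(~{ring_capacity_gbps / demand_gbps:.0f}x the demand)")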
-Original Message-
From: NANOG  On Behalf Of 
Tarko Tikan
Sent: Saturday, April 20, 2024 19:19
To: nanog@nanog.org
Subject: Re: constant FEC errors juniper mpc10e 400g

hey,

> That said, I don't expect any subsea cables getting built in the next 
> 3 years and later will have 10G as a product on the SLTE itself... it 
> wouldn't be worth the spectrum.

10G wavelengths for new builds died about 10 years ago when coherent 100G 
became available, submarine or not. Putting 10G into the same system is not really 
feasible at all.

--
tarko



Re: constant FEC errors juniper mpc10e 400g

2024-04-21 Thread Mark Tinka



On 4/21/24 08:12, Saku Ytti wrote:


Key difference being, WAN-PHY does not provide synchronous timing, so
it's not SDH/SONET-compatible by the strict definition, but it
does have the frame format. And the optical systems which could
regenerate SONET/SDH framing didn't care about timing; they just
wanted to be able to parse and generate those frames, which they
could, but they could not do for Ethernet frames.


Correct.

In those days, WAN-PHY was considered "SONET/SDH-Lite".

I think it is pretty clear the driver was to support long haul
regeneration, so it was always going to be a stop-gap solution. Even
though I know some networks that specifically wanted WAN-PHY for its
error reporting capabilities, I don't think this was the majority driver;
the majority driver was almost certainly 'that's the only thing we can put on
this circuit'.


SONET/SDH did have a similar reach to OTN back then, just less bandwidth 
for the distance. It had FEC support for STM-16, STM-64 and STM-256.


I really think the bigger driver was interface cost, because EoS was 
already being sold at 1GE alongside STM-16 at 2.5G. In those days, 
if you needed more than 1G but less than 10G, it was a toss-up between 
2x 1G EoSDH vs. 1x STM-16. Oftentimes, you took the 2x 1G EoSDH because 
2x 1GE ports were cheaper than 1x STM-16 port, even though you ended up 
losing about 405Mbps of capacity in the process, which was a huge deal.
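
For reference, the ~405Mbps figure works out as follows (a rough sketch, assuming 
the STM-16 circuit carries a single concatenated VC-4-16c):

# Approximate capacity given up by choosing 2x 1GE EoSDH over 1x STM-16.
vc4_kbps = 150_336                           # VC-4 rate in kbit/s
stm16_vc4_16c_mbps = 16 * vc4_kbps / 1000    # VC-4-16c, ~2405.4 Mbps
two_gige_mbps = 2 * 1000                     # 2x 1GE EoSDH

print(f"STM-16 (VC-4-16c): {stm16_vc4_16c_mbps:.1f} Mbps")
print(f"2x 1GE EoSDH:      {two_gige_mbps} Mbps")
print(f"Difference:        {stm16_vc4_16c_mbps - two_gige_mbps:.0f} Mbps")  # ~405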


The backbone providers did not like selling EoSDH services, because it 
was too much admin. for them (VC container management), and they ended 
up paying more for transponders on their side than their customers did 
for Ethernet ports on theirs :-).


But by and large, the majority of networks in our market maintained SDH 
services long after coherent became commercially available. It was a 
perception thing, that SDH was superior to Ethernet, even if that 
Ethernet was transported over a DWDM network.


In the end, SDH port costs were too hard to ignore due to router vendors 
maintaining their mark-up on them despite dying demand, which then led 
to the migration from SDH to EoDWDM growing significantly from about 
2016. Optical vendors also began de-prioritizing SDH transponder ports, 
which had a massive impact on the SLTE (submarine) side in making the 
decision to encourage customers away from SDH for wet services.


Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-21 Thread Saku Ytti
On Sun, 21 Apr 2024 at 09:05, Mark Tinka  wrote:

> Technically, what you are describing is EoS (Ethernet over SONET, Ethernet 
> over SDH), which is not the same as WAN-PHY (although the working groups that 
> developed these nearly confused each other in the process, ANSI/ITU for the 
> former vs. IEEE for the latter).
>
> WAN-PHY was developed to be operated across multiple vendors over different 
> media... SONET/SDH, DWDM, IP/MPLS/Ethernet devices and even dark fibre. The 
> goal of WAN-PHY was to deliver a low-cost Ethernet interface that was 
> SONET/SDH-compatible, as EoS interfaces were too costly for operators and 
> their customers.
>
> As we saw in real life, 10GE ports out-sold STM-64/OC-192 ports, as networks 
> replaced SONET/SDH backbones with DWDM and OTN.

Key difference being, WAN-PHY does not provide synchronous timing, so
it's not SDH/SONET-compatible by the strict definition, but it
does have the frame format. And the optical systems which could
regenerate SONET/SDH framing didn't care about timing; they just
wanted to be able to parse and generate those frames, which they
could, but they could not do for Ethernet frames.

I think it is pretty clear the driver was to support long haul
regeneration, so it was always going to be a stop-gap solution. Even
though I know some networks that specifically wanted WAN-PHY for its
error reporting capabilities, I don't think this was the majority driver;
the majority driver was almost certainly 'that's the only thing we can put on
this circuit'.

-- 
  ++ytti


Re: constant FEC errors juniper mpc10e 400g

2024-04-21 Thread Mark Tinka



On 4/20/24 21:36, b...@uu3.net wrote:


Erm, WAN-PHY did not extend into 40G because there was not much
STM-256 deployment? (or customers didn't want to pay for it).


With SONET/SDH, as the traffic rate increased, so did the number of 
overhead bytes. With every iteration of the data rate, the overhead 
bytes quadrupled. This was one of the key reasons we did not see field 
deployment of STM-256/OC-768 and STM-1024/OC-3072. For example, if 
SONET/SDH had to transport a 100G service, it would require 160Gbps of 
bandwidth. That clearly wasn't going to work.


At the time when STM-256/OC-768 was being developed, DWDM and OTN had 
come a long way. The granularity SONET/SDH required to stand up a 
service had become too small for the growing data rate (primarily VC-3, 
VC-4 and VC-12). If you look at OTN, the smallest container is 1.25Gbps 
(ODU0), which was attractive for networks looking to migrate from E1's, 
E3's, STM-1's, STM-4's and STM-16's - carried over VC-12, VC-4 and VC-3 
SDH circuits - to 1GE, for example, rather than trying to keep their 
PDH/SDH infrastructure going.
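
To put numbers on the granularity point, a small sketch (using the nominal 1.25Gbps 
ODU0 figure quoted above; exact OPU0 payload rates differ slightly):

# Legacy client rates vs. the smallest OTN container (approximate nominal rates).
clients_mbps = {
    "E1": 2.048, "E3": 34.368,
    "STM-1": 155.52, "STM-4": 622.08, "STM-16": 2488.32,
    "1GE": 1000.0,
}
odu0_mbps = 1250.0   # nominal ODU0 figure, as quoted above

for name, rate in clients_mbps.items():
    fits = "fits in one ODU0" if rate <= odu0_mbps else "needs more than one ODU0"
    print(f"{name:7s} {rate:8.2f} Mbps - {fits}")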


DWDM and OTN permitted a very small control overhead, so as data rates 
increased, there wasn't the same penalty you got with SONET/SDH.

WAN-PHY was designed so people could encapsulate Ethernet frames
right into STM-64. Once the world moved away from SDH/SONET, there was
no more need for WAN-PHY.


Technically, what you are describing is EoS (Ethernet over SONET, 
Ethernet over SDH), which is not the same as WAN-PHY (although the 
working groups that developed these nearly confused each other in the 
process, ANSI/ITU for the former vs. IEEE for the latter).


WAN-PHY was developed to be operated across multiple vendors over 
different media... SONET/SDH, DWDM, IP/MPLS/Ethernet devices and even 
dark fibre. The goal of WAN-PHY was to deliver a low-cost Ethernet 
interface that was SONET/SDH-compatible, as EoS interfaces were too 
costly for operators and their customers.


As we saw in real life, 10GE ports out-sold STM-64/OC-192 ports, as 
networks replaced SONET/SDH backbones with DWDM and OTN.


Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread borg
Erm, WAN-PHY did not extend into 40G because there was not much
STM-256 deployment? (or customers didn't want to pay for it).

WAN-PHY was designed so people could encapsulate Ethernet frames
right into STM-64. Once the world moved away from SDH/SONET, there was
no more need for WAN-PHY.


-- Original message --

From: Mark Tinka 
To: Dave Cohen 
Cc: nanog@nanog.org
Subject: Re: constant FEC errors juniper mpc10e 400g
Date: Sat, 20 Apr 2024 17:50:04 +0200


WAN-PHY did not extend to 40G or 100G, which can explain one of the reasons it
lost favour. For 10G, its availability also depended on the type of device, its
NOS, line card and/or pluggable at the time, which made it hard to find a
standard around this if you built multi-vendor networks or purchased backhaul
services from 3rd party providers that had non-standard support for
WAN-PHY/OTN/G.709. In other words, LAN-PHY (and plain Ethernet) became the
lowest common denominator in the majority of cases for customers.

Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Mark Tinka




On 4/20/24 18:19, Tarko Tikan wrote:

10G wavelengths for new builds died about 10 years ago when coherent 
100G became available, submarine or not. Putting 10G into the same system 
is not really feasible at all.


I was referring to 10G services (client-side), not 10G wavelengths (line 
side).


Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Tarko Tikan

hey,

That said, I don't expect any subsea cables getting built in the next 3 
years and later will have 10G as a product on the SLTE itself... it 
wouldn't be worth the spectrum.


10G wavelengths for new builds died about 10 years ago when coherent 
100G became available, submarine or not. Putting 10G into the same system is 
not really feasible at all.


--
tarko



Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Mark Tinka




On 4/20/24 14:41, Dave Cohen wrote:


LAN PHY dominates in the US too. Requests for WAN PHY were almost exclusively 
for terrestrial backhaul extending off of legacy subsea systems that still 
commonly had TDM-framed services. It’s been a couple of years since I’ve been 
in optical transport directly but these requests were essentially non-existent 
after 2018 or so. OTN became somewhat more common from 2014 onward as optical 
system interop improved, but actually was more common in the enterprise space 
as providers would generally go straight to fiber in most use cases, and with 
dark fiber opex costs coming down in many markets, I see OTN requests as 
winnowing here as well.


What really changed the game was coherent detection, which breathed new 
life into legacy subsea cables that were built on dispersion-managed 
fibre. Since about 2014, when uncompensated (and highly dispersive) fibre 
became the standard for subsea builds (even for SDM cables), coherent 
optical systems have been the mainstay. In fact, because linear dispersion can 
be accurately calculated for the cable span, uncompensated cables are a 
good thing: the dispersion compensation happens in very advanced 
coherent DSP's in the optical engine, rather than in the fibre itself.


WAN-PHY did not extend to 40G or 100G, which can explain one of the 
reasons it lost favour. For 10G, its availability also depended on the 
type of device, its NOS, line card and/or pluggable at the time, which 
made it hard to find a standard around this if you built multi-vendor 
networks or purchased backhaul services from 3rd party providers that 
had non-standard support for WAN-PHY/OTN/G.709. In other words, LAN-PHY 
(and plain Ethernet) became the lowest common denominator in the 
majority of cases for customers.


In 2024, I find that operators care more about bringing the circuit up 
than using its link properties to trigger monitoring, failover and 
reconvergence. The simplest way to do that is to ask for plain Ethernet 
services, particularly for 100G and 400G, but also for 10G. In practice, 
this has been reasonably reliable in the past 2 - 3 years when procuring 
100G backhaul services. So for the most part, users of these services 
seem to be otherwise happy.


Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Dave Cohen
LAN PHY dominates in the US too. Requests for WAN PHY were almost exclusively 
for terrestrial backhaul extending off of legacy subsea systems that still 
commonly had TDM-framed services. It’s been a couple of years since I’ve been 
in optical transport directly but these requests were essentially non-existent 
after 2018 or so. OTN became somewhat more common from 2014 onward as optical 
system interop improved, but actually was more common in the enterprise space 
as providers would generally go straight to fiber in most use cases, and with 
dark fiber opex costs coming down in many markets, I see OTN requests as 
winnowing here as well. 

Dave Cohen
craetd...@gmail.com

> On Apr 20, 2024, at 7:57 AM, Mark Tinka  wrote:
> 
> 
> 
>> On 4/20/24 13:39, Saku Ytti wrote:
>> 
>> 
>>  Oh I don't think OTN or WAN-PHY have any large deployment future, the
>> cheapest option is 'good enough'...
> 
> And what we find with EU providers is that Ethernet and OTN services are 
> priced similarly. It's a software toggle on a transponder, but even then, 
> Ethernet still continues to be preferred over OTN.
> 
> Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Mark Tinka



On 4/20/24 13:39, Saku Ytti wrote:


Oh I don't think OTN or WAN-PHY have any large deployment future, the
cheapest option is 'good enough'...


And what we find with EU providers is that Ethernet and OTN services are 
priced similarly. It's a software toggle on a transponder, but even 
then, Ethernet still continues to be preferred over OTN.


Mark.

Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Mark Tinka



On 4/20/24 13:39, Saku Ytti wrote:


Oh I don't think OTN or WAN-PHY have any large deployment future, the
cheapest option is 'good enough' and whatever value you could extract
from OTN or WAN-PHY, will be difficult to capitalise, people usually
don't even capitalise the capabilities they already pay for in the
cheaper technologies.


A handful of OEM's still push OTN like it has just been invented, 
especially those still pushing "IPoDWDM" :-). Fair point, if you have a 
highly-meshed metro network with lots of drops to customers across a 
ring-mesh topology, there might be some value in OTN when delivering 
such services at low speeds (10G, 25G, 2.5G, 1G). But while the topology 
is valid, most networks aren't using high-end optical gear to drop 
low-speed services nowadays, even though, on a per-bit basis, the optical 
gear might be cheaper than a 1U IP/MPLS router doing the same job - if all 
you are considering is traffic, and not additional services that want to 
eat packets.



Of course WAN-PHY is dead post 10GE, a big reason for it to exist was
very old optical systems which simply could not regenerate ethernet
framing, not any features or functional benefits.


In our market, we are trending toward 10G and 100G orders converging for 
long haul and submarine asks. But pockets of 10G 
demand still exist in many African countries, and none of them have any 
WAN-PHY interest of any statistical significance.


That said, I don't expect any subsea cables getting built in the next 3 
years and later will have 10G as a product on the SLTE itself... it 
wouldn't be worth the spectrum.


Mark.

Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Saku Ytti
On Sat, 20 Apr 2024 at 14:35, Mark Tinka  wrote:

> Even when our market seeks OTN from European backhaul providers to extend 
> submarine access into Europe and Asia-Pac, it is often for structured 
> capacity grooming, and not for OAM benefit.
>
> It would be interesting to learn whether other markets in the world still 
> make a preference for OTN in lieu of Ethernet, for the OAM benefit, en masse. 
> When I worked in Malaysia back in the day (2007 - 2012), WAN-PHY was 
> generally asked for for 10G services, until about 2010; when folk started to 
> choose LAN-PHY. The reason, back then, was to get that extra 1% of pipe 
> bandwidth :-).

Oh I don't think OTN or WAN-PHY have any large deployment future, the
cheapest option is 'good enough' and whatever value you could extract
from OTN or WAN-PHY, will be difficult to capitalise, people usually
don't even capitalise the capabilities they already pay for in the
cheaper technologies.
Of course WAN-PHY is dead post 10GE, a big reason for it to exist was
very old optical systems which simply could not regenerate ethernet
framing, not any features or functional benefits.



-- 
  ++ytti


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Mark Tinka



On 4/20/24 13:25, Saku Ytti wrote:


In most cases, modern optical long haul has a transponder, which
terminates your FEC, because clients offer gray, and you like
something a bit less depressing, like 1570.42nm.

This is not just FEC terminating, but also to a degree autonego
terminating, like RFI signal would be between you and transponder, so
these connections can be, and regularly are, provided without proper
end-to-end hardware liveliness, and even if they were delivered and
tested to have proper end-to-end HW liveliness, that may change during
operation, so line faults may or may not be propagated to both ends as
RFI assertion, and even if they are, how delayed they are, they may
suffer delay to allow for optical protection to engage, which may be
undesirable, as it eats into your convergence budget.

Of course the higher we go in the abstraction, the less likely you are
to get things like HW liveliness detection, like I don't really see
anyone asking for this in their pseudowire services, even though it's
something that actually can be delivered. In Junos it's a single
config stanza in interface, to assert RFI to client port, if
pseudowire goes down in the operator network.


In our market (Africa), for both terrestrial and submarine services, 
OTN-type circuits are not typically ordered. Network operators are not 
really interested in receiving the additional link data that OTN or 
WAN-PHY provides. They truly want to leave the operation of the 
underlying transport backbone to the transport operator.


The few times we have come across the market asking for OTN is if they 
want to groom 10x 10G into 1x 100G, for example, to deliver structured 
services downstream.


Even when our market seeks OTN from European backhaul providers to 
extend submarine access into Europe and Asia-Pac, it is often for 
structured capacity grooming, and not for OAM benefit.


It would be interesting to learn whether other markets in the world 
still make a preference for OTN in lieu of Ethernet, for the OAM 
benefit, en masse. When I worked in Malaysia back in the day (2007 - 
2012), WAN-PHY was generally asked for for 10G services, until about 
2010, when folk started to choose LAN-PHY. The reason, back then, was to 
get that extra 1% of pipe bandwidth :-).


Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Saku Ytti
On Sat, 20 Apr 2024 at 10:00, Mark Tinka  wrote:

> This would only matter on ultra long haul optical spans where the signal 
> would need to be regenerated, where - among many other values - FEC would 
> need to be decoded, corrected and re-applied.

In most cases, modern optical long haul has a transponder, which
terminates your FEC, because clients offer gray, and you like
something a bit less depressing, like 1570.42nm.

This is not just FEC terminating, but also to a degree autonego
terminating, like RFI signal would be between you and transponder, so
these connections can be, and regularly are, provided without proper
end-to-end hardware liveliness, and even if they were delivered and
tested to have proper end-to-end HW liveliness, that may change during
operation, so line faults may or may not be propagated to both ends as
RFI assertion, and even if they are, how delayed they are, they may
suffer delay to allow for optical protection to engage, which may be
undesirable, as it eats into your convergence budget.

Of course the higher we go in the abstraction, the less likely you are
to get things like HW liveliness detection, like I don't really see
anyone asking for this in their pseudowire services, even though it's
something that actually can be delivered. In Junos it's a single
config stanza in interface, to assert RFI to client port, if
pseudowire goes down in the operator network.

-- 
  ++ytti


Re: constant FEC errors juniper mpc10e 400g

2024-04-20 Thread Mark Tinka



On 4/19/24 10:08, Saku Ytti wrote:


Of course there are limits to this, as FEC is hop-by-hop, so in
long-haul you'll know about circuit quality to the transponder, not
end-to-end. Unlike in wan-phy, OTN where you know both.

Technically optical transport could induce FEC errors, if there are
FEC errors on any hop, so consumers of optical networks need not have
access to optical networks to know if it's end-to-end clean.


This would only matter on ultra long haul optical spans where the signal 
would need to be regenerated, where - among many other values - FEC 
would need to be decoded, corrected and re-applied.


SD-FEC already allows for a significant improvement in optical reach for 
a given modulation. This negates the need for early regeneration, 
assuming other optical penalties and impairments are satisfactorily 
compensated for.


Of course, what a market defines as long haul or ultra long haul may 
vary; add to that the variability of regeneration spacing in such 
scenarios being quite wide, on the order of 600km - 1,000km. Much of 
this will come down to fibre, ROADM and coherent pluggable quality.


Mark.

Re: constant FEC errors juniper mpc10e 400g

2024-04-19 Thread Saku Ytti
On Fri, 19 Apr 2024 at 10:55, Mark Tinka  wrote:

> FEC is amazing.

> At higher data rates (100G and 400G) for long and ultra long haul optical 
> networks, SD-FEC (Soft Decision FEC) carries a higher overhead penalty 
> compared to HD-FEC (Hard Decision FEC), but the net OSNR gain more than 
> compensates for that, and makes it worth it to increase transmission distance 
> without compromising throughput.

Of course there are limits to this, as FEC is hop-by-hop, so in
long-haul you'll know about circuit quality to the transponder, not
end-to-end. Unlike in wan-phy, OTN where you know both.

Technically optical transport could induce FEC errors, if there are
FEC errors on any hop, so consumers of optical networks need not have
access to optical networks to know if it's end-to-end clean. Much like
cut-through switching can induce errors via some symbols to
communicate that CRC errors happened earlier, so the receiver doesn't
have to worry about problems on their end.

-- 
  ++ytti


Re: constant FEC errors juniper mpc10e 400g

2024-04-19 Thread Mark Tinka



On 4/19/24 08:01, Saku Ytti wrote:


The frames in FEC are idle frames between actual ethernet frames. So
you recall right, without FEC, you won't see this idle traffic.

It's very very good, because now you actually know before putting the
circuit in production, if the circuit works or not.

Lots of people have processes to ping from router-to-router for N time,
trying to determine circuit correctness before putting traffic on it,
which looks absolutely childish compared to FEC, both in terms of how
reliable the presumed outcome is and how long it takes to get to that
presumed outcome.


FEC is amazing.

At higher data rates (100G and 400G) for long and ultra long haul 
optical networks, SD-FEC (Soft Decision FEC) carries a higher overhead 
penalty compared to HD-FEC (Hard Decision FEC), but the net OSNR gain 
more than compensates for that, and makes it worth it to increase 
transmission distance without compromising throughput.
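
To put rough numbers on that trade-off, a small sketch (the overhead percentages 
and the DP-16QAM assumption are illustrative only, not tied to any specific vendor 
or standard):

# For a fixed client payload, more FEC overhead means a higher line rate and,
# for a given modulation, a higher symbol rate - the price paid for coding gain.
def line_rate_gbps(payload_gbps, fec_overhead):
    return payload_gbps * (1 + fec_overhead)

payload = 400.0  # Gbps of client payload
for name, overhead in [("HD-FEC (~7% overhead, illustrative)", 0.07),
                       ("SD-FEC (~20% overhead, illustrative)", 0.20)]:
    rate = line_rate_gbps(payload, overhead)
    baud = rate / 8  # DP-16QAM carries 8 bits per symbol (example assumption)
    print(f"{name}: line rate {rate:.0f} Gbps, ~{baud:.1f} GBd at DP-16QAM")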


Mark.

Re: constant FEC errors juniper mpc10e 400g

2024-04-19 Thread Saku Ytti
On Thu, 18 Apr 2024 at 21:49, Aaron Gould  wrote:

> Thanks.  What "all the ethernet control frame juju" might you be referring 
> to?  I don't recall Ethernet, in and of itself, just sending stuff back and 
> forth.  Does anyone know if this FEC stuff I see concurring is actually 
> contained in Ethernet Frames?  If so, please send a link to show the ethernet 
> frame structure as it pertains to this 400g fec stuff.  If so, I'd really 
> like to know the header format, etc.

The frames in FEC are idle frames between actual ethernet frames. So
you recall right, without FEC, you won't see this idle traffic.

It's very very good, because now you actually know before putting the
circuit in production, if the circuit works or not.

Lots of people have processes to ping from router-to-router for N time,
trying to determine circuit correctness before putting traffic on it,
which looks absolutely childish compared to FEC, both in terms of how
reliable the presumed outcome is and how long it takes to get to that
presumed outcome.

-- 
  ++ytti


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Tom Beecher
I'm being sloppy with my verbiage, it's just been a long time since
I thought about this in detail, sorry.

The MAC layer hands bits to the Media Independent Interface, which connects
the MAC to the PHY. The PHY converts the digital 1/0 into the form required
by the media transmission type; the 'what goes over the wire' L1 stuff. The
method of encoding will always add SOME number of bits as overhead. Ex,
64b/66b means that for every 64 bits of data to transmit, 2 bits are added,
so 66 actual bits are transmitted. This encoding overhead is what I meant
when I said 'ethernet control frame juju'. This starts getting into the
weeds on symbol/baud rates and stuff as well, which I dont want to do now
cause I'm even rustier there.

When FEC is enabled, the number of overhead bits added to the transmission
increases. For 400G-FR4 for example, you start with 256b/257b , which is
doubled to 512b/514b for ($reason I cannot remember), then RS-FEC(544,514)
is applied, adding 30 more bits for FEC. Following the example, this means
544 bits are transmitted for every 512 bits of payload data. So , more
overhead. Those additional bits can correct up to 15 corrupted bits of the
payload.

All of these overhead bits are added in the PHY on the way out, and removed
on the way in. So you'll never see them on a packet capture unless you're
using something that's actually grabbing the bits off the wire.

( Pretty sure this is right, anyone please correct me if I munged any of it
up.)
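
One note for anyone following the arithmetic: IEEE 802.3 Clause 119 defines
RS(544,514) over 10-bit symbols, so the 544/514 ratio is in symbols (30 parity
symbols per codeword, correcting up to 15 symbol errors). A small sketch of how
the overheads stack up for 400GBASE-R, assuming 400G-FR4's four PAM4 lanes:

# Encoding/FEC overheads for 400GBASE-R: 256b/257b transcoding, then RS(544,514).
mac_rate_bps = 400e9                  # 400 Gb/s of payload

transcode = 257 / 256                 # 256b/257b transcoding overhead
rs_fec = 544 / 514                    # RS(544,514): 30 parity symbols per codeword

line_rate_bps = mac_rate_bps * transcode * rs_fec   # total rate on the wire
per_lane_bps = line_rate_bps / 4                    # 400G-FR4: 4 optical lanes
baud_per_lane = per_lane_bps / 2                    # PAM4: 2 bits per symbol

print(f"Line rate after FEC: {line_rate_bps / 1e9:.1f} Gb/s")    # ~425.0
print(f"Per-lane rate:       {per_lane_bps / 1e9:.2f} Gb/s")     # ~106.25
print(f"Per-lane baud:       {baud_per_lane / 1e9:.3f} GBd")     # ~53.125
print(f"RS-FEC overhead:     {(rs_fec - 1) * 100:.1f}%")         # ~5.8%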


On Thu, Apr 18, 2024 at 2:45 PM Aaron Gould  wrote:

> Thanks.  What "all the ethernet control frame juju" might you be referring
> to?  I don't recall Ethernet, in and of itself, just sending stuff back and
> forth.  Does anyone know if this FEC stuff I see concurring is actually
> contained in Ethernet Frames?  If so, please send a link to show the
> ethernet frame structure as it pertains to this 400g fec stuff.  If so, I'd
> really like to know the header format, etc.
>
> -Aaron
> On 4/18/2024 1:17 PM, Tom Beecher wrote:
>
> FEC is occurring at the PHY , below the PCS.
>
> Even if you're not sending any traffic, all the ethernet control frame
> juju is still going back and forth, which FEC may have to correct.
>
> I *think* (but not 100% sure) that for anything that by spec requires FEC,
> there is a default RS-FEC type that will be used, which *may* be able to be
> changed by the device. Could be fixed though, I honestly cannot remember.
>
> On Thu, Apr 18, 2024 at 1:35 PM Aaron Gould  wrote:
>
>> Not to belabor this, but so interesting... I need a FEC-for-Dummies or 
>> FEC-for-IP/Ethernet-Engineers...
>>
>> Shown below, my 400g interface with NO config at all... Interface has no 
>> traffic at all, no packets at all  BUT, lots of FEC hits.  Interesting 
>> this FEC-thing.  I'd love to have a fiber splitter and see if wireshark 
>> could read it and show me what FEC looks like...but something tells me i 
>> would need a 400g sniffer to read it, lol
>>
>> It's like FEC (fec119 in this case) is this automatic thing running between 
>> interfaces (hardware i guess), with no protocols and nothing needed at all 
>> in order to function.
>>
>> -Aaron
>>
>>
>> {master}
>> me@mx960> show configuration interfaces et-7/1/4 | display set
>>
>> {master}
>> me@mx960>
>>
>> {master}
>> me@mx960> clear interfaces statistics et-7/1/4
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep packet
>> Input packets : 0
>> Output packets: 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep rror
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors28209
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate2347
>> FEC Uncorrected Errors Rate 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep packet
>> Input packets : 0
>> Output packets: 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep rror
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors45153
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate  29
>> FEC Uncorrected 

Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Charles Polisher




On 4/18/24 11:45, Aaron Gould wrote:


Thanks.  What "all the ethernet control frame juju" might you be 
referring to?  I don't recall Ethernet, in and of itself, just sending 
stuff back and forth.  Does anyone know if this FEC stuff I see 
concurring is actually contained in Ethernet Frames?  If so, please 
send a link to show the ethernet frame structure as it pertains to 
this 400g fec stuff.  If so, I'd really like to know the header 
format, etc.


-Aaron



IEEE Std 802.3™‐2022 Standard for Ethernet
(§65.2.3.2 FEC frame format p.2943)
https://ieeexplore.ieee.org/browse/standards/get-program/page/series?id=68

Also helpful, generally:
ITU-T Recommendation G.975 (2000), Forward Error Correction for Submarine 
Systems

https://www.itu.int/rec/dologin_pub.asp?lang=e=T-REC-G.975-200010-I!!PDF-E=items



Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread JÁKÓ András
> What "all the ethernet control frame juju" might you be referring
> to?  I don't recall Ethernet, in and of itself, just sending stuff back and
> forth.

I did not read the 100G Ethernet specs, but as far as I remember
FastEthernet (e.g. 100BASE-FX) uses 4B/5B coding on the line, borrowed
from FDDI. Octets of Ethernet frames are encoded to these 5-bit
codewords, and there are valid codewords for other stuff, like idle
symbols transmitted continuously between frames.

Gigabit Ethernet (1000BASE-X) uses 8B/10B code on the line (from Fibre
Channel). In GE there are also special (not frame octet) PCS codewords
used for auto-negotiation, frame bursting, etc.

So I guess these are not frames that you see, but codewords representing
other data, outside Ethernet frames.

András
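
For comparison, the overheads of the line codes mentioned here, plus the
transcoding used at higher speeds (a quick sketch):

# Coding overhead of common Ethernet line codes.
codes = [
    ("4B/5B     (100BASE-FX, borrowed from FDDI)", 4, 5),
    ("8B/10B    (1000BASE-X, from Fibre Channel)", 8, 10),
    ("64b/66b   (10GBASE-R)", 64, 66),
    ("256b/257b (400GBASE-R transcoding)", 256, 257),
]
for name, data_bits, line_bits in codes:
    overhead = (line_bits - data_bits) / data_bits * 100
    print(f"{name}: {overhead:.2f}% overhead")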


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Aaron Gould
Thanks.  What "all the ethernet control frame juju" might you be 
referring to?  I don't recall Ethernet, in and of itself, just sending 
stuff back and forth.  Does anyone know if this FEC stuff I see 
concurring is actually contained in Ethernet Frames?  If so, please send 
a link to show the ethernet frame structure as it pertains to this 400g 
fec stuff.  If so, I'd really like to know the header format, etc.


-Aaron

On 4/18/2024 1:17 PM, Tom Beecher wrote:

FEC is occurring at the PHY , below the PCS.

Even if you're not sending any traffic, all the ethernet control frame 
juju is still going back and forth, which FEC may have to correct.


I *think* (but not 100% sure) that for anything that by spec requires 
FEC, there is a default RS-FEC type that will be used, which *may* be 
able to be changed by the device. Could be fixed though, I honestly 
cannot remember.


On Thu, Apr 18, 2024 at 1:35 PM Aaron Gould  wrote:

Not to belabor this, but so interesting... I need a FEC-for-Dummies or 
FEC-for-IP/Ethernet-Engineers...

Shown below, my 400g interface with NO config at all... Interface has no 
traffic at all, no packets at all  BUT, lots of FEC hits.  Interesting this 
FEC-thing.  I'd love to have a fiber splitter and see if wireshark could read 
it and show me what FEC looks like...but something tells me i would need a 400g 
sniffer to read it, lol

It's like FEC (fec119 in this case) is this automatic thing running between 
interfaces (hardware i guess), with no protocols and nothing needed at all in 
order to function.

-Aaron

{master}
me@mx960> show configuration interfaces et-7/1/4 | display set

{master}
me@mx960>

{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
     Input packets : 0
     Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
     Bit errors 0
     Errored blocks 0
   Ethernet FEC statistics  Errors
     FEC Corrected Errors    28209
     FEC Uncorrected Errors  0
     FEC Corrected Errors Rate    2347
     FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
     Input packets : 0
     Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
     Bit errors 0
     Errored blocks 0
   Ethernet FEC statistics  Errors
     FEC Corrected Errors    45153
     FEC Uncorrected Errors  0
     FEC Corrected Errors Rate  29
     FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
     Input packets : 0
     Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
     Bit errors 0
     Errored blocks 0
   Ethernet FEC statistics  Errors
     FEC Corrected Errors    57339
     FEC Uncorrected Errors  0
     FEC Corrected Errors Rate    2378
     FEC Uncorrected Errors Rate 0

{master}
me@mx960>


On 4/18/2024 7:13 AM, Mark Tinka wrote:



On 4/17/24 23:24, Aaron Gould wrote:


Well JTAC just said that it seems ok, and that 400g is going to
show 4x more than 100g "This is due to having to synchronize
much more to support higher data."



We've seen the same between Juniper and Arista boxes in the same
rack running at 100G, despite cleaning fibres, swapping optics,
moving ports, moving line cards, e.t.c. TAC said it's a
non-issue, and to be expected, and shared the same KB's.

It's a bit disconcerting when you plot the data on your NMS, but
it's not material.

Mark.


-- 
   

Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Tom Beecher
FEC is occurring at the PHY , below the PCS.

Even if you're not sending any traffic, all the ethernet control frame juju
is still going back and forth, which FEC may have to correct.

I *think* (but not 100% sure) that for anything that by spec requires FEC,
there is a default RS-FEC type that will be used, which *may* be able to be
changed by the device. Could be fixed though, I honestly cannot remember.

On Thu, Apr 18, 2024 at 1:35 PM Aaron Gould  wrote:

> Not to belabor this, but so interesting... I need a FEC-for-Dummies or 
> FEC-for-IP/Ethernet-Engineers...
>
> Shown below, my 400g interface with NO config at all... Interface has no 
> traffic at all, no packets at all  BUT, lots of FEC hits.  Interesting 
> this FEC-thing.  I'd love to have a fiber splitter and see if wireshark could 
> read it and show me what FEC looks like...but something tells me i would need 
> a 400g sniffer to read it, lol
>
> It's like FEC (fec119 in this case) is this automatic thing running between 
> interfaces (hardware i guess), with no protocols and nothing needed at all in 
> order to function.
>
> -Aaron
>
>
> {master}
> me@mx960> show configuration interfaces et-7/1/4 | display set
>
> {master}
> me@mx960>
>
> {master}
> me@mx960> clear interfaces statistics et-7/1/4
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep packet
> Input packets : 0
> Output packets: 0
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep rror
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors28209
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate2347
> FEC Uncorrected Errors Rate 0
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep packet
> Input packets : 0
> Output packets: 0
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep rror
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors45153
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate  29
> FEC Uncorrected Errors Rate 0
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep packet
> Input packets : 0
> Output packets: 0
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep rror
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors57339
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate2378
> FEC Uncorrected Errors Rate 0
>
> {master}
> me@mx960>
>
>
> On 4/18/2024 7:13 AM, Mark Tinka wrote:
>
>
>
> On 4/17/24 23:24, Aaron Gould wrote:
>
> Well JTAC just said that it seems ok, and that 400g is going to show 4x
> more than 100g "This is due to having to synchronize much more to support
> higher data."
>
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
> It's a bit disconcerting when you plot the data on your NMS, but it's not
> material.
>
> Mark.
>
> --
> -Aaron
>
>


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Aaron Gould

Not to belabor this, but so interesting... I need a FEC-for-Dummies or 
FEC-for-IP/Ethernet-Engineers...

Shown below, my 400g interface with NO config at all... Interface has no 
traffic at all, no packets at all  BUT, lots of FEC hits.  Interesting this 
FEC-thing.  I'd love to have a fiber splitter and see if wireshark could read 
it and show me what FEC looks like...but something tells me i would need a 400g 
sniffer to read it, lol

It's like FEC (fec119 in this case) is this automatic thing running between 
interfaces (hardware i guess), with no protocols and nothing needed at all in 
order to function.

-Aaron

{master}
me@mx960> show configuration interfaces et-7/1/4 | display set

{master}
me@mx960>

{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
    Bit errors 0
    Errored blocks 0
  Ethernet FEC statistics  Errors
    FEC Corrected Errors    28209
    FEC Uncorrected Errors  0
    FEC Corrected Errors Rate    2347
    FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
    Bit errors 0
    Errored blocks 0
  Ethernet FEC statistics  Errors
    FEC Corrected Errors    45153
    FEC Uncorrected Errors  0
    FEC Corrected Errors Rate  29
    FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
    Bit errors 0
    Errored blocks 0
  Ethernet FEC statistics  Errors
    FEC Corrected Errors    57339
    FEC Uncorrected Errors  0
    FEC Corrected Errors Rate    2378
    FEC Uncorrected Errors Rate 0

{master}
me@mx960>
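
For a sense of scale, the corrected-error rates above (roughly 2,000-2,400 per
second) can be turned into an approximate pre-FEC error ratio. This is a rough
sketch only: it assumes the Junos "FEC Corrected Errors" counter counts corrected
10-bit RS symbols (an assumption about the accounting, not something confirmed in
this thread) and uses the nominal 425 Gb/s 400GBASE-R line rate.

# Approximate pre-FEC symbol error ratio from the corrected-error rate.
line_rate_bps = 425e9                      # nominal 400GBASE-R rate after FEC
symbols_per_sec = line_rate_bps / 10       # 10-bit RS(544,514) symbols
corrected_per_sec = 2347                   # "FEC Corrected Errors Rate" above

ser = corrected_per_sec / symbols_per_sec
print(f"~{ser:.1e} pre-FEC symbol error ratio")   # ~5.5e-08
# RS(544,514) can correct up to 15 symbol errors per codeword, so an error
# ratio this low is nowhere near producing uncorrected errors.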


On 4/18/2024 7:13 AM, Mark Tinka wrote:



On 4/17/24 23:24, Aaron Gould wrote:

Well JTAC just said that it seems ok, and that 400g is going to show 
4x more than 100g "This is due to having to synchronize much more to 
support higher data."




We've seen the same between Juniper and Arista boxes in the same rack 
running at 100G, despite cleaning fibres, swapping optics, moving 
ports, moving line cards, e.t.c. TAC said it's a non-issue, and to be 
expected, and shared the same KB's.


It's a bit disconcerting when you plot the data on your NMS, but it's 
not material.


Mark.


--
-Aaron


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Mark Tinka




On 4/18/24 15:45, Tom Beecher wrote:



 Just for extra clarity off those KB, probably has nothing to do with 
vendor interop as implied in at least one of those.


Yes, correct.

Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Tom Beecher
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
>
 Just for extra clarity off those KB, probably has nothing to do with
vendor interop as implied in at least one of those.

You will see some volume of FEC corrected on 400G FR4 with the same router
hardware and transceiver vendor on both ends, with a 3m patch. Short of
duct taping the transceivers together, not going to get much more optimal
than that.

As far as I can suss out from my reading and what Smart People have told
me, certain combinations of modulation and lambda are just more susceptible
to transmission noise, so for those FEC is required by the standard. PAM4
modulation does seem to be a common thread, but there are some PAM2/NRZs
that FEC is also required for. ( 100GBASE-CWDM4 for example. )



On Thu, Apr 18, 2024 at 8:15 AM Mark Tinka  wrote:

>
>
> On 4/17/24 23:24, Aaron Gould wrote:
>
> > Well JTAC just said that it seems ok, and that 400g is going to show
> > 4x more than 100g "This is due to having to synchronize much more to
> > support higher data."
> >
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
> It's a bit disconcerting when you plot the data on your NMS, but it's
> not material.
>
> Mark.
>


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Thomas Scott
Standard deviation is now your friend. We learned to alert when FEC falls
outside of the SD, and on CRCs - although the second should already be alerting.
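
A minimal sketch of that kind of alerting (the sample values are made up; a real
deployment would feed in the FEC corrected-error rate from telemetry or the NMS):

# Alert when the FEC corrected-error rate falls outside the recent baseline.
from statistics import mean, stdev

history = [2100, 2350, 2290, 2180, 2410, 2260, 2330, 2200]  # corrected errors/sec
latest = 9800                                               # newest sample
threshold = 3                                               # alert beyond 3 SD

mu, sigma = mean(history), stdev(history)
if sigma > 0 and abs(latest - mu) > threshold * sigma:
    print(f"ALERT: {latest}/s vs baseline {mu:.0f} +/- {sigma:.0f}")
else:
    print(f"OK: {latest}/s within {threshold} SD of baseline")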

On Thu, Apr 18, 2024 at 8:15 AM Mark Tinka  wrote:

>
> On 4/17/24 23:24, Aaron Gould wrote:
>
> > Well JTAC just said that it seems ok, and that 400g is going to show
> > 4x more than 100g "This is due to having to synchronize much more to
> > support higher data."
> >
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
> It's a bit disconcerting when you plot the data on your NMS, but it's
> not material.
>
> Mark.
>
>


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Joel Busch via NANOG
In my reading the 400GBASE-R Physical Coding Sublayer (PCS) always 
includes the FEC. This is defined in clause 119 of IEEE Std 802.3-2022, 
and most easily seen in "Figure 119–2—Functional block diagram" if you 
don't want to get buried in the prose. Nothing there seems to imply that 
the FEC is optional.


I'd be happy to be corrected though. It may well be that there is a 
method to reading these tomes, that I have not discovered yet. It is the 
first time I dove deep into any IEEE standard.


Best regards
Joel

On 17.04.2024 21:47, Tom Beecher wrote:

Isn't FEC required by the 400G spec?

On Wed, Apr 17, 2024 at 3:45 PM Aaron Gould  wrote:



i did.  Usually my NANOG and J-NSP email list gets me a quicker
solution than JTAC.

-Aaron

On 4/17/2024 2:37 PM, Dominik Dobrowolski wrote:

Open a JTAC case,
That looks like a work for them


Kind Regards,
Dominik

On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our 
core to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I 
see constant FEC errors.  FEC is new to me.  Anyone know why this is occurring? 
 Shown below, is an interface with no traffic, but seeing constant FEC errors.  
This is (2) MX960's cabled directly, no dwdm or anything between them... just a 
fiber patch cable.



{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors0
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   0
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 4302
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   8
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 8796
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 146
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:59 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors15582
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 111
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:19:01 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors20342
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 256
 FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate: 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
   Interface index: 226, SNMP ifIndex: 800
 

Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Mark Tinka




On 4/17/24 23:24, Aaron Gould wrote:

Well JTAC just said that it seems ok, and that 400g is going to show 
4x more than 100g "This is due to having to synchronize much more to 
support higher data."




We've seen the same between Juniper and Arista boxes in the same rack 
running at 100G, despite cleaning fibres, swapping optics, moving ports, 
moving line cards, e.t.c. TAC said it's a non-issue, and to be expected, 
and shared the same KB's.


It's a bit disconcerting when you plot the data on your NMS, but it's 
not material.


Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Joe Antkowiak
Corrected FEC errors are pretty normal for 400G FR4

On Wednesday, April 17th, 2024 at 3:36 PM, Aaron Gould  wrote:

> We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core 
> to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
> constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
> Shown below, is an interface with no traffic, but seeing constant FEC errors. 
>  This is (2) MX960's cabled directly, no dwdm or anything between them... 
> just a fiber patch cable.
>
> {master}
> me@mx960> clear interfaces statistics et-7/1/4
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
> ---(refreshed at 2024-04-17 14:18:53 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors0
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate   0
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:18:55 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors 4302
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate   8
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:18:57 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors 8796
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate 146
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:18:59 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors15582
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate 111
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:19:01 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors20342
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate 256
> FEC Uncorrected Errors Rate 0
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>
> {master}
> me@mx960> show interfaces et-7/1/4
> Physical interface: et-7/1/4, Enabled, Physical link is Up
>   Interface index: 226, SNMP ifIndex: 800
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
>   Flow control: Enabled
>   Pad to minimum frame size: Disabled
>   Device flags   : Present Running
>   Interface flags: SNMP-Traps Internal: 0x4000
>   Link flags : None
>   CoS queues : 8 supported, 8 maximum usable queues
>   Schedulers : 0
>   Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>   Active alarms  : None
>   Active defects : None
>   PCS statistics  Seconds
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC Mode  : FEC119
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors   801787
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate2054
> FEC Uncorrected Errors Rate 0
>   Link Degrade :
> Link Monitoring   :  Disable
>   Interface transmit statistics: Disabled
>
>   Logical interface et-7/1/4.0 (Index 420) (SNMP ifIndex 815)
> Flags: Up SNMP-Traps 0x4004000 Encapsulation: 

Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Schylar Utley
FEC on 400G is required and expected. As long as it is “corrected”, you have 
nothing to worry about. We had the same realisation recently when upgrading to 
400G.
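
A minimal sketch of that distinction (Python, with hypothetical counter 
samples shaped like the output quoted below): an NMS check only needs to 
react when the uncorrected counter moves.

  # Alert only when FEC Uncorrected Errors increments; a steadily climbing
  # FEC Corrected Errors counter is expected on a healthy 400G FR4 link.
  def should_alert(prev, curr):
      return curr["fec_uncorrected"] > prev["fec_uncorrected"]

  prev = {"fec_corrected": 4302, "fec_uncorrected": 0}
  curr = {"fec_corrected": 8796, "fec_uncorrected": 0}
  print(should_alert(prev, curr))   # False: corrected errors alone are fine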

-Schylar

From: NANOG  on behalf of Aaron 
Gould 
Date: Wednesday, April 17, 2024 at 2:38 PM
To: nanog@nanog.org 
Subject: constant FEC errors juniper mpc10e 400g

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core to 
400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
Shown below, is an interface with no traffic, but seeing constant FEC errors.  
This is (2) MX960's cabled directly, no dwdm or anything between them... just a 
fiber patch cable.







{master}

me@mx960> clear interfaces statistics et-7/1/4



{master}

me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2

---(refreshed at 2024-04-17 14:18:53 CDT)---

  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,

Bit errors 0

Errored blocks 0

  Ethernet FEC statistics  Errors

FEC Corrected Errors0

FEC Uncorrected Errors  0

FEC Corrected Errors Rate   0

FEC Uncorrected Errors Rate 0

---(refreshed at 2024-04-17 14:18:55 CDT)---

  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,

Bit errors 0

Errored blocks 0

  Ethernet FEC statistics  Errors

FEC Corrected Errors 4302

FEC Uncorrected Errors  0

FEC Corrected Errors Rate   8

FEC Uncorrected Errors Rate 0

---(refreshed at 2024-04-17 14:18:57 CDT)---

  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,

Bit errors 0

Errored blocks 0

  Ethernet FEC statistics  Errors

FEC Corrected Errors 8796

FEC Uncorrected Errors  0

FEC Corrected Errors Rate 146

FEC Uncorrected Errors Rate 0

---(refreshed at 2024-04-17 14:18:59 CDT)---

  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,

Bit errors 0

Errored blocks 0

  Ethernet FEC statistics  Errors

FEC Corrected Errors15582

FEC Uncorrected Errors  0

FEC Corrected Errors Rate 111

FEC Uncorrected Errors Rate 0

---(refreshed at 2024-04-17 14:19:01 CDT)---

  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,

Bit errors 0

Errored blocks 0

  Ethernet FEC statistics  Errors

FEC Corrected Errors20342

FEC Uncorrected Errors  0

FEC Corrected Errors Rate 256

FEC Uncorrected Errors Rate 0



{master}

me@mx960> show interfaces et-7/1/4 | grep "put rate"

  Input rate : 0 bps (0 pps)

  Output rate: 0 bps (0 pps)



{master}

me@mx960> show interfaces et-7/1/4

Physical interface: et-7/1/4, Enabled, Physical link is Up

  Interface index: 226, SNMP ifIndex: 800

  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,

  Flow control: Enabled

  Pad to minimum frame size: Disabled

  Device flags   : Present Running

  Interface flags: SNMP-Traps Internal: 0x4000

  Link flags : None

  CoS queues : 8 supported, 8 maximum usable queues

  Schedulers : 0

  Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)

  Input rate : 0 bps (0 pps)

  Output rate: 0 bps (0 pps)

  Active alarms  : None

  Active defects : None

  PCS statistics  Seconds

Bit errors 0

Errored blocks 0

  Ethernet FEC Mode  : FEC119

  Ethernet FEC statistics  Errors

FEC Corrected Errors   801787

FEC Uncorrected Errors  0

FEC Corrected Errors Rate2054

FEC Uncorrected Errors Rate 0

  Link Degrade :

Link Monitoring   :  Disable

  Interface transmit statistics: Disabled



  Logical 

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Aaron Gould
Well, JTAC just said that it seems OK, and that 400g is going to show 4x 
more than 100g: "This is due to having to synchronize much more to 
support higher data."
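
As a rough sanity check on that 4x figure, a back-of-the-envelope sketch 
(Python) assuming the same raw per-bit error probability on both ports, 
which is generous to 400G since PAM4 actually runs at a worse raw BER than 
100G NRZ:

  # At a fixed raw BER, errored bits per second scale with line rate, so a
  # 400G port naturally logs about 4x the corrected errors of a 100G port
  # even when nothing is wrong.
  ber = 1e-8                                  # assumed raw error probability
  errors_per_sec_100g = ber * 100e9
  errors_per_sec_400g = ber * 400e9
  print(errors_per_sec_400g / errors_per_sec_100g)   # 4.0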


-Aaron



On 4/17/2024 4:04 PM, Aaron Gould wrote:


Interesting, thanks all, the JTAC rep got back to me and also pretty 
much said it's not an issue and is expected... also, the JTAC rep cited 2 
KB's, shown here, both using 100g as an example... question please, 
should I understand that this is also true about 400g, even though his 
KB's speak about 100g ?


KB77305
KB35145

https://supportportal.juniper.net/s/article/What-is-the-acceptable-rate-of-FEC-corrected-errors-for-100G-interface 

https://supportportal.juniper.net/s/article/PTX-FEC-corrected-errors-increasing-on-link-between-QSFP-100GBASE-SR4-740-058734-and-QSFP-100G-SR4-T2-740-061405?language=en_US 



-Aaron


On 4/17/2024 3:58 PM, Matt Erculiani wrote:
At some point, an error rate would exceed the ability of forward 
error correction (FEC) overhead to compensate, resulting in CRC 
errors. You're not seeing those so all is technically well.


It's not so much how many packets come in with errors that causes a 
problem, but what percentage of each packet is corrupted. The former 
is usually indicative of the latter though.


Just as Tom said, we're talking about a whole new animal than the NRZ 
we're used to inside the building. Long-haul and DCI folks deal with 
this stuff pretty regularly. The secret is keep everything clean and 
mind your bend radii. We won't get away with some of what we used to 
get away with.


-Matt

On Wed, Apr 17, 2024 at 1:49 PM Aaron Gould  wrote:

fec cliff?  is there a level of fec errors that i should be
worried about then?  not sure what you mean.

-Aaron

On 4/17/2024 2:46 PM, Matt Erculiani wrote:

I'm no TAC engineer, but the purpose of FEC is to take and
correct errors when the port is going so fast that errors are
simply inevitable. Working as Intended.

Easier (read: cheaper) to build in some error correction than
make the bits wiggle more reliably.

No idea if that rate of increment is alarming or not, but you've
not yet hit your FEC cliff so you appear to be fine.

-Matt

On Wed, Apr 17, 2024 at 1:40 PM Dominik Dobrowolski
 wrote:

Open a JTAC case,
That looks like a work for them


Kind Regards,
Dominik

On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade 
our core to 400g.  During initial testing of the 400g interface (400GBASE-FR4), 
I see constant FEC errors.  FEC is new to me.  Anyone know why this is 
occurring?  Shown below, is an interface with no traffic, but seeing constant 
FEC errors.  This is (2) MX960's cabled directly, no dwdm or anything between 
them... just a fiber patch cable.



{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors0
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   0
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 4302
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   8
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 8796
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 146
 FEC Uncorrected Errors Rate

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Aaron Gould
Interesting, thanks all, the JTAC rep got back to me and also pretty 
much said it's not an issue and is expected... also, the JTAC rep cited 2 
KBs, shown here, both using 100g as an example... a question, please: 
should I understand that this is also true about 400g, even though his 
KBs speak about 100g?


KB77305
KB35145

https://supportportal.juniper.net/s/article/What-is-the-acceptable-rate-of-FEC-corrected-errors-for-100G-interface 

https://supportportal.juniper.net/s/article/PTX-FEC-corrected-errors-increasing-on-link-between-QSFP-100GBASE-SR4-740-058734-and-QSFP-100G-SR4-T2-740-061405?language=en_US 



-Aaron


On 4/17/2024 3:58 PM, Matt Erculiani wrote:
At some point, an error rate would exceed the ability of forward error 
correction (FEC) overhead to compensate, resulting in CRC errors. 
You're not seeing those so all is technically well.


It's not so much how many packets come in with errors that causes a 
problem, but what percentage of each packet is corrupted. The former 
is usually indicative of the latter though.


Just as Tom said, we're talking about a whole new animal than the NRZ 
we're used to inside the building. Long-haul and DCI folks deal with 
this stuff pretty regularly. The secret is keep everything clean and 
mind your bend radii. We won't get away with some of what we used to 
get away with.


-Matt

On Wed, Apr 17, 2024 at 1:49 PM Aaron Gould  wrote:

fec cliff?  is there a level of fec errors that i should be worried
about then?  not sure what you mean.

-Aaron

On 4/17/2024 2:46 PM, Matt Erculiani wrote:

I'm no TAC engineer, but the purpose of FEC is to take and
correct errors when the port is going so fast that errors are
simply inevitable. Working as Intended.

Easier (read: cheaper) to build in some error correction than
make the bits wiggle more reliably.

No idea if that rate of increment is alarming or not, but you've
not yet hit your FEC cliff so you appear to be fine.

-Matt

On Wed, Apr 17, 2024 at 1:40 PM Dominik Dobrowolski
 wrote:

Open a JTAC case,
That looks like a work for them


Kind Regards,
Dominik

On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade 
our core to 400g.  During initial testing of the 400g interface (400GBASE-FR4), 
I see constant FEC errors.  FEC is new to me.  Anyone know why this is 
occurring?  Shown below, is an interface with no traffic, but seeing constant 
FEC errors.  This is (2) MX960's cabled directly, no dwdm or anything between 
them... just a fiber patch cable.



{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors0
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   0
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 4302
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   8
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 8796
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 146
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:59 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Matt Erculiani
At some point, an error rate would exceed the ability of forward error
correction (FEC) overhead to compensate, resulting in CRC errors. You're
not seeing those so all is technically well.
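
A rough sketch of where that cliff sits (Python; the pre-FEC BER limit used 
here is a commonly quoted ballpark for RS(544,514), assumed for illustration 
rather than taken from a Juniper document):

  # Compare an estimated pre-FEC BER against an assumed correction limit.
  PREFEC_BER_LIMIT = 2e-4        # assumed ballpark for RS(544,514) "KP4" FEC
  LINE_RATE_BPS = 400e9

  def cliff_margin(corrected_errors_per_sec):
      est_ber = corrected_errors_per_sec / LINE_RATE_BPS
      return PREFEC_BER_LIMIT / est_ber      # >1 means headroom remains

  # Even the 2054/s rate in Aaron's output leaves several orders of
  # magnitude of headroom before uncorrectable errors would appear.
  print(cliff_margin(2054))                  # ~3.9e4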

It's not so much how many packets come in with errors that causes a
problem, but what percentage of each packet is corrupted. The former is
usually indicative of the latter though.

Just as Tom said, we're talking about a whole new animal compared to the NRZ
we're used to inside the building. Long-haul and DCI folks deal with this
stuff pretty regularly. The secret is to keep everything clean and mind your
bend radii. We won't get away with some of what we used to get away with.

-Matt

On Wed, Apr 17, 2024 at 1:49 PM Aaron Gould  wrote:

> fec cliff?  is there a level of fec errors that i should be worried about
> then?  not sure what you mean.
>
> -Aaron
> On 4/17/2024 2:46 PM, Matt Erculiani wrote:
>
> I'm no TAC engineer, but the purpose of FEC is to take and correct errors
> when the port is going so fast that errors are simply inevitable. Working
> as Intended.
>
> Easier (read: cheaper) to build in some error correction than make the
> bits wiggle more reliably.
>
> No idea if that rate of increment is alarming or not, but you've not yet
> hit your FEC cliff so you appear to be fine.
>
> -Matt
>
> On Wed, Apr 17, 2024 at 1:40 PM Dominik Dobrowolski <
> dobrowolski.dom...@gmail.com> wrote:
>
>> Open a JTAC case,
>> That looks like a work for them
>>
>>
>> Kind Regards,
>> Dominik
>>
>> On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:
>>
>>> We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core 
>>> to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I 
>>> see constant FEC errors.  FEC is new to me.  Anyone know why this is 
>>> occurring?  Shown below, is an interface with no traffic, but seeing 
>>> constant FEC errors.  This is (2) MX960's cabled directly, no dwdm or 
>>> anything between them... just a fiber patch cable.
>>>
>>>
>>>
>>> {master}
>>> me@mx960> clear interfaces statistics et-7/1/4
>>>
>>> {master}
>>> me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
>>> ---(refreshed at 2024-04-17 14:18:53 CDT)---
>>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>>> filtering: Disabled,
>>> Bit errors 0
>>> Errored blocks 0
>>>   Ethernet FEC statistics  Errors
>>> FEC Corrected Errors0
>>> FEC Uncorrected Errors  0
>>> FEC Corrected Errors Rate   0
>>> FEC Uncorrected Errors Rate 0
>>> ---(refreshed at 2024-04-17 14:18:55 CDT)---
>>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>>> filtering: Disabled,
>>> Bit errors 0
>>> Errored blocks 0
>>>   Ethernet FEC statistics  Errors
>>> FEC Corrected Errors 4302
>>> FEC Uncorrected Errors  0
>>> FEC Corrected Errors Rate   8
>>> FEC Uncorrected Errors Rate 0
>>> ---(refreshed at 2024-04-17 14:18:57 CDT)---
>>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>>> filtering: Disabled,
>>> Bit errors 0
>>> Errored blocks 0
>>>   Ethernet FEC statistics  Errors
>>> FEC Corrected Errors 8796
>>> FEC Uncorrected Errors  0
>>> FEC Corrected Errors Rate 146
>>> FEC Uncorrected Errors Rate 0
>>> ---(refreshed at 2024-04-17 14:18:59 CDT)---
>>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>>> filtering: Disabled,
>>> Bit errors 0
>>> Errored blocks 0
>>>   Ethernet FEC statistics  Errors
>>> FEC Corrected Errors15582
>>> FEC Uncorrected Errors  0
>>> FEC Corrected Errors Rate 111
>>> FEC Uncorrected Errors Rate 0
>>> ---(refreshed at 2024-04-17 14:19:01 CDT)---
>>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>>> filtering: Disabled,
>>> Bit errors 0
>>> Errored blocks 0
>>>   Ethernet FEC statistics  Errors
>>> FEC Corrected Errors20342
>>> FEC Uncorrected Errors  0
>>> FEC Corrected Errors Rate 256
>>> FEC Uncorrected Errors Rate 

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Tom Beecher
Notes I found that I took from smart optical people :

"PAM4 runs at much lower SNRs than NRZ, because you're trying to read 4
distinct voltage levels instead of 2.Even the cleanest system will have
some of that, so the only way to make it usable is to have FEC in place."
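
A quick numerical illustration of that SNR gap (a simplified eye-amplitude 
argument only, ignoring everything else in the link budget):

  import math

  # With four levels squeezed into the same swing, each PAM4 eye is only
  # one third the height of an NRZ eye, i.e. roughly a 9.5 dB penalty.
  nrz_eyes = 2 - 1
  pam4_eyes = 4 - 1
  penalty_db = 20 * math.log10(pam4_eyes / nrz_eyes)
  print(round(penalty_db, 1))    # 9.5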



On Wed, Apr 17, 2024 at 4:01 PM Fredrik Holmqvist / I2B 
wrote:

> Hi.
>
> Looks like normal behavior:
>
>
> https://supportportal.juniper.net/s/article/PTX-FEC-corrected-errors-increasing-on-link-between-QSFP-100GBASE-SR4-740-058734-and-QSFP-100G-SR4-T2-740-061405?language=en_US
>
> "An incrementing FEC Corrected Errors counter is normal for a link that
> is running FEC. It just indicates that the errored bits have been
> corrected by FEC. "
>
> "Therefore, the incrementing FEC Corrected Errors counter might only be
> indicating an interoperability issue between the optics from .."
>
> ---
> Fredrik Holmqvist
> I2B & BBTA tjänster AB
> 08-590 90 000
>
> On 2024-04-17 21:36, Aaron Gould wrote:
> > We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our
> > core to 400g.  During initial testing of the 400g interface
> > (400GBASE-FR4), I see constant FEC errors.  FEC is new to me.  Anyone
> > know why this is occurring?  Shown below, is an interface with no
> > traffic, but seeing constant FEC errors.  This is (2) MX960's cabled
> > directly, no dwdm or anything between them... just a fiber patch
> > cable.
> >
> > {master}
> > me@mx960> clear interfaces statistics et-7/1/4
> >
> > {master}
> > me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
> > ---(refreshed at 2024-04-17 14:18:53 CDT)---
> >   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
> > BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
> > Source filtering: Disabled,
> > Bit errors 0
> > Errored blocks 0
> >   Ethernet FEC statistics  Errors
> > FEC Corrected Errors0
> > FEC Uncorrected Errors  0
> > FEC Corrected Errors Rate   0
> > FEC Uncorrected Errors Rate 0
> > ---(refreshed at 2024-04-17 14:18:55 CDT)---
> >   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
> > BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
> > Source filtering: Disabled,
> > Bit errors 0
> > Errored blocks 0
> >   Ethernet FEC statistics  Errors
> > FEC Corrected Errors 4302
> > FEC Uncorrected Errors  0
> > FEC Corrected Errors Rate   8
> > FEC Uncorrected Errors Rate 0
> > ---(refreshed at 2024-04-17 14:18:57 CDT)---
> >   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
> > BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
> > Source filtering: Disabled,
> > Bit errors 0
> > Errored blocks 0
> >   Ethernet FEC statistics  Errors
> > FEC Corrected Errors 8796
> > FEC Uncorrected Errors  0
> > FEC Corrected Errors Rate 146
> > FEC Uncorrected Errors Rate 0
> > ---(refreshed at 2024-04-17 14:18:59 CDT)---
> >   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
> > BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
> > Source filtering: Disabled,
> > Bit errors 0
> > Errored blocks 0
> >   Ethernet FEC statistics  Errors
> > FEC Corrected Errors15582
> > FEC Uncorrected Errors  0
> > FEC Corrected Errors Rate 111
> > FEC Uncorrected Errors Rate 0
> > ---(refreshed at 2024-04-17 14:19:01 CDT)---
> >   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
> > BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
> > Source filtering: Disabled,
> > Bit errors 0
> > Errored blocks 0
> >   Ethernet FEC statistics  Errors
> > FEC Corrected Errors20342
> > FEC Uncorrected Errors  0
> > FEC Corrected Errors Rate 256
> > FEC Uncorrected Errors Rate 0
> >
> > {master}
> > me@mx960> show interfaces et-7/1/4 | grep "put rate"
> >   Input rate : 0 bps (0 pps)
> >   Output rate: 0 bps (0 pps)
> >
> > {master}
> > me@mx960> show interfaces et-7/1/4
> > Physical interface: et-7/1/4, Enabled, Physical link is Up
> >   Interface index: 226, SNMP ifIndex: 800
> >   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
> > BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
> > Source filtering: Disabled,
> >   Flow control: Enabled
> >   Pad to minimum frame size: Disabled
> >   Device flags  

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Fredrik Holmqvist / I2B

Hi.

Looks like normal behavior:

https://supportportal.juniper.net/s/article/PTX-FEC-corrected-errors-increasing-on-link-between-QSFP-100GBASE-SR4-740-058734-and-QSFP-100G-SR4-T2-740-061405?language=en_US

"An incrementing FEC Corrected Errors counter is normal for a link that 
is running FEC. It just indicates that the errored bits have been 
corrected by FEC. "


"Therefore, the incrementing FEC Corrected Errors counter might only be 
indicating an interoperability issue between the optics from .."


---
Fredrik Holmqvist
I2B & BBTA tjänster AB
08-590 90 000

On 2024-04-17 21:36, Aaron Gould wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our
core to 400g.  During initial testing of the 400g interface
(400GBASE-FR4), I see constant FEC errors.  FEC is new to me.  Anyone
know why this is occurring?  Shown below, is an interface with no
traffic, but seeing constant FEC errors.  This is (2) MX960's cabled
directly, no dwdm or anything between them... just a fiber patch
cable.

{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
Source filtering: Disabled,
Bit errors 0
Errored blocks 0
  Ethernet FEC statistics  Errors
FEC Corrected Errors0
FEC Uncorrected Errors  0
FEC Corrected Errors Rate   0
FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
Source filtering: Disabled,
Bit errors 0
Errored blocks 0
  Ethernet FEC statistics  Errors
FEC Corrected Errors 4302
FEC Uncorrected Errors  0
FEC Corrected Errors Rate   8
FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
Source filtering: Disabled,
Bit errors 0
Errored blocks 0
  Ethernet FEC statistics  Errors
FEC Corrected Errors 8796
FEC Uncorrected Errors  0
FEC Corrected Errors Rate 146
FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:59 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
Source filtering: Disabled,
Bit errors 0
Errored blocks 0
  Ethernet FEC statistics  Errors
FEC Corrected Errors15582
FEC Uncorrected Errors  0
FEC Corrected Errors Rate 111
FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:19:01 CDT)---
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
Source filtering: Disabled,
Bit errors 0
Errored blocks 0
  Ethernet FEC statistics  Errors
FEC Corrected Errors20342
FEC Uncorrected Errors  0
FEC Corrected Errors Rate 256
FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate : 0 bps (0 pps)
  Output rate: 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
  Interface index: 226, SNMP ifIndex: 800
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps,
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled,
Source filtering: Disabled,
  Flow control: Enabled
  Pad to minimum frame size: Disabled
  Device flags   : Present Running
  Interface flags: SNMP-Traps Internal: 0x4000
  Link flags : None
  CoS queues : 8 supported, 8 maximum usable queues
  Schedulers : 0
  Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
  Input rate : 0 bps (0 pps)
  Output rate: 0 bps (0 pps)
  Active alarms  : None
  Active defects : None
  PCS statistics  Seconds
Bit errors 0
Errored blocks 0
  Ethernet FEC Mode  : FEC119
  Ethernet FEC statistics  Errors
FEC Corrected Errors   801787
FEC Uncorrected Errors  0
FEC Corrected Errors Rate

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Aaron Gould
Thanks Joe and Schylar, that's reassuring.  Tom, yes, I believe FEC is 
required for 400g, as you can see FEC119 listed in that output... and I 
understand you can't (or perhaps shouldn't) change it.


-Aaron

On 4/17/2024 2:43 PM, Joe Antkowiak wrote:

Corrected FEC errors are pretty normal for 400G FR4



On Wednesday, April 17th, 2024 at 3:36 PM, Aaron Gould 
 wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core to 
400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
Shown below, is an interface with no traffic, but seeing constant FEC errors.  
This is (2) MX960's cabled directly, no dwdm or anything between them... just a 
fiber patch cable.



{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors0
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   0
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 4302
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   8
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 8796
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 146
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:59 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors15582
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 111
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:19:01 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors20342
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 256
 FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate: 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
   Interface index: 226, SNMP ifIndex: 800
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
   Flow control: Enabled
   Pad to minimum frame size: Disabled
   Device flags   : Present Running
   Interface flags: SNMP-Traps Internal: 0x4000
   Link flags : None
   CoS queues : 8 supported, 8 maximum usable queues
   Schedulers : 0
   Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
   Input rate : 0 bps (0 pps)
   Output rate: 0 bps (0 pps)
   Active alarms  : None
   Active defects : None
   PCS statistics  Seconds
 Bit errors 0
 Errored blocks 0
   Ethernet FEC Mode  : FEC119
   Ethernet FEC statistics  Errors
 FEC Corrected Errors   801787
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate2054
 FEC Uncorrected Errors Rate 0
   Link Degrade :
 Link Monitoring   :  Disable
   Interface transmit statistics: Disabled

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Aaron Gould
FEC cliff?  Is there a level of FEC errors that I should be worried about, 
then?  Not sure what you mean.


-Aaron

On 4/17/2024 2:46 PM, Matt Erculiani wrote:
I'm no TAC engineer, but the purpose of FEC is to take and correct 
errors when the port is going so fast that errors are 
simply inevitable. Working as Intended.


Easier (read: cheaper) to build in some error correction than make the 
bits wiggle more reliably.


No idea if that rate of increment is alarming or not, but you've not 
yet hit your FEC cliff so you appear to be fine.


-Matt

On Wed, Apr 17, 2024 at 1:40 PM Dominik Dobrowolski 
 wrote:


Open a JTAC case,
That looks like a work for them


Kind Regards,
Dominik

On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our 
core to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I 
see constant FEC errors.  FEC is new to me.  Anyone know why this is occurring? 
 Shown below, is an interface with no traffic, but seeing constant FEC errors.  
This is (2) MX960's cabled directly, no dwdm or anything between them... just a 
fiber patch cable.



{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors0
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   0
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 4302
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   8
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 8796
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 146
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:59 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors15582
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 111
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:19:01 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors20342
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 256
 FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate: 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
   Interface index: 226, SNMP ifIndex: 800
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
   Flow control: Enabled
   Pad to minimum frame size: Disabled
  

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Tom Beecher
Isn't FEC required by the 400G spec?

On Wed, Apr 17, 2024 at 3:45 PM Aaron Gould  wrote:

> i did.  Usually my NANOG and J-NSP email list gets me a quicker solution
> than JTAC.
>
> -Aaron
> On 4/17/2024 2:37 PM, Dominik Dobrowolski wrote:
>
> Open a JTAC case,
> That looks like a work for them
>
>
> Kind Regards,
> Dominik
>
> On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:
>
>> We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core 
>> to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
>> constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
>> Shown below, is an interface with no traffic, but seeing constant FEC 
>> errors.  This is (2) MX960's cabled directly, no dwdm or anything between 
>> them... just a fiber patch cable.
>>
>>
>>
>> {master}
>> me@mx960> clear interfaces statistics et-7/1/4
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
>> ---(refreshed at 2024-04-17 14:18:53 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors0
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate   0
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:18:55 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors 4302
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate   8
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:18:57 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors 8796
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate 146
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:18:59 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors15582
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate 111
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:19:01 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors20342
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate 256
>> FEC Uncorrected Errors Rate 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4
>> Physical interface: et-7/1/4, Enabled, Physical link is Up
>>   Interface index: 226, SNMP ifIndex: 800
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>>   Flow control: Enabled
>>   Pad to minimum frame size: Disabled
>>   Device flags   : Present Running
>>   Interface flags: SNMP-Traps Internal: 0x4000
>>   Link flags : None
>>   CoS queues : 8 supported, 8 maximum usable queues
>>   Schedulers : 0
>>   Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>   Active alarms  : None
>>   Active defects : None
>>   PCS statistics  Seconds
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC Mode  : FEC119
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors   801787

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Matt Erculiani
I'm no TAC engineer, but the purpose of FEC is to detect and correct errors
when the port is going so fast that errors are simply inevitable. Working
as Intended.

Easier (read: cheaper) to build in some error correction than make the bits
wiggle more reliably.

No idea if that rate of increment is alarming or not, but you've not yet
hit your FEC cliff so you appear to be fine.

-Matt

On Wed, Apr 17, 2024 at 1:40 PM Dominik Dobrowolski <
dobrowolski.dom...@gmail.com> wrote:

> Open a JTAC case,
> That looks like a work for them
>
>
> Kind Regards,
> Dominik
>
> On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:
>
>> We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core 
>> to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
>> constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
>> Shown below, is an interface with no traffic, but seeing constant FEC 
>> errors.  This is (2) MX960's cabled directly, no dwdm or anything between 
>> them... just a fiber patch cable.
>>
>>
>>
>> {master}
>> me@mx960> clear interfaces statistics et-7/1/4
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
>> ---(refreshed at 2024-04-17 14:18:53 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors0
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate   0
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:18:55 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors 4302
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate   8
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:18:57 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors 8796
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate 146
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:18:59 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors15582
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate 111
>> FEC Uncorrected Errors Rate 0
>> ---(refreshed at 2024-04-17 14:19:01 CDT)---
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors20342
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate 256
>> FEC Uncorrected Errors Rate 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4
>> Physical interface: et-7/1/4, Enabled, Physical link is Up
>>   Interface index: 226, SNMP ifIndex: 800
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>>   Flow control: Enabled
>>   Pad to minimum frame size: Disabled
>>   Device flags   : Present Running
>>   Interface flags: SNMP-Traps Internal: 0x4000
>>   Link flags : None
>>   CoS queues : 8 supported, 8 maximum usable queues
>>   Schedulers : 0
>>   Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>   Active alarms  : None
>>   Active defects : None
>>   PCS statistics  Seconds
>> 

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Aaron Gould
I did.  Usually my NANOG and J-NSP email lists get me a quicker solution 
than JTAC.


-Aaron

On 4/17/2024 2:37 PM, Dominik Dobrowolski wrote:

Open a JTAC case,
That looks like a work for them


Kind Regards,
Dominik

On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core 
to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
Shown below, is an interface with no traffic, but seeing constant FEC errors.  
This is (2) MX960's cabled directly, no dwdm or anything between them... just a 
fiber patch cable.



{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors0
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   0
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 4302
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   8
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 8796
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 146
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:59 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors15582
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 111
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:19:01 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors20342
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 256
 FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate: 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
   Interface index: 226, SNMP ifIndex: 800
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
   Flow control: Enabled
   Pad to minimum frame size: Disabled
   Device flags   : Present Running
   Interface flags: SNMP-Traps Internal: 0x4000
   Link flags : None
   CoS queues : 8 supported, 8 maximum usable queues
   Schedulers : 0
   Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
   Input rate : 0 bps (0 pps)
   Output rate: 0 bps (0 pps)
   Active alarms  : None
   Active defects : None
   PCS statistics  Seconds
 Bit errors 0
 Errored blocks 0
   Ethernet FEC Mode  : FEC119
   Ethernet FEC statistics  Errors
 FEC Corrected Errors   801787
 FEC Uncorrected Errors 

Re: constant FEC errors juniper mpc10e 400g

2024-04-17 Thread Dominik Dobrowolski
Open a JTAC case;
that looks like work for them.


Kind Regards,
Dominik

On Wed, 17.04.2024 at 21:36, Aaron Gould  wrote:

> We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core 
> to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
> constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
> Shown below, is an interface with no traffic, but seeing constant FEC errors. 
>  This is (2) MX960's cabled directly, no dwdm or anything between them... 
> just a fiber patch cable.
>
>
>
> {master}
> me@mx960> clear interfaces statistics et-7/1/4
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
> ---(refreshed at 2024-04-17 14:18:53 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors0
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate   0
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:18:55 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors 4302
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate   8
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:18:57 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors 8796
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate 146
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:18:59 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors15582
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate 111
> FEC Uncorrected Errors Rate 0
> ---(refreshed at 2024-04-17 14:19:01 CDT)---
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors20342
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate 256
> FEC Uncorrected Errors Rate 0
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>
> {master}
> me@mx960> show interfaces et-7/1/4
> Physical interface: et-7/1/4, Enabled, Physical link is Up
>   Interface index: 226, SNMP ifIndex: 800
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
>   Flow control: Enabled
>   Pad to minimum frame size: Disabled
>   Device flags   : Present Running
>   Interface flags: SNMP-Traps Internal: 0x4000
>   Link flags : None
>   CoS queues : 8 supported, 8 maximum usable queues
>   Schedulers : 0
>   Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>   Active alarms  : None
>   Active defects : None
>   PCS statistics  Seconds
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC Mode  : FEC119
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors   801787
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate2054
> FEC Uncorrected Errors Rate 0
>   Link Degrade :
> Link Monitoring   :  Disable
>   Interface transmit statistics: Disabled
>
>   Logical interface et-7/1/4.0 (Index 420) (SNMP ifIndex 815)
> Flags: Up SNMP-Traps