[NANOG-announce] Last Call for Presentations + Don't Forget to Register for Hackathon! + More

2024-04-18 Thread Nanog News
*LAST CALL for Presentations! *
*Submission Deadline is Monday *

Whether you want to present on stage or live as a remote presenter, we want
to hear from you!

Submit a presentation proposal, including draft slides, no later than
Monday, 22 April.

*Submit Now* 

*Don't Forget to Register for Hackathon!*
Hack Your Way to New Friends, Hands-on Learning, + Fun

The NANOG Hackathons are hands-on and educational at their core, directly
supporting the most critical aspects of our mission. Participants of all
levels are welcome, and registration is free.

Hackathon sponsorships are still available. Email Shawn Winstead at
swinst...@nanog.org for more info.

*Register Now  *

*NANOG Sponsorships Still Available! *

Stand out + get your company seen by your target demographics!

Contact Shawn Winstead at swinst...@nanog.org for more info.

*NANOG Talk of the Week! *
*Off Label Use Of DNS with Fatema Bannat Wala*

Why it's worth your time: with almost 700 views on YouTube, this talk is
worth the watch.

DNS is one of the protocols most widely abused by threat actors, who use it
in unconventional ways to hide within regular traffic. Beyond threat
actors, DNS is also used (or misused) by many other service providers,
vendors, etc., to deliver their intended services.

*Watch Now * 
___
NANOG-announce mailing list
NANOG-announce@nanog.org
https://mailman.nanog.org/mailman/listinfo/nanog-announce


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Tom Beecher
I'm being sloppy with my verbiage; it's just been a long time since
I thought about this in detail, sorry.

The MAC layer hands bits to the Media Independent Interface, which connects
the MAC to the PHY. The PHY converts the digital 1/0 into the form required
by the media transmission type; the 'what goes over the wire' L1 stuff. The
method of encoding will always add SOME number of bits as overhead. Ex,
64b/66b means that for every 64 bits of data to transmit, 2 bits are added,
so 66 actual bits are transmitted. This encoding overhead is what I meant
when I said 'ethernet control frame juju'. This starts getting into the
weeds on symbol/baud rates and stuff as well, which I dont want to do now
cause I'm even rustier there.
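The overhead arithmetic above is easy to sanity-check; a quick sketch, with the 8b/10b figure included for comparison:

```python
# Line-coding overhead: for every 64 payload bits, 64b/66b puts 66 bits on
# the wire, so the tax is 2/64.
def encoding_overhead(payload_bits: int, coded_bits: int) -> float:
    """Overhead added by a line code, as a fraction of the payload."""
    return (coded_bits - payload_bits) / payload_bits

print(f"64b/66b: {encoding_overhead(64, 66):.4%}")  # 3.1250%
print(f"8b/10b:  {encoding_overhead(8, 10):.4%}")   # 25.0000%
```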

When FEC is enabled, the number of overhead bits added to the transmission
increases. For 400G-FR4, for example, you start with 256b/257b, which is
doubled to 512b/514b for ($reason I cannot remember), then RS-FEC(544,514)
is applied, adding 30 parity symbols; the code works on 10-bit symbols, so
that is 300 more bits. Following the example, this means 544 symbols are
transmitted for every 514 symbols of payload data. So, more overhead. Those
parity symbols can correct up to 15 corrupted symbols per codeword.
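Working through that arithmetic (a quick sketch of the standard RS(544,514) "KP4" numbers; the 425 Gb/s result matches the nominal 400GBASE-R line rate):

```python
# KP4 FEC: Reed-Solomon RS(544,514) over 10-bit symbols.
n_sym, k_sym, sym_bits = 544, 514, 10

parity_symbols = n_sym - k_sym          # 30 parity symbols per codeword
correctable = parity_symbols // 2       # t = 15 correctable symbol errors
fec_overhead = parity_symbols / k_sym   # ~5.84% on top of the 257b stream

# Stack the overheads for 400GBASE-R: 256b/257b transcoding, then the FEC.
line_rate = 400e9 * (257 / 256) * (n_sym / k_sym)

print(correctable)                    # 15
print(f"{fec_overhead:.2%}")          # 5.84%
print(f"{line_rate / 1e9:.2f} Gb/s")  # 425.00 Gb/s
```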

All of these overhead bits are added in the PHY on the way out, and removed
on the way in. So you'll never see them on a packet capture unless you're
using something that's actually grabbing the bits off the wire.

( Pretty sure this is right, anyone please correct me if I munged any of it
up.)


On Thu, Apr 18, 2024 at 2:45 PM Aaron Gould  wrote:

> Thanks.  What "all the ethernet control frame juju" might you be referring
> to?  I don't recall Ethernet, in and of itself, just sending stuff back and
> forth.  Does anyone know if this FEC stuff I see occurring is actually
> contained in Ethernet Frames?  If so, please send a link to show the
> ethernet frame structure as it pertains to this 400g fec stuff.  If so, I'd
> really like to know the header format, etc.
>
> -Aaron
> On 4/18/2024 1:17 PM, Tom Beecher wrote:
>
> FEC is occurring at the PHY , below the PCS.
>
> Even if you're not sending any traffic, all the ethernet control frame
> juju is still going back and forth, which FEC may have to correct.
>
> I *think* (but not 100% sure) that for anything that by spec requires FEC,
> there is a default RS-FEC type that will be used, which *may* be able to be
> changed by the device. Could be fixed though, I honestly cannot remember.
>
> On Thu, Apr 18, 2024 at 1:35 PM Aaron Gould  wrote:
>
>> Not to belabor this, but so interesting... I need a FEC-for-Dummies or 
>> FEC-for-IP/Ethernet-Engineers...
>>
>> Shown below, my 400g interface with NO config at all... Interface has no 
>> traffic at all, no packets at all  BUT, lots of FEC hits.  Interesting 
>> this FEC-thing.  I'd love to have a fiber splitter and see if wireshark 
>> could read it and show me what FEC looks like...but something tells me i 
>> would need a 400g sniffer to read it, lol
>>
>> It's like FEC (fec119 in this case) is this automatic thing running between 
>> interfaces (hardware i guess), with no protocols and nothing needed at all 
>> in order to function.
>>
>> -Aaron
>>
>>
>> {master}
>> me@mx960> show configuration interfaces et-7/1/4 | display set
>>
>> {master}
>> me@mx960>
>>
>> {master}
>> me@mx960> clear interfaces statistics et-7/1/4
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep packet
>> Input packets : 0
>> Output packets: 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep rror
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors28209
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate2347
>> FEC Uncorrected Errors Rate 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep packet
>> Input packets : 0
>> Output packets: 0
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>>   Input rate : 0 bps (0 pps)
>>   Output rate: 0 bps (0 pps)
>>
>> {master}
>> me@mx960> show interfaces et-7/1/4 | grep rror
>>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
>> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
>> filtering: Disabled,
>> Bit errors 0
>> Errored blocks 0
>>   Ethernet FEC statistics  Errors
>> FEC Corrected Errors45153
>> FEC Uncorrected Errors  0
>> FEC Corrected Errors Rate  29
>> FEC Uncorrected 

Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Charles Polisher




On 4/18/24 11:45, Aaron Gould wrote:


Thanks.  What "all the ethernet control frame juju" might you be 
referring to?  I don't recall Ethernet, in and of itself, just sending 
stuff back and forth.  Does anyone know if this FEC stuff I see 
occurring is actually contained in Ethernet Frames?  If so, please 
send a link to show the ethernet frame structure as it pertains to 
this 400g fec stuff.  If so, I'd really like to know the header 
format, etc.


-Aaron



IEEE Std 802.3™‐2022 Standard for Ethernet
(§65.2.3.2 FEC frame format p.2943)
https://ieeexplore.ieee.org/browse/standards/get-program/page/series?id=68

Also helpful, generally:
ITU-T Recommendation G.975 (10/2000), Forward Error Correction for 
Submarine Systems

https://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-G.975-200010-I!!PDF-E&type=items



Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread JÁKÓ András
> What "all the ethernet control frame juju" might you be referring
> to?  I don't recall Ethernet, in and of itself, just sending stuff back and
> forth.

I did not read the 100G Ethernet specs, but as far as I remember
FastEthernet (e.g. 100BASE-FX) uses 4B/5B coding on the line, borrowed
from FDDI. Octets of Ethernet frames are encoded to these 5-bit
codewords, and there are valid codewords for other stuff, like idle
symbols transmitted continuously between frames.

Gigabit Ethernet (1000BASE-X) uses 8B/10B code on the line (from Fibre
Channel). In GE there are also special (not frame octet) PCS codewords
used for auto-negotiation, frame bursting, etc.

So I guess these are not frames that you see, but codewords representing
other data, outside Ethernet frames.
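The line codes mentioned above are also why the symbol rate on the wire is higher than the data rate; a quick sketch of the arithmetic:

```python
# Baud rate implied by a block line code: payload rate scaled by the ratio
# of coded bits to payload bits.
def baud_rate(payload_bps: float, payload_bits: int, coded_bits: int) -> float:
    return payload_bps * coded_bits / payload_bits

# 100BASE-FX: 100 Mb/s of data, 4B/5B -> 125 MBd on the fibre. Only 16 of
# the 32 five-bit codewords carry data; spares encode idle and control.
print(baud_rate(100e6, 4, 5) / 1e6)  # 125.0
# 1000BASE-X: 1 Gb/s of data, 8B/10B -> 1.25 GBd.
print(baud_rate(1e9, 8, 10) / 1e9)   # 1.25
```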

András


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Aaron Gould
Thanks.  What "all the ethernet control frame juju" might you be 
referring to?  I don't recall Ethernet, in and of itself, just sending 
stuff back and forth.  Does anyone know if this FEC stuff I see 
occurring is actually contained in Ethernet frames?  If so, please send 
a link showing the Ethernet frame structure as it pertains to this 400G 
FEC stuff; I'd really like to know the header format, etc.


-Aaron

On 4/18/2024 1:17 PM, Tom Beecher wrote:

FEC is occurring at the PHY , below the PCS.

Even if you're not sending any traffic, all the ethernet control frame 
juju is still going back and forth, which FEC may have to correct.


I *think* (but not 100% sure) that for anything that by spec requires 
FEC, there is a default RS-FEC type that will be used, which *may* be 
able to be changed by the device. Could be fixed though, I honestly 
cannot remember.


On Thu, Apr 18, 2024 at 1:35 PM Aaron Gould  wrote:

Not to belabor this, but so interesting... I need a FEC-for-Dummies or 
FEC-for-IP/Ethernet-Engineers...

Shown below, my 400g interface with NO config at all... Interface has no 
traffic at all, no packets at all  BUT, lots of FEC hits.  Interesting this 
FEC-thing.  I'd love to have a fiber splitter and see if wireshark could read 
it and show me what FEC looks like...but something tells me i would need a 400g 
sniffer to read it, lol

It's like FEC (fec119 in this case) is this automatic thing running between 
interfaces (hardware i guess), with no protocols and nothing needed at all in 
order to function.

-Aaron



On 4/18/2024 7:13 AM, Mark Tinka wrote:



On 4/17/24 23:24, Aaron Gould wrote:


Well JTAC just said that it seems ok, and that 400g is going to
show 4x more than 100g "This is due to having to synchronize
much more to support higher data."



We've seen the same between Juniper and Arista boxes in the same
rack running at 100G, despite cleaning fibres, swapping optics,
moving ports, moving line cards, e.t.c. TAC said it's a
non-issue, and to be expected, and shared the same KB's.

It's a bit disconcerting when you plot the data on your NMS, but
it's not material.

Mark.



Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Tom Beecher
FEC is occurring at the PHY , below the PCS.

Even if you're not sending any traffic, all the ethernet control frame juju
is still going back and forth, which FEC may have to correct.

I *think* (but not 100% sure) that for anything that by spec requires FEC,
there is a default RS-FEC type that will be used, which *may* be able to be
changed by the device. Could be fixed though, I honestly cannot remember.

On Thu, Apr 18, 2024 at 1:35 PM Aaron Gould  wrote:

> Not to belabor this, but so interesting... I need a FEC-for-Dummies or 
> FEC-for-IP/Ethernet-Engineers...
>
> Shown below, my 400g interface with NO config at all... Interface has no 
> traffic at all, no packets at all  BUT, lots of FEC hits.  Interesting 
> this FEC-thing.  I'd love to have a fiber splitter and see if wireshark could 
> read it and show me what FEC looks like...but something tells me i would need 
> a 400g sniffer to read it, lol
>
> It's like FEC (fec119 in this case) is this automatic thing running between 
> interfaces (hardware i guess), with no protocols and nothing needed at all in 
> order to function.
>
> -Aaron
>
>
>
>
> On 4/18/2024 7:13 AM, Mark Tinka wrote:
>
>
>
> On 4/17/24 23:24, Aaron Gould wrote:
>
> Well JTAC just said that it seems ok, and that 400g is going to show 4x
> more than 100g "This is due to having to synchronize much more to support
> higher data."
>
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
> It's a bit disconcerting when you plot the data on your NMS, but it's not
> material.
>
> Mark.
>
> --
> -Aaron
>
>


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Aaron Gould

Not to belabor this, but it's so interesting... I need a FEC-for-Dummies or 
FEC-for-IP/Ethernet-Engineers...

Shown below is my 400G interface with NO config at all... The interface has no 
traffic at all, no packets at all, BUT lots of FEC hits. Interesting, this 
FEC thing. I'd love to have a fiber splitter and see if Wireshark could read 
it and show me what FEC looks like... but something tells me I would need a 
400G sniffer to read it, lol.

It's like FEC (fec119 in this case) is this automatic thing running between 
interfaces (hardware, I guess), with no protocols and nothing needed at all in 
order to function.

-Aaron

{master}
me@mx960> show configuration interfaces et-7/1/4 | display set

{master}
me@mx960>

{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
    Bit errors 0
    Errored blocks 0
  Ethernet FEC statistics  Errors
    FEC Corrected Errors    28209
    FEC Uncorrected Errors  0
    FEC Corrected Errors Rate    2347
    FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
    Bit errors 0
    Errored blocks 0
  Ethernet FEC statistics  Errors
    FEC Corrected Errors    45153
    FEC Uncorrected Errors  0
    FEC Corrected Errors Rate  29
    FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep packet
    Input packets : 0
    Output packets: 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
  Input rate : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4 | grep rror
  Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU Error: 
None, Loop Detect PDU Error: None, Loopback: Disabled, Source filtering: 
Disabled,
    Bit errors 0
    Errored blocks 0
  Ethernet FEC statistics  Errors
    FEC Corrected Errors    57339
    FEC Uncorrected Errors  0
    FEC Corrected Errors Rate    2378
    FEC Uncorrected Errors Rate 0

{master}
me@mx960>


On 4/18/2024 7:13 AM, Mark Tinka wrote:



On 4/17/24 23:24, Aaron Gould wrote:

Well JTAC just said that it seems ok, and that 400g is going to show 
4x more than 100g "This is due to having to synchronize much more to 
support higher data."




We've seen the same between Juniper and Arista boxes in the same rack 
running at 100G, despite cleaning fibres, swapping optics, moving 
ports, moving line cards, e.t.c. TAC said it's a non-issue, and to be 
expected, and shared the same KB's.


It's a bit disconcerting when you plot the data on your NMS, but it's 
not material.


Mark.


--
-Aaron


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Mark Tinka




On 4/18/24 15:45, Tom Beecher wrote:



 Just for extra clarity on those KBs: this probably has nothing to do with 
vendor interop, as implied in at least one of those.


Yes, correct.

Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Tom Beecher
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
>
 Just for extra clarity on those KBs: this probably has nothing to do with
vendor interop, as implied in at least one of those.

You will see some volume of corrected FEC errors on 400G FR4 even with the
same router hardware and transceiver vendor on both ends and a 3m patch.
Short of duct-taping the transceivers together, you are not going to get
much more optimal than that.

As far as I can suss out from my reading and what Smart People have told
me, certain combinations of modulation and wavelength are just more
susceptible to transmission noise, so for those FEC is required by the
standard. PAM4 modulation does seem to be a common thread, but there are
some PAM2/NRZ interfaces that FEC is also required for (100GBASE-CWDM4, for
example).
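For a rough sense of scale, the corrected-error counters in this thread can be turned into a pre-FEC bit-error-rate estimate. This is a sketch only: it assumes the Junos "FEC Corrected Errors Rate" counter is corrected errors per second and, simplistically, one bit error per corrected event; 425 Gb/s is the nominal 400GBASE-R line rate.

```python
# Rough pre-FEC BER: corrected errors per second divided by line bits per
# second (assumes one bit error per corrected event - an approximation).
def pre_fec_ber(corrected_per_second: float,
                line_rate_bps: float = 425e9) -> float:
    return corrected_per_second / line_rate_bps

# ~2347 corrected/s, as in the captures in this thread:
print(f"{pre_fec_ber(2347):.2e}")  # 5.52e-09
```

At that order of magnitude the FEC corrects everything comfortably; the counter to actually watch is the uncorrected one.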



On Thu, Apr 18, 2024 at 8:15 AM Mark Tinka  wrote:

>
>
> On 4/17/24 23:24, Aaron Gould wrote:
>
> > Well JTAC just said that it seems ok, and that 400g is going to show
> > 4x more than 100g "This is due to having to synchronize much more to
> > support higher data."
> >
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
> It's a bit disconcerting when you plot the data on your NMS, but it's
> not material.
>
> Mark.
>


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Thomas Scott
Standard deviation is now your friend. Learn to alert when FEC corrected
errors fall outside a standard-deviation band, and on CRC errors, although
the latter should already be alerting.
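A minimal sketch of that kind of alert (the function name, sample values, and 3-sigma threshold are illustrative, not from any particular NMS):

```python
import statistics

# Flag a sample that falls outside k standard deviations of a recent
# baseline of FEC corrected-error-rate readings.
def fec_alert(history: list[float], current: float, k: float = 3.0) -> bool:
    mean = statistics.fmean(history)
    sd = statistics.pstdev(history)
    return abs(current - mean) > k * sd

baseline = [2347, 2378, 2290, 2410, 2355]  # corrected errors/sec samples
print(fec_alert(baseline, 2360))  # False: normal variation for this link
print(fec_alert(baseline, 9000))  # True: worth investigating
```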

On Thu, Apr 18, 2024 at 8:15 AM Mark Tinka  wrote:

> On 4/17/24 23:24, Aaron Gould wrote:
>
> > Well JTAC just said that it seems ok, and that 400g is going to show
> > 4x more than 100g "This is due to having to synchronize much more to
> > support higher data."
> >
>
> We've seen the same between Juniper and Arista boxes in the same rack
> running at 100G, despite cleaning fibres, swapping optics, moving ports,
> moving line cards, e.t.c. TAC said it's a non-issue, and to be expected,
> and shared the same KB's.
>
> It's a bit disconcerting when you plot the data on your NMS, but it's
> not material.
>
> Mark.
>
>


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Joel Busch via NANOG
In my reading the 400GBASE-R Physical Coding Sublayer (PCS) always 
includes the FEC. This is defined in clause 119 of IEEE Std 802.3-2022, 
and most easily seen in "Figure 119–2—Functional block diagram" if you 
don't want to get buried in the prose. Nothing there seems to imply that 
the FEC is optional.


I'd be happy to be corrected, though. It may well be that there is a 
method to reading these tomes that I have not discovered yet. It is the 
first time I have dived deep into any IEEE standard.


Best regards
Joel

On 17.04.2024 21:47, Tom Beecher wrote:

Isn't FEC required by the 400G spec?

On Wed, Apr 17, 2024 at 3:45 PM Aaron Gould wrote:



i did.  Usually my NANOG and J-NSP email list gets me a quicker
solution than JTAC.

-Aaron

On 4/17/2024 2:37 PM, Dominik Dobrowolski wrote:

Open a JTAC case.
That looks like work for them.


Kind Regards,
Dominik

On Wed, 17.04.2024 at 21:36 Aaron Gould  wrote:

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our 
core to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I 
see constant FEC errors.  FEC is new to me.  Anyone know why this is occurring? 
 Shown below, is an interface with no traffic, but seeing constant FEC errors.  
This is (2) MX960's cabled directly, no dwdm or anything between them... just a 
fiber patch cable.



{master}
me@mx960> clear interfaces statistics et-7/1/4

{master}
me@mx960> show interfaces et-7/1/4 | grep rror | refresh 2
---(refreshed at 2024-04-17 14:18:53 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors0
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   0
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:55 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 4302
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate   8
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:57 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors 8796
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 146
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:18:59 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors15582
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 111
 FEC Uncorrected Errors Rate 0
---(refreshed at 2024-04-17 14:19:01 CDT)---
   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, 
BPDU Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
filtering: Disabled,
 Bit errors 0
 Errored blocks 0
   Ethernet FEC statistics  Errors
 FEC Corrected Errors20342
 FEC Uncorrected Errors  0
 FEC Corrected Errors Rate 256
 FEC Uncorrected Errors Rate 0

{master}
me@mx960> show interfaces et-7/1/4 | grep "put rate"
   Input rate : 0 bps (0 pps)
   Output rate: 0 bps (0 pps)

{master}
me@mx960> show interfaces et-7/1/4
Physical interface: et-7/1/4, Enabled, Physical link is Up
   Interface index: 226, SNMP ifIndex: 800
 

Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Mark Tinka




On 4/17/24 23:24, Aaron Gould wrote:

Well JTAC just said that it seems ok, and that 400g is going to show 
4x more than 100g "This is due to having to synchronize much more to 
support higher data."




We've seen the same between Juniper and Arista boxes in the same rack 
running at 100G, despite cleaning fibres, swapping optics, moving ports, 
moving line cards, e.t.c. TAC said it's a non-issue, and to be expected, 
and shared the same KB's.


It's a bit disconcerting when you plot the data on your NMS, but it's 
not material.


Mark.


Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Joe Antkowiak
Corrected FEC errors are pretty normal for 400G FR4

On Wednesday, April 17th, 2024 at 3:36 PM, Aaron Gould  wrote:

> We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core 
> to 400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
> constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
> Shown below, is an interface with no traffic, but seeing constant FEC errors. 
>  This is (2) MX960's cabled directly, no dwdm or anything between them... 
> just a fiber patch cable.
>
> {master}
> me@mx960> clear interfaces statistics et-7/1/4
>
> {master}
> me@mx960> show interfaces et-7/1/4 | grep "put rate"
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>
> {master}
> me@mx960> show interfaces et-7/1/4
> Physical interface: et-7/1/4, Enabled, Physical link is Up
>   Interface index: 226, SNMP ifIndex: 800
>   Link-level type: Ethernet, MTU: 1514, MRU: 1522, Speed: 400Gbps, BPDU 
> Error: None, Loop Detect PDU Error: None, Loopback: Disabled, Source 
> filtering: Disabled,
>   Flow control: Enabled
>   Pad to minimum frame size: Disabled
>   Device flags   : Present Running
>   Interface flags: SNMP-Traps Internal: 0x4000
>   Link flags : None
>   CoS queues : 8 supported, 8 maximum usable queues
>   Schedulers : 0
>   Last flapped   : 2024-04-17 13:55:28 CDT (00:36:19 ago)
>   Input rate : 0 bps (0 pps)
>   Output rate: 0 bps (0 pps)
>   Active alarms  : None
>   Active defects : None
>   PCS statistics  Seconds
> Bit errors 0
> Errored blocks 0
>   Ethernet FEC Mode  : FEC119
>   Ethernet FEC statistics  Errors
> FEC Corrected Errors   801787
> FEC Uncorrected Errors  0
> FEC Corrected Errors Rate2054
> FEC Uncorrected Errors Rate 0
>   Link Degrade :
> Link Monitoring   :  Disable
>   Interface transmit statistics: Disabled
>
>   Logical interface et-7/1/4.0 (Index 420) (SNMP ifIndex 815)
> Flags: Up SNMP-Traps 0x4004000 Encapsulation: 

Re: constant FEC errors juniper mpc10e 400g

2024-04-18 Thread Schylar Utley
FEC on 400G is required and expected. As long as it is “corrected”, you have 
nothing to worry about. We had the same realisation recently when upgrading to 
400G.

-Schylar

From: NANOG  on behalf of Aaron 
Gould 
Date: Wednesday, April 17, 2024 at 2:38 PM
To: nanog@nanog.org 
Subject: constant FEC errors juniper mpc10e 400g

We recently added MPC10E-15C-MRATE cards to our MX960's to upgrade our core to 
400g.  During initial testing of the 400g interface (400GBASE-FR4), I see 
constant FEC errors.  FEC is new to me.  Anyone know why this is occurring?  
Shown below, is an interface with no traffic, but seeing constant FEC errors.  
This is (2) MX960's cabled directly, no dwdm or anything between them... just a 
fiber patch cable.






