Re: [j-nsp] juniper.net down?

2022-10-18 Thread Aaron via juniper-nsp
Thanks, looks good now.

 

https://www.isitdownrightnow.com/juniper.net.html

shows it was down 8 minutes ago for everyone

 

-Aaron

 

 

From: Liam Farr  
Sent: Tuesday, October 18, 2022 1:21 PM
To: aar...@gvtc.com
Cc: juniper-nsp 
Subject: Re: [j-nsp] juniper.net down?

 

Loading fine from NZ, as is https://iam-signin.juniper.net & 
https://webdownload.juniper.net/

 

Being served off Akamai, so it may be a localised Akamai issue near you.

 

 

www.juniper.net                       23.43.144.179
assets.adobedtm.com                   131.203.7.165   (data from cached requests only)
consent.trustarc.com                  54.192.177.98
d.la3-c2-ia2.salesforceliveagent.com  13.110.34.160
d.la3-c2-ph2.salesforceliveagent.com  13.110.37.32
juniper.secure.force.com              13.110.83.142
service.force.com                     101.53.168.136  (data from cached requests only)
www.youtube.com                       172.217.24.46

On Wed, 19 Oct 2022 at 07:13, Aaron via juniper-nsp <juniper-nsp@puck.nether.net> wrote:

juniper.net down?

Aaron

aar...@gvtc.com



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




 

-- 
Kind Regards

Liam Farr
Maxum Data
+64-9-950-5302



[j-nsp] juniper.net down?

2022-10-18 Thread Aaron via juniper-nsp
juniper.net down?

Aaron

aar...@gvtc.com

 



[j-nsp] Juniper CoS - Classifiers specifically

2022-03-15 Thread Aaron via juniper-nsp
Just looking to bounce this off anyone in the know.

 

As I learn more about Junos CoS, it appears to me that a Juniper device acts by
default as a Behavior Aggregate (BA) classifier on each interface that has an IP
address enabled.  I say this because I have IPs on three interfaces, and I see
Junos assign a default classifier to each of those logical units.

 

I say BA because, as I understand it, a BA classifier is one assigned under
class-of-service, like the ones I see here, as opposed to the other type, an MFC
(multifield classifier), which uses a firewall filter.

 

I'm wondering if the BA classifier stops working once an MFC is applied.  It
certainly seems to in testing.  I feel like I've seen a diagram or document at
some point stating that MFC comes before BA in the CoS processing chain, but I'm
not sure.  If anyone has that link/doc, please send it; I'd like to know for
sure.
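For context, this is the kind of MFC I'm testing with: a firewall filter that
sets forwarding class and loss priority, applied as an input filter on the
logical unit.  Filter/term names and match conditions below are hypothetical,
just a sketch:

```
set firewall family inet filter MFC-TEST term VOIP from dscp ef
set firewall family inet filter MFC-TEST term VOIP then forwarding-class expedited-forwarding
set firewall family inet filter MFC-TEST term VOIP then loss-priority low
set firewall family inet filter MFC-TEST term VOIP then accept
set firewall family inet filter MFC-TEST term REST then accept
set interfaces ge-0/0/0 unit 0 family inet filter input MFC-TEST
```

In my testing, traffic matched by a filter term like this takes the filter's
forwarding class/loss priority rather than whatever the BA classifier would have
assigned.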

 

Oh, by the way, where in the world is all this default CoS stuff derived from?
I'd like to think it's in a file somewhere that I can see in the shell, but
maybe not; maybe it's actually compiled into the Junos operating system itself.
Or is there a way to run "show configuration" with a special option that shows
automatic/default stuff like all this CoS info?
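(The closest things I've found are operational-mode show commands, plus the
hidden junos-defaults group via "display inheritance defaults"; whether the CoS
defaults actually surface there may vary by platform/version, so treat this as a
sketch:)

```
root@srx-1> show class-of-service classifier
root@srx-1> show configuration class-of-service | display inheritance defaults
```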

 

The available default classifiers.

 

root@srx-1> show class-of-service classifier | grep classifier

Classifier: dscp-default, Code point type: dscp, Index: 7

Classifier: dscp-ipv6-default, Code point type: dscp-ipv6, Index: 8

Classifier: dscp-ipv6-compatibility, Code point type: dscp-ipv6, Index: 9

Classifier: exp-default, Code point type: exp, Index: 10

Classifier: ieee8021p-default, Code point type: ieee-802.1, Index: 11

Classifier: ipprec-default, Code point type: inet-precedence, Index: 12

Classifier: ipprec-compatibility, Code point type: inet-precedence, Index: 13

Classifier: ieee8021ad-default, Code point type: ieee-802.1ad, Index: 41

 

 

The ipprec-compatibility classifier I find assigned to enabled interfaces.

 

root@srx-1> show class-of-service interface | grep
"object|classifier|logical"

  Logical interface: ge-0/0/0.0, Index: 74

Object      Name                   Type     Index
Classifier  ipprec-compatibility   ip       13

 

  Logical interface: ge-0/0/1.0, Index: 75

Object      Name                   Type     Index
Classifier  ipprec-compatibility   ip       13

 

  Logical interface: irb.0, Index: 73

Object      Name                   Type     Index
Classifier  ipprec-compatibility   ip       13

 

 

Details of the classifier I see assigned to my enabled interfaces.

 

root@srx-1> show class-of-service classifier name ipprec-compatibility

Classifier: ipprec-compatibility, Code point type: inet-precedence, Index: 13

  Code point   Forwarding class    Loss priority
  000          best-effort         low
  001          best-effort         high
  010          best-effort         low
  011          best-effort         high
  100          best-effort         low
  101          best-effort         high
  110          network-control     low
  111          network-control     high

 

 

(no user-defined CoS config is present)

root@srx-1> show configuration class-of-service | display set

 

root@srx-1>
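Even with no user CoS config, it looks like the implicit default can be pinned
(or changed) explicitly per logical unit.  A hedged sketch -- the interface name
is a placeholder, and "default" here should reference the built-in dscp-default
classifier:

```
set class-of-service interfaces ge-0/0/0 unit 0 classifiers dscp default
```

After a commit, "show class-of-service interface" should then list the DSCP
classifier on that unit instead of ipprec-compatibility.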

 

 

 

 

Aaron

aar...@gvtc.com

 



[j-nsp] vQFX cpu cores and ram

2021-06-22 Thread aaron--- via juniper-nsp
--- Begin Message ---
Hi All,

If you have experience running vQFXs, what are you setting for CPU cores and 
RAM?

I'm using eve-ng to get around group_fwd_mask issues[1] so that I can have 
LACP and LLDP working right out of the box.

The defaults on the GitHub page do not seem to be enough: with those values I 
can't even get LACP bundles to come up, but once I bump the resources, the 
bundles come up.

I've asked my SE, but I would like to know what the community has set in their 
environments.

Thanks,
Aaron
1. 
https://interestingtraffic.nl/2017/11/21/an-oddly-specific-post-about-group_fwd_mask/
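For anyone curious about the underlying group_fwd_mask issue on plain Linux
bridges (the thing eve-ng works around): the mask is a bitmask over the last
byte of the reserved 01-80-C2-00-00-XX multicast range.  A sketch, with the
bridge name "br0" as a placeholder:

```shell
# LLDP uses destination 01-80-C2-00-00-0E, i.e. bit 14 of the mask:
mask=$(printf '0x%x' $((1 << 14)))
echo "$mask"    # 0x4000

# To let LLDP through a bridge (requires root; "br0" is a placeholder):
#   echo 0x4000 > /sys/class/net/br0/bridge/group_fwd_mask
# Note: the LACP address (01-80-C2-00-00-02) is kernel-restricted and cannot
# be enabled via this mask, which is why workarounds like eve-ng are needed.
```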
--- End Message ---


Re: [j-nsp] MX routers and DAC cables?

2020-06-12 Thread aaron--- via juniper-nsp
--- Begin Message ---
Seconding Eric's point: depending on the Junos version and the transceiver, FEC 
will be auto-configured and can be turned off; but with some transceivers, 
CWDM4 for example, FEC must be enabled on both sides to get link-up.

Disabling auto-negotiation on one or both sides can sometimes help as well.

Just make sure to wait a minute after committing, as the changes can take a 
while to take effect.

-Aaron

Jun 12, 2020, 13:55 by e...@telic.us:

> That's what I was going to chime in on.  Different software versions have
> shipped with different defaults. 
>
> ekrichbaum@atl-bdr1> show interfaces et-0/0/1 | grep FEC 
>  Active defects : None
>  Ethernet FEC Mode  :   NONE
>
> eric@cht-bdr2> show interfaces et-0/0/1 | grep FEC 
>  Active defects : None
>  Ethernet FEC Mode  :  FEC91
>
> These are 204s with a difference in default from 17.4 to 18.2 somewhere.
> Manually setting FEC on both ends seems to correct and bring up the links.
>
>
> -Original Message-
> From: juniper-nsp  On Behalf Of Tobias
> Heister
> Sent: Friday, June 12, 2020 2:03 PM
> To: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] MX routers and DAC cables?
>
> Hi,
>
> On 12.06.2020 20:39, Chris Adams wrote:
>
>> Is anybody using DAC cables on MX routers?  We have a customer with an
>> MX10003 connected to EX4600 switches with 40G DAC cables (Juniper 
>> parts, not third-party).  Upon upgrading the router JUNOS to 
>> 18.2R3-S3, none of the interfaces with a DAC cable would come up on the
>>
> router end.
>
>>
>> JTAC's response was that no DAC cables are supported on any MX routers.
>>
>> That seems a little odd to me... I thought DAC cables are a part of 
>> the various specs, so saying they're not supported is saying those 
>> aren't actually Ethernet ports to me.
>>
>
> DAC and AOC are transceivers, and officially only a specific set of
> transceivers are supported per platform.
>
> For MX10003 you can check here: 
> https://apps.juniper.net/hct/product/#prd=MX10003
>
> There are 40GE AOC supported for that box, but not 40GE DAC. For 100GE DAC
> are actually supported in later Junos version.
>
> That being said, typically DAC worked in MX for 10G and even 40G on most
> boxes, but on MX10003 we had a lot of problems with 40G DACs and eventually
> replaced most/all of them with optical transceivers.
>
> Even on 100GE you might need to set the FEC config depending on what and
> where you connect the other DAC end.
>
> While 10G mostly worked everywhere we had a fair share of trouble on 40 and
> 100GE on various vendors and platforms.
>
> --
> regards
> Tobias
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
>

--- End Message ---