A quick word of caution: if you use third-party optics, be very careful moving
to Junos 17. We have found a bunch of ours unusable in Junos 17, and while our
account team has been fantastic in trying to find out what’s changed in the
code, the official response has been “non-Juniper optic, go away”.
Can’t remember the exact numbers, but the non-RB card is targeted at MPLS core
applications where it’s just high-density label switching. It won’t take a full
routing table and has reduced L3VPN scaling numbers. Ask your AM/SE for the
specifics.
Sent from my iPhone
> On 30 Apr 2018, at 10:34 am, Brijesh Pa
Unless it’s changed in newer releases, there is no equivalent, which is
annoying. I believe you can drop to the FPC vty and extract the information
card by card, similar to the link you shared, but it’s not exactly a workable
solution, nor is it “officially supported” by Juniper.
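For reference, the card-by-card extraction looks something like this (a rough
sketch; the PFE vty is undocumented, the exact command names vary by platform
and release, and the FPC number and prefix here are placeholders):

    user@router> start shell pfe network fpc0
    NPC0(router vty)# show route ip prefix 192.0.2.0/24

That shows the forwarding state as the Packet Forwarding Engine on that one
card sees it, which you then have to repeat per FPC.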
The lack of this comman
e code is
> updated, and another is not, I hope I'm wrong.
>
> But if I'm right, then the only way to do this, is actually ask the
> microcode 'hey i have this packet, do a lookup for it', or like in
> CAT7600/ELAM, get lookup results for real traffic.
>
>
cef (vrf xyz) .
>
> Isn't Junos equivalent for showing FIB/PFE "show route forwarding" ?
> This usually is good for me too
>
> Aaron
>
>> On May 14, 2018, at 6:35 PM, Nikolas Geyer wrote:
>>
>> The platforms I have used i
:22 pm, Aaron Gould <aar...@gvtc.com> wrote:
Cisco and IOS XR have been good for me... show cef (vrf xyz).
Isn’t the Junos equivalent for showing the FIB/PFE “show route forwarding”?
This usually is good for me too.
Aaron
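For concreteness, the Junos commands being compared above look roughly like
this (a sketch; the destination address and VRF name are placeholders):

    user@router> show route forwarding-table destination 192.0.2.1
    user@router> show route forwarding-table vpn xyz

Keep in mind this shows the kernel’s copy of the FIB as pushed to the PFEs,
not what a given PFE actually programmed, which is exactly the gap being
discussed upthread.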
On May 14, 2018, at 6:35 PM, Nikolas Geyer
m
I have to play devil’s advocate around “Right, this inconsistency between
configured and operational state in my opinion is THE biggest problem of XR”.
I have had this problem occur multiple times in Junos, as recently as Junos
17, where what was being advertised did not reflect configured policy. Wo
Data Center use cases, except some PTX1 products... albeit the ones that
are dual personality with their QFX alter ego :-)
> On Oct 13, 2020, at 3:30 PM, Richard McGovern wrote:
>
> I am thinking (guessing) you will not see EVO on MX for some time. EVO is
> mainly t
What version did you upgrade from? Check out
https://lkhill.com/juniper-qfx10k-ipfix/, as some things changed in Junos 17
that broke IPFIX.
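For anyone comparing against a working box, a minimal inline IPFIX (inline
jflow) setup looks roughly like this (a sketch from memory; the instance,
template, interface, and collector/source addresses are all placeholders, so
check the docs for your release):

    set chassis fpc 0 sampling-instance SAMPLE-1
    set services flow-monitoring version-ipfix template IPV4-TEMPLATE
    set forwarding-options sampling instance SAMPLE-1 input rate 1000
    set forwarding-options sampling instance SAMPLE-1 family inet output flow-server 198.51.100.10 port 2055
    set forwarding-options sampling instance SAMPLE-1 family inet output flow-server 198.51.100.10 version-ipfix template IPV4-TEMPLATE
    set forwarding-options sampling instance SAMPLE-1 family inet output inline-jflow source-address 198.51.100.1
    set interfaces xe-0/0/0 unit 0 family inet sampling input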
On Dec 1, 2020, at 9:51 PM, Brendan Mannella wrote:
Curious if anyone else has completely broken Inline flow
vRR is basically just the VCP component of vMX without the vFP, which is why
it’s limited to Linux-bridged “management” interfaces.
There is also nested vMX, which runs the VCP as a nested virtual machine
within the vFP. Not sure if it reduces requirements, and IIRC it only works on
KVM.
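Since the vRR’s interfaces ride on Linux bridges, wiring up the management
side is just standard bridge plumbing on the host, e.g. (a sketch; the bridge
and NIC names are placeholders):

    ip link add name br-mgmt type bridge
    ip link set br-mgmt up
    ip link set eth1 master br-mgmt

You then attach the vRR’s fxp0 vNIC to br-mgmt in whatever hypervisor tooling
you’re using.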
requirements for a basic lite mode instance.
Cheers,
Nik.
> On Dec 23, 2020, at 10:50 AM, Łukasz Bromirski wrote:
>
> Nikolas, Mark,
>
>> On 23 Dec 2020, at 02:47, Nikolas Geyer wrote:
>>
>> vRR is basically just the VCP component of vMX
We’re running 16.1R4 and it’s been stable for the most part, aside from a few
annoying cosmetic problems.
Running it on MX480s and MX960s, a variety of REs, a variety of
MPC2/MPC3/MPC4/MPC7 cards, the usual protocols such as BGP, OSPF, MPLS and
RSVP, and a few Tbps of traffic. No MC-LAG unfortunately, though.
You’ve probably got a layer 2 loop in your topology somewhere. OSPF probably
went down due to the RE CPU utilization going through the roof.
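If it is a loop, it usually shows up as a broadcast/multicast storm: input
rates pegged on the looped ports and the RE CPU eaten alive. A few quick
checks (a sketch; the interface name is a placeholder):

    show system processes extensive
    show spanning-tree interface
    monitor interface traffic
    show interfaces ge-0/0/0 extensive

Look for a port whose input rate roughly matches another port’s output rate
with no real hosts behind it.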
> On 22 Feb 2018, at 11:23 pm, Brijesh Patel wrote:
>
> Hello Friends,
>
> We have an EX4300 switch which is connected to an EX4500.
>
Yes, I have experience with multiple layers of back-to-back MC-LAG as you are
describing, with a similar product set: EX9200 (basically an MX) as core and
QFX5k as aggregation and access.
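For anyone who hasn’t built this, one leg of an MC-LAG pairing on the EX9200
side looks roughly like this (a heavily trimmed sketch; the ae number, IDs,
and ICCP peer address are placeholders):

    set interfaces ae0 aggregated-ether-options lacp active
    set interfaces ae0 aggregated-ether-options lacp system-id 00:01:02:03:04:05
    set interfaces ae0 aggregated-ether-options mc-ae mc-ae-id 1
    set interfaces ae0 aggregated-ether-options mc-ae chassis-id 0
    set interfaces ae0 aggregated-ether-options mc-ae mode active-active
    set protocols iccp peer 10.0.0.2 liveness-detection minimum-interval 1000

The same ae config is mirrored on the peer chassis with the other chassis-id,
and back-to-back means the downstream pair does the same thing facing up.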
It worked “ok” for the most part; however, we were pushing numbers on this
setup beyond what 99% of customers