Re: [j-nsp] Juniper publishes Release notes as pdf

2024-03-15 Thread Olivier Benghozi via juniper-nsp
Hi Joe,

Some more details:
A PDF? OK.
A properly formatted PDF? Much better.

About regular release notes: they are not available via web for the most
recent Service releases.

On Fri, 15 Mar 2024 at 19:50, Joe Horton  wrote:

> SR (service release) release notes have been PDF for a while.
>
> Regular release notes are still available via web as previously.
>
>
>
> I’ll gladly pass along the feedback as well that SR notes need improvement.
>
>
>
> Joe
>
>
>
>  Juniper Business Use Only
>
> *From: *juniper-nsp  on behalf of
> Olivier Benghozi via juniper-nsp 
> *Date: *Friday, March 15, 2024 at 1:16 PM
> *To: *Juniper Nsp 
> *Subject: *Re: [j-nsp] Juniper publishes Release notes as pdf
>
> [External Email. Be cautious of content]
>
>
> That's right, it's ridiculous: the person in charge actually just clicks on
> «print as PDF» from their Juniper_internal_only_web_access, and uses the
> Portrait format instead of Landscape: as a result, part of the text is
> frequently cut off on the right of the document.
> No proofreading, no check, no nothing...
>
>
> On Fri, 15 Mar 2024 at 18:41, Andrey Kostin via juniper-nsp <
> juniper-nsp@puck.nether.net> wrote:
>
> >
> > Hi Juniper-NSP readers,
> >
> > Did anybody mention that recent Junos Release Notes are now published as
> > PDF, instead of the usual web page?
> > Here is the example:
> > https://supportportal.juniper.net/s/article/22-2R3-S3-SRN?language=en_US
> > What do you think about it?
> > For me, it's very inconvenient. To click PR links or copy a single
> > paragraph I now have to download the PDF and open it in Acrobat. Please
> > chime in and maybe our voices will be heard.
> >
>
>

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper publishes Release notes as pdf

2024-03-15 Thread Olivier Benghozi via juniper-nsp
That's right, it's ridiculous: the person in charge actually just clicks on
«print as PDF» from their Juniper_internal_only_web_access, and uses the
Portrait format instead of Landscape: as a result, part of the text is
frequently cut off on the right of the document.
No proofreading, no check, no nothing...


On Fri, 15 Mar 2024 at 18:41, Andrey Kostin via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

>
> Hi Juniper-NSP readers,
>
> Did anybody mention that recent Junos Release Notes are now published as
> PDF, instead of the usual web page?
> Here is the example:
> https://supportportal.juniper.net/s/article/22-2R3-S3-SRN?language=en_US
> What do you think about it?
> For me, it's very inconvenient. To click PR links or copy a single
> paragraph I now have to download the PDF and open it in Acrobat. Please
> chime in and maybe our voices will be heard.
>

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QSA adapters and MTU

2023-11-03 Thread Olivier Benghozi via juniper-nsp
Actually, 1G ports are «10G ports operating at 1G speed».
So: the port is configured as 10G on the chassis side, with gigether speed 1G on
the interface side.
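For reference, a minimal sketch of what that split looks like (the port number is
hypothetical, and the exact placement of the 1G speed knob can vary per platform and
release, so check the documentation for your box):

chassis {
    fpc 0 {
        pic 1 {
            port 2 {
                speed 10g;
            }
        }
    }
}
interfaces {
    xe-0/1/2 {
        gigether-options {
            speed 1g;
        }
    }
}

The chassis side keeps the QSA-facing port as a 10G port; the interface side then runs
the resulting interface at 1G for the SFP optic.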

On Fri, 3 Nov 2023 at 16:53, Chris Wopat via juniper-nsp <
juniper-nsp@puck.nether.net> wrote:

> I can test this on our lab box, but can't get 'supported platform' in the
> chassis config for
>
> port 15 {
> ##
> ## Warning: statement ignored: unsupported platform (mx304)
> ##
> speed 1G;
> }
>
> even though it's passing on https://apps.juniper.net/port-checker/mx304/
>  ¯\_(ツ)_/¯
>
> I could perhaps work harder on getting that config right later. It
> certainly lets me commit a ge config with that MTU though. When set as
> speed 10g, I can see the 1g optic inserted, as well as DOM, but it won't
> link.
>
> --Chris
>
>
>
> On Fri, Nov 3, 2023 at 10:16 AM Ola Thoresen via juniper-nsp <
> juniper-nsp@puck.nether.net> wrote:
>
> > On 03.11.2023 16:04, Chris Wopat wrote:
> >
> > > We use them on MX304 at 10g, primarily to get DWDM SFP+ to work. MTU
> > > is fine, it's 9k as a part of LACP on a recent deployment. The adapter
> > > simply passes through lane :0 to the port when configured as QSFP+. If
> > > you insert the adapter and no optic, the device is unaware of its
> > > existence - you only see the SFP+ info.
> > >
> > This is the same use case as for us, and my understanding is exactly
> > that they are just passing a single lane through, and should not really
> > know anything about packets or ethernet frames or anything.
> >
> > But still - people more knowledgeable than me - assure me that there is
> > a limit of 2008 Bytes MTU - at least for 1G.
> >
> > I just can't find this documented anywhere, and I would have thought
> > that more people would have made more fuss about it when they start
> > using it if this is a real issue.
> >
> >
> > > We haven't tested in prod on 1g but i think we did in the lab. can
> > > probably toss something in there if you're really curious. I think the
> > > juniper supported optic page lists QSA adapter support or not. I
> > > thought it was generally supported with Junos 20+ nowadays.
> > >
> > Yes. They list QSA adapter as supported, also for 1G optics, and don't
> > write anything about any MTU limitations in the Hardware Compatibility
> > Tool.
> >
>

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] GRE tunnels on a QFX10002-60C

2022-06-27 Thread Olivier Benghozi via juniper-nsp
I guess that the right thing to do would be to provide a licence-based model 
for the MX304, with an entry-level capacity licence priced as the MX204 currently 
is...


> On 27 Jun 2022 at 18:15, Giuliano C. Medalha via juniper-nsp 
>  wrote:
> 
> MX204 was announced as EoS.
> 
> We have used the MX204 for the last 5 years. It was a huge success for any function 
> it was designed for.
> 
> Juniper only recommends the MX product line to us (for GRE projects).
> 
> Are you going to use the MX304 after this year?
> 
> Even though this box is more expensive (5x at least).
> 
> Or is Juniper thinking of launching a new router with TRIO 6?


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] upgrading an antique 240

2021-07-23 Thread Olivier Benghozi via juniper-nsp
Just in case, did you try a request system storage cleanup before trying the 
recovery snapshot?

I suppose you had a look at
https://kb.juniper.net/InfoCenter/index?page=content&id=KB33892
but it doesn't seem to match your problem, actually.

What about a
show chassis hardware detail | match " MB"
that would show the size of the second disk? Just to check whether there's a 
chance to have the snapshot done on it or not :)

> On 18 Jul 2021 at 18:49, Randy Bush via juniper-nsp 
>  wrote:
> 
> made it from 14.2 to 15.1 so far.  a silly question.  to where are folk
> uploading the to-be-installed images?  my old fingers want /var/tmp, but
> it gets blown away.
> 
> and there ain't a lot of space
> 
> root@r1:/packages # df -h
> Filesystem  SizeUsed   Avail Capacity  Mounted on
> /dev/gpt/junos   32G1.1G 29G 4%/.mount
> 
> r...@r1.dfw> request system snapshot recovery
> Creating image ...
> Compressing image ...
> Image size is 905MB
> ERROR: The OAM volume is too small to store a snapshot

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] fpc1 user.notice logrotate: ALERT exited abnormally with [1]

2021-06-15 Thread Olivier Benghozi
Hi,

This has nothing to do with the RE actually (and nothing to do with the router 
configuration).
These messages come from the embedded Linux inside the MPC in slot 1 («fpc1»), 
when its logrotate is executed. This kind of infra inside the MPCs (a Linux 
running a Juniblob) has existed since the MPC7, I think.
This bug is described in PR1471006 and is not harmful («Two cronjobs on fpc are 
executed at the same time.»).


To see the original logs, log in to the embedded Linux on an MPC7 from 
the CLI of an RE:

> start shell user root
# rsh -Ji fpc3
# tail -n 5 /var/log/user.log

2021-05-31T04:02:01.064586+02:00 cr-co-01-pareq2-re1-fpc3 logrotate: ALERT 
exited abnormally with [1]
2021-06-02T04:02:01.491175+02:00 cr-co-01-pareq2-re1-fpc3 logrotate: ALERT 
exited abnormally with [1]
2021-06-06T04:02:01.404865+02:00 cr-co-01-pareq2-re1-fpc3 logrotate: ALERT 
exited abnormally with [1]
2021-06-12T04:02:01.262606+02:00 cr-co-01-pareq2-re1-fpc3 logrotate: ALERT 
exited abnormally with [1]
2021-06-13T04:02:01.455588+02:00 cr-co-01-pareq2-re1-fpc3 logrotate: ALERT 
exited abnormally with [1]
 


> On 15 Jun 2021 at 15:51, Thomas Mann  wrote:
> 
> Hi Alexandre,
> 
> The log files on the active RE are rotated periodically and there are
> more than 45% available on the /.mount/var fs (5G free).
> I read the syslog messages as they are generated by FPC1, or I'm wrong ?
> 
> Thank you
> Thomas
> 
> 
> On Tue, Jun 15, 2021 at 3:25 PM Alexandre Guimaraes
>  wrote:
>> 
>> Hello Thomas,
>> 
>> Probably, there is no space to allocate new files from the logrotate service.
>>Remove all files >   request system storage cleanup
>> 
>>Check these configuration lines to keep log files under control:
>> 
>>> show configuration system syslog
>> user * {
>>any emergency;
>> }
>> file messages {
>>any notice;
>>authorization info;
>> }
>> file interactive-commands {
>>interactive-commands any;
>> }
>> file LINKS-UP-DOWN {
>>daemon info;
>>match "(SNMP_TRAP|VCCPD_PROTOCOL)";
>>archive size 1m files 10;   <===<
>> }
>> file LOGS-DE-CLI {
>>interactive-commands info;
>>match .*CMDLINE.*;
>>archive size 1m files 10;   <===<
>> }
>> 
>> Att
>> Alexandre
>> 
>> On 15/06/2021 10:07, "juniper-nsp on behalf of Thomas Mann" 
>> > thomas.richard.m...@gmail.com> wrote:
>> 
>>Hi,
>> 
>>Today one of our FPCs started throwing logrotate syslog alerts every minute.
>>Did you face something like this before?
>> 
>>Jun 15 12:38:01 core1-re1 fpc1 user.notice logrotate: ALERT exited
>>abnormally with [1]
>>Jun 15 12:39:01 core1-re1 fpc1 user.notice logrotate: ALERT exited
>>abnormally with [1]
>>Jun 15 12:40:01 core1-re1 fpc1 user.notice logrotate: ALERT exited
>>abnormally with [1]
>>Jun 15 12:41:01 core1-re1 fpc1 user.notice logrotate: ALERT exited
>>abnormally with [1]
>> 
>>Thank you
>>Thomas

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SRX300 stuck in loader

2021-06-07 Thread Olivier Benghozi
Right, there are no USB images for the SRX300.
What is probably needed is another SRX300 for creating a USB bootable 
snapshot (to boot the unbootable SRX300), then a snapshot from the USB to the 
flash (on the unbootable SRX300, once booted).
Or someone with an SRX300 might create and make available an image of such a 
snapshot (in order to dd it to another USB key).

> On 7 Jun 2021 at 16:04, Kody Vicknair  wrote:
> 
> Disregard. Apparently there are no *.img files for the srx300
> 
> -KV
> 
> -Original Message-
> From: juniper-nsp  On Behalf Of Kody Vicknair
> Sent: Monday, June 7, 2021 8:59 AM
> To: Mohammad Khalil <eng.m...@gmail.com>; Juniper List <juniper-nsp@puck.nether.net>
> Subject: Re: [j-nsp] SRX300 stuck in loader
> 
> 
> Download the *.img file. Use rufus/win32diskimager to image the usb. Insert 
> usb into the chassis. >request system halt at now. unplug/reinsert power. 
> Upon boot process select "install junos" via console menu.
> 
> -KV
> 
> 
> -Original Message-
> From: juniper-nsp  On Behalf Of Mohammad 
> Khalil
> Sent: Saturday, June 5, 2021 5:34 PM
> To: Juniper List 
> Subject: [j-nsp] SRX300 stuck in loader
> 
> 
> Greetings
> My firewall SRX300 is stuck in loader. I brought a large USB, formatted it and 
> downloaded the image.
> https://www.juniper.net/documentation/en_US/junos12.1x44/topics/task/installation/security-junos-os-boot-loader-usb-storage-device-srx-series-device-installing.html
> The issue is that it does not work and it mentions the target media is too small.
> What should I do?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204 Maximum Packet Rates

2021-05-20 Thread Olivier Benghozi
By the way this one is public (not sure if relevant or not though):
https://kb.juniper.net/InfoCenter/index?page=content&id=KB33477


> On 20 May 2021 at 14:00, Tobias Heister  wrote:
> 
> Hi,
> 
> MX204 has some limitations in terms of pps rates for smaller packet sizes if 
> inline-flow is configured, compared to e.g. the MX10003; among other things this is related 
> to the pfe/fabric layout (no fabric in the 204). So even if they are the same pfe 
> they might behave differently.
> 
> The details are not public, so you might want to reach out to your partner/SE.
> 
> regards
> Tobias
> 
> On 20.05.2021 12:39, Peter Sievers wrote:
>> Hi Leon,
>> both MX204 and MX10003/LC2103 use the
>> eagle forwarding ASIC; the LC2103 linecard has 3x ASIC,
>> the MX204 has 1x ASIC. The WAN output rate for the eagle
>> pfe is ~110 Mpps for a 100G interface.
>> The assumption is that the traffic on the
>> MX10003 came in over more than one PFE/ASIC.
>> BR,
>> .peter
>> On 20.05.21 11:49, Leon Kramer wrote:
>>> Hello,
>>> 
>>> during an approximate 240 Mpps / 80 Gbps UDP DDOS attack to one target IP
>>> we have experienced a massive and immediate packet loss at an MX204 router.
>>> 
>>> The attack was coming in through MX10003 and MX204. The MX204 was not able
>>> to forward more than 120 Mpps during the attack. The MX10003 forwarded 180
>>> Mpps without any issue.
>>> 
>>> Both routers are running Juniper 18.4R2-S3. The MX204 has all 4 x 100 Gbps
>>> interfaces active in use.
>>> 
>>> Any idea if 120 Mpps for Juniper MX204 is already the hardware limitation?
>>> This would equal only roughly 41 Gbps at the attack's packet size of 43
>>> bytes. We are certain that no policer or firewall filter led to the packet
>>> drops.
>>> 
>>> Anyone has a recommendation what could be done to increase performance?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] JunOS 18, ELS vs non-ELS QinQ native vlan handling.

2021-05-19 Thread Olivier Benghozi
Hi,

Actually, Juniper published PR1568533 about this (as it should have worked like 
KB35261 says, but it did not) – the PR says it's fixed in 19.4R3-S3 too, by the 
way.

Olivier

> On 19 May 2021 at 13:26, Antti Ristimäki  wrote:
> 
> Hi list,
> 
> Just as a follow-up and for possible future reference, 18.4R3-S7 indeed sends 
> the untagged customer frames with only SVLAN tag to QinQ uplink, but 
> 18.4R3-S8 again reverts the behaviour such that those frames are sent 
> double-tagged with both SVLAN and native-vlan. Tested with QFX5110-48S.
> 
> The hidden command "input-native-vlan-push " also seems to 
> work in S8, whereas in S7 it doesn't seem to have any impact.
> 
> Antti
> 
> - On 9 Apr, 2021, at 13:17, Antti Ristimäki antti.ristim...@csc.fi 
>  wrote:
> 
>> Hi Karsten,
>> 
>> Thanks for the link, I wasn't aware of such KB article, although there are
>> several other articles related to native-vlan handling.
>> 
>> The QFX5110 in question was previously running 17.3R3-S3 and there the
>> native-vlan was indeed double-tagged on the uplink, just like the table says.
>> But despite that the table says, at least 18.4R3-S7 sends the frames
>> single-tagged, no matter whether or not "input-native-vlan-push" is 
>> configured.
>> 
>> I will try to sort this out with JTAC.
>> 
>> Antti
>> 
>> - Karsten Thomann karsten_thom...@linfre.de
>> wrote:
>> 
>>> Hi,
>>> 
>>> I haven't tested the behvaior, but to avoid the big surprises you should at
>>> least check the tables in
>>> the kb:
>>> https://kb.juniper.net/InfoCenter/index?page=content&id=KB35261&actp=RSS
>>> 
>>> As you're not writing the exact release, there was a change if you upgraded 
>>> from
>>> R1 or R2.
>>> 
>>> Kind regards
>>> Karsten
>>> 
>>> On Friday, 9 April 2021, 10:02:21, Antti Ristimäki wrote:
 Hi list,
 
 Returning to this old thread. It seems that the behaviour has again 
 changed,
 because after upgrading QFX5110 to 18.4R3-S7 the switch does not add the
 native-vlan tag when forwarding the frame to QinQ uplink. Previously with
 version 17.3 the switch did add the native-vlan tag along with the S-tag.
 It seems that "input-native-vlan-push " is available as a
 hidden command in 18.4R3-S7, but it doesn't seem to have any impact on the
 behaviour.
 
 Any experience from others?
 
 Antti
 
 - On 22 Mar, 2019, at 19:03, Alexandre Snarskii s...@snar.spb.ru wrote:
> Hi!
> 
> Looks like JunOS 18.something introduced an incompatibility of native
> vlan handling in QinQ scenario between ELS (qfx, ex2300) and non-ELS
> switches: when ELS switch forwards untagged frame to QinQ, it now adds
> two vlan tags (one specified as native for interface and S-vlan) instead
> of just S-vlan as it is done by both non-ELS and 'older versions'.
> 
> As a result, if the other end of tunnel is non-ELS (or third-party)
> switch, it strips only S-vlan and originally untagged frame is passed
> with vlan tag :(
> 
> Are there any way to disable this additional tag insertion ?
> 
> PS: when frames sent in reverse direction, non-ELS switch adds only
> S-vlan and this frame correctly decapsulated and sent untagged.
> 
> ELS-side configuration (ex2300, 18.3R1-S1.4. also tested with
> qfx5100/5110):
> 
> [edit interfaces ge-0/0/0]
> flexible-vlan-tagging;
> native-vlan-id 1;
> mtu 9216;
> encapsulation extended-vlan-bridge;
> unit 0 {
> 
>   vlan-id-list 1-4094;
>   input-vlan-map push;
>   output-vlan-map pop;
> 
> }
> 
> (when native-vlan-id is not configured, untagged frames are not
> accepted at all).

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] TCP-MSS adjust does not work on MPC10E

2021-04-16 Thread Olivier Benghozi
I guess you'll have to invoke the JTAC :-/

Interested in their answer.

> On 16 Apr 2021 at 19:55, Louis Ho  wrote:
> 
> I disabled hyper mode but it still does not work
> 
> 
> user-lo...@mx960.ol.lax.rnbn.net.RE0# show forwarding-options hyper-mode
> no-hyper-mode;
> {master}[edit]
> 
> user-lo...@mx960.ol.lax.rnbn.net.RE0# run show forwarding-options hyper-mode
> Current mode: normal mode
> Configured mode: normal mode
> {master}[edit]
> user-lo...@mx960.ol.lax.rnbn.net.RE0#
> 
> 
> 
> 
>> On Apr 17, 2021, at 12:52 AM, Olivier Benghozi <olivier.bengh...@wifirst.fr> wrote:
>> 
>> MPC10. SCBE3? SCBE3 => hyper-mode by default.
>> As there's no hypermode firmware in MPC-16x10, only the MPC10 would really 
>> be in hyper mode, and therefore the firmware loaded in the MPC10 would be 
>> the faster hypermode one, while the firmware in the 16x10 will still be the 
>> standard one with all the features.
>> 
>> I wonder if tcp-mss is expected to work in Hyper-Mode, in fact. It's not 
>> documented... but this would explain a difference of behaviour between both 
>> cards.
>> 
>> You should try:
>> show forwarding-options hyper-mode
>> 
>> 
>> Of course if you're in standard mode and/or not using SCBE3, these thoughts 
>> are to be thrown away :)
>> 
>> 
>>> On 16 Apr 2021 at 05:53, Louis Ho via juniper-nsp 
>>> <juniper-nsp@puck.nether.net> wrote:
>>> 
>>> 
>>> From: Louis Ho <lo...@rootglobal.com>
>>> Subject: TCP-MSS adjust does not work on MPC10E
>>> Date: 16 April 2021 at 05:53:13 UTC+2
>>> To: "juniper-nsp@puck.nether.net" <juniper-nsp@puck.nether.net>
>>> 
>>> 
>>> 
>>> 
>>> Does MPC10E support changing TCP-MSS on transit packets?
>>> 
>>> 
>>> 
>>> Today I need to set TCP-MSS to 1400,
>>> 
>>> 
>>> My topological map
>>> 
>>> 
>>> Server A > EX4300 A > MX960 A xe-4/0/0> > EX4300 B > server B
>>> 
>>> Serve A > EX4300 A > MX960 A et-5/0/0 > QFX-5110  > EX4300 C > server C
>>> 
>>> 
>>> There are two line cards on my MX960 router, MPC 3D 16x 10GE and MPC10E 3D 
>>> MRATE-10xQSFPP
>>> 
>>> I have configured tcp-mss on the two MPC interfaces, but the mss will only 
>>> be adjusted if the traffic passes through xe-4/0/0,
>>> but the traffic through et-5/0/0 does not work, the mss still stays at 1460.
>>> 
>>> 
>>> 
>>> 
>>> show interfaces xe-4/0/0 | match mss
>>> tcp-mss 1400;
>>> 
>>> tcpdump by serverB
>>> https: Flags [S], seq 3239034622, win 29200, options [mss 1400,sackOK,TS
>>> 
>>> 
>>> show interfaces et-5/0/0 | match mss
>>> tcp-mss 1400;
>>> 
>>> tcpdump by serverC
>>> Flags [S], seq 3355130726, win 29200, options [mss 1460,sackOK,TS val
>>> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] TCP-MSS adjust does not work on MPC10E

2021-04-16 Thread Olivier Benghozi
MPC10. SCBE3? SCBE3 => hyper-mode by default.
As there's no hypermode firmware in MPC-16x10, only the MPC10 would really be 
in hyper mode, and therefore the firmware loaded in the MPC10 would be the 
faster hypermode one, while the firmware in the 16x10 will still be the 
standard one with all the features.

I wonder if tcp-mss is expected to work in Hyper-Mode, in fact. It's not 
documented... but this would explain a difference of behaviour between both 
cards.

You should try:
show forwarding-options hyper-mode


Of course if you're in standard mode and/or not using SCBE3, these thoughts 
are to be thrown away :)


> On 16 Apr 2021 at 05:53, Louis Ho via juniper-nsp 
>  wrote:
> 
> 
> From: Louis Ho 
> Subject: TCP-MSS adjust does not work on MPC10E
> Date: 16 April 2021 at 05:53:13 UTC+2
> To: "juniper-nsp@puck.nether.net" 
> 
> 
> 
> 
> Does MPC10E support changing TCP-MSS on transit packets?
> 
> 
> 
> Today I need to set TCP-MSS to 1400,
> 
> 
> My topological map
> 
> 
> Server A > EX4300 A > MX960 A xe-4/0/0> > EX4300 B > server B
> 
> Serve A > EX4300 A > MX960 A et-5/0/0 > QFX-5110  > EX4300 C > server C
> 
> 
> There are two line cards on my MX960 router, MPC 3D 16x 10GE and MPC10E 3D 
> MRATE-10xQSFPP
> 
> I have configured tcp-mss on the two MPC interfaces, but the mss will only be 
> adjusted if the traffic passes through xe-4/0/0,
> but the traffic through et-5/0/0 does not work, the mss still stays at 1460.
> 
> 
> 
> 
> show interfaces xe-4/0/0 | match mss
> tcp-mss 1400;
> 
> tcpdump by serverB
> https: Flags [S], seq 3239034622, win 29200, options [mss 1400,sackOK,TS
> 
> 
> show interfaces et-5/0/0 | match mss
> tcp-mss 1400;
> 
> tcpdump by serverC
> Flags [S], seq 3355130726, win 29200, options [mss 1460,sackOK,TS val
> 
> 
> 
> Cheers
> 
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] qfx5100 help with Q in Q

2020-08-19 Thread Olivier Benghozi
Hi,
I posted some working config last week in this ML (working for EX4600 and 
therefore QFX5100 – but on 18.4R3).

> On 19 Aug 2020 at 14:40, John Brown  wrote:
> 
> Switch A is running 18.1R3.3
> Switch B is running 18.3R2.7
> Both are qfx5100-48s-6q.
> 
> [...]
> 
> I am trying to QinQ traffic between Switch A and B.
> 
> [...]
> 
> I've tried "All-in-one Bundling" and several other configs and have
> looked at docs on Juniper site.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper EX/QFX vlan-id-list limitation

2020-08-13 Thread Olivier Benghozi
Our QinQ usage model is many UNIs toward one NNI, therefore we never have 
several QinQ instances per UNI. Maybe that's the difference between your usage and 
ours?
If you need to have several QinQ svlans on one UNI port, I guess you'll be 
bitten again by the number-of-IDs limit (but maybe several ranges are 
possible?).


Anyway, here are the QinQ configs we use on EX4600 (so: ELS style):


NNI interface (also using real vlans on unit 0, completely independent of QinQ 
ones – this mix works only on 4600, not on smaller switches):

ae0 {
flexible-vlan-tagging;
mtu 9216;
encapsulation flexible-ethernet-services;
unit 0 {
family ethernet-switching {
interface-mode trunk;
vlan {
members [ some vlan we use as real vlans, having nothing to 
do with QinQ ];
}
}
}
unit 3000 {
description "Q-in-Q My Customer 1";
encapsulation vlan-bridge;
vlan-id 3000;
}
unit 3001 {
description "Q-in-Q  My Customer 2";
encapsulation vlan-bridge;
vlan-id 3001;
}
}



UNI interfaces:

ae3 {
description "My Customer 1";
flexible-vlan-tagging;
mtu 9216;
encapsulation extended-vlan-bridge;
unit 3000 {
description "Q-in-Q My Customer 1";
vlan-id-list 2-4094;
input-vlan-map push;
output-vlan-map pop;
}
}
ae4 {
description "My Customer 2";
flexible-vlan-tagging;
native-vlan-id 1;
mtu 9216;
encapsulation extended-vlan-bridge;
unit 3001 {
description "Q-in-Q My Customer 2";
vlan-id-list 1-4094;
input-vlan-map push;
output-vlan-map pop;
}
}


QinQ vlans:

vlans {
qinq-3000 {
description "Q-in-Q My Customer 1";
interface ae0.3000;
interface ae3.3000;
switch-options {
no-mac-learning;
}
}
qinq-3001 {
description "Q-in-Q My Customer 2";
interface ae0.3001;
interface ae4.3001;
switch-options {
no-mac-learning;
}
}
}


> On 13 Aug 2020 at 23:04, Robin Williams  wrote:
> 
> Hi Olivier,
> 
> Thanks for the reply - it does seem rather odd that I can't do on a new high 
> end EX or QFX switch, what I used to be able to do on a bottom end EX2200 
> with the dot1q-tunnelling stanza.
> 
> Regarding your workaround - were you running this config on the same physical 
> interface?  As that won't commit in this scenario (as it presumably doesn't 
> know which vlans to push into which outer..)
> 
> flexible-vlan-tagging;
> encapsulation extended-vlan-bridge;
> unit 3104 {
>vlan-id-list 1-4094;
>input-vlan-map push;
>output-vlan-map pop;
> }
> unit 3107 {
>vlan-id-list 1-4094;
>input-vlan-map push;
>output-vlan-map pop;
> }
> 
> {master:0}[edit interfaces ge-0/0/1]
> # commit check
> [edit interfaces ge-0/0/1]
>  'unit 3107'
>    duplicate VLAN-ID on interface
> error: configuration check-out failed
> 
> Cheers,
> Rob
> 
> 
> 
> 
> 
> -Original Message-
> From: juniper-nsp  On Behalf Of Olivier 
> Benghozi
> Sent: 12 August 2020 19:12
> To: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] Juniper EX/QFX vlan-id-list limitation
> 
> Hi,
> 
>> We miraculously found this doc before implementing such a QinQ conf on EX4600 
>> (which are low-end QFX5100s).
> So we didn't try to test the switch with this case, and we directly used such 
> config: instead of vlan-id-list [some ids], we (nearly) always use the same 
> one everywhere: vlan-id-list 2-4094. Problem fixed before it appeared.
> 
> Sometimes we use vlan-id-list 1-4094 and native-vlan 1, when some untagged 
> traffic must be carried too – in this case the untagged traffic is 
> double-tagged on the NNI port with dot1q tag 1 as cvlan – there's a thread 
> about that in this mailing-list by the way.
> 
> 
>> Le 12 août 2020 à 18:18, Robin Williams via juniper-nsp 
>>  a écrit :
>> 
>> Has anyone come across PR1395312 before?
>> 
>> “On ACX/EX/QFX platforms, if VLAN ID lists are configured under a single 
>> physical interface, Q-in-Q might stop working for certain VLAN ID lists”.
>> 
>> [...]
>> 
>> interfaces {
>>   xe-0/1/0 {
>>   flexible-vlan-tagging;
>>   encapsulation extended-vlan-bridge;
>>   unit 3104 {
>>   vlan-id-list [ 1102 1128 1150 1172 4000 4001 4002 4003];
>>   input-vlan-map push;
>>   ou

Re: [j-nsp] Juniper EX/QFX vlan-id-list limitation

2020-08-12 Thread Olivier Benghozi
Hi,

We miraculously found this doc before implementing such a QinQ conf on EX4600 
(which are low-end QFX5100s).
So we didn't try to test the switch with this case, and we directly used such 
config: instead of vlan-id-list [some ids], we (nearly) always use the same one 
everywhere: vlan-id-list 2-4094. Problem fixed before it appeared.

Sometimes we use vlan-id-list 1-4094 and native-vlan 1, when some untagged 
traffic must be carried too – in this case the untagged traffic is 
double-tagged on the NNI port with dot1q tag 1 as cvlan – there's a thread 
about that in this mailing-list by the way.


> On 12 Aug 2020 at 18:18, Robin Williams via juniper-nsp 
>  wrote:
> 
> Has anyone come across PR1395312 before?
> 
> “On ACX/EX/QFX platforms, if VLAN ID lists are configured under a single 
> physical interface, Q-in-Q might stop working for certain VLAN ID lists”.
> 
> [...]
> 
> interfaces {
>xe-0/1/0 {
>flexible-vlan-tagging;
>encapsulation extended-vlan-bridge;
>unit 3104 {
>vlan-id-list [ 1102 1128 1150 1172 4000 4001 4002 4003];
>input-vlan-map push;
>output-vlan-map pop;
>}
> 
> The docs page for ‘vlan-id-lists’ does mention:
> https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/vlan-id-list-edit-bridge-domains.html
> 
> “WARNING On some EX and QFX Series switches, if VLAN identifier list 
> (vlan-id-list) is used for Q-in-Q tunnelling, you can apply no more than 
> eight VLAN identifier lists to a physical interface.”

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] track-igp-metric in LDP

2020-08-02 Thread Olivier Benghozi
That's right: if you want your LDP labeled traffic to follow your IGP costs 
instead of using unexpected paths (and you probably want that in fact, as, if you 
want to do something else, you usually use RSVP/MPLS-TE or Segment Routing), you 
just need track-igp-metric (and therefore it's always useful/needed/best 
practice).
As Michael wrote, by default (without this knob) you would have all LSP/LDP 
paths at the same cost (which is probably stupid).
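For reference, the knob itself is a one-liner:

protocols {
    ldp {
        track-igp-metric;
    }
}

After a commit, show route table inet.3 should show the LDP routes carrying the IGP
metric instead of the flat default of 1.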

> On 2 Aug 2020 at 16:21, Chen Jiang  wrote:
> 
> Hi! Dave
> 
> Thanks for clarifying. So you mean the BGP path selection tie breaker for IGP
> metric is happening in inet.3 (LDP) but not inet.0 (IGP like OSPF/ISIS), so
> we need "track-igp-metric" to introduce the IGP metric into LDP. Is that
> correct?
> 
> BR!
> 
> James Chen
> 
> On Sun, Aug 2, 2020 at 8:17 PM Dave Bell  > wrote:
> 
>> If you are running next hop self on your BGP routes at the edge, the best
>> path will be via a loopback in an LSP.
>> 
>> If you have two identical routes, one of the tie breakers is IGP
>> preference. If that knob isn’t set the IGP cost will be 1 for everything,
>> and you will progress down to less helpful tie breakers like router id.
>> 
>> Regards,
>> Dave
>> 
>> On Sun, 2 Aug 2020 at 11:58, Chen Jiang  wrote:
>> 
>>> Hi! Michael
>>> 
>>> Thanks for your clarification.
>>> 
>>> Sure, it will let LDP use IGP metric, but is there any benefit?
>>> 
>>> Cause per my understanding LDP only works for label distributing, not path
>>> selection, and LDP always follows the IGP path, so what is the difference
>>> or benefit to add additional configuration knob to let LDP use IGP metric?
>>> 
>>> BR!
>>> 
>>> Chen Jiang
>>> 
>>> On Sun, Aug 2, 2020 at 6:32 PM Michael Hallgren  wrote:
>>> 
 Hi James,
 
 From memory, Junos assigns metric 1 by default to "LDP routes", not IGP
 metric, unless you push this button.
 
 Cheers,
 mh
 --
 *From:* Chen Jiang 
 *Sent:* Sunday, 2 August 2020 12:10
 *To:* Juniper List
 *Subject:* [j-nsp] track-igp-metric in LDP
 
 Hi! Experts
 
 Sorry for disturbing, I am curious about track-igp-metric knob under
>>> LDP,
 is there any scenarios it will be useful? I think ldp is just a label
 distribution protocol, the forwarding path always follows the IGP
>>> shortest
 path, is there any benefit for using track-igp-metric?
 
 Thanks for your help!

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] How to shut down laser on any optics

2020-06-24 Thread Olivier Benghozi
This would prove once again that vendor_endorsed_and_overcharged optics are 
just a useless scam.

This being said, we didn't experience this with either Skylane or Cubeoptics 
transceivers (currently on MPC7-MRATE / 18.4R[2-3]-[S*]). It «just works» as we 
expect (the laser is switched off when the channel is disabled in the config). I 
would only expect to see another behaviour from some buggy/cheap/crappy network 
gear, however lite the official specifications are, and would therefore see 
this issue as a bug in Juniper's case.

Example of 2 different channels of a QSFP+ PLR4 (MTP):

Physical interface: xe-3/1/1:0 (disabled)
  Lane 0
Laser bias current:  0.000 mA
Laser output power:  0.000 mW / -40.04 dBm
Tx laser disabled alarm   :  On
Physical interface: xe-3/1/1:1 (in use)
  Lane 1
Laser bias current:  32.453 mA
Laser output power:  0.501 mW / -3.00 dBm
Tx laser disabled alarm   :  Off
[...]




> On 24 Jun 2020 at 18:59, Jared Mauch  wrote:
> 
> They would either need to have the GPIO wired for this pin on the PCBs and/or 
> implement disable sets this bit AND the optic would need to honor it.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Subscriber DHCPv6 lease time for IA_NA from Radius Server

2020-03-12 Thread Olivier Benghozi
Maybe
liveness-detection method layer2-liveness-detection
and/or
overrides client-negotiation-match incoming-interface
so the binding just disappears quicker on the MX side?
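
For what it's worth, a rough sketch of where those two knobs would sit for a DHCPv6
local server; the hierarchy below is written from memory as an assumption, so verify it
against your release before relying on it:

system {
    services {
        dhcp-local-server {
            dhcpv6 {
                overrides {
                    client-negotiation-match incoming-interface;
                }
                liveness-detection {
                    method {
                        layer2-liveness-detection;
                    }
                }
            }
        }
    }
}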


> On 11 Mar 2020 at 11:29, Sebastian Wiesinger  wrote:
> 
> I'm currently testing IPv6 subscriber termination (PPP/L2TP) on an
> MX204 (18.4R2) and I have a bit of a problem with DHCPv6 IA_NA address
> allocation.
> 
> By default the lease time for the address is one day (86400 seconds)
> when the address is received by Radius.
> 
> The Cisco CPE configures this address on the Dialer interface which
> does not go down when the PPP session is cleared. So the address stays
> there for a day at least which is suboptimal.
> 
> We want to reduce the lease time so that it is detected sooner that
> the address is invalid and can be released / reused.
> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Next-table, route leaking, etc.

2020-02-10 Thread Olivier Benghozi
I realise I wrote «next-hop» instead of «next-table» by mistake just about 
everywhere :)

> On 10 Feb 2020 at 05:51, Olivier Benghozi  wrote:
> 
> To deal with this on MX, in a way that looks like what we did previously on 
> Redback gears (old beast but at least on them this «just works» with double 
> lookup), we use a «third party» VRF. This is a dedicated empty VRF on each 
> router with only a bunch of static next-table routes. It is a 
> no-vrf-advertise VRF (config knob normally used in hub and spoke 
> architectures), so it doesn't export its content toward other PEs, but only 
> locally (auto-export).
> 
> Each time we have a discard route in an origin VRF that we need to import in 
> some other VRF (local or not, but let's say local), we create a copy of this 
> route in this special VRF, with a next-hop attribute instead of the discard, 
> and a special community.
> This is this route that is finally imported by other various local VRFs using 
> auto-export (so the import policy is the same for all routes for any MPLS 
> VPN, local routes or not).
> 
> Additionally:
> - all the normal VRFs have a first term in their export policy to prevent the 
> re-export of those special imported routes (based on the special community – 
> this is because no-vrf-advertise imported routes become exportable to other 
> PEs once locally imported in another VRF using auto-export).
> - in all our import policies, importing static (but also aggregate, in fact 
> all but bgp), get its preference changed to more than 5 (default static 
> preference – we use 168 but whatever), so once the next-hop route is imported 
> in the VRF that contains the former discard route, no route loop ensues (the 
> next-hop route is Inactive because of Route Preference).
> 
> Toward the other PEs, the «true» discard route is exported from its VRF.
> 
> 
> In importing VRFs, the imported next-hop route wins over the imported discard 
> route (Inactive reason: Number of gateways).
> 
> 
> The goals behind all this stuff were to:
> - avoid creating a next-hop route in each VRF that needed the route
> - keep the same import/export policy standards for about all the VRFs
> - use the same conf whatever the null/discard route is to be imported in 
> local or distant VRFs (no difference like on IOS/SEOS and so on)
> - use auto-export, so no need for ribgroups (much closer to what was done on 
> other vendors gears)
> 
> 
>> On 10 Feb 2020 at 04:50, Nathan Ward <juniper-...@daork.net> wrote:
>> 
>> Sure - there’s a number of solutions like that available. LT, next-table 
>> routes, etc. LT means more processing than a next-table, but in some ways is 
>> a bit less fiddly.
>> 
>> I’m hoping there’s a way to bypass this entirely - making packets following 
>> imported routes work the same whether the exporter of the route is local or 
>> remote.
>> 
>>> On 10/02/2020, at 4:27 PM, Larry Jones >> <mailto:ljo...@bluphisolutions.com>> wrote:
>>> 
>>> Try a tunnel (lt) interface.
>>> 
>>> 
>>>  Original message 
>>> From: Nathan Ward mailto:juniper-...@daork.net>>
>>> Date: 2/9/20 6:08 PM (GMT-09:00)
>>> To: Juniper NSP >> <mailto:juniper-nsp@puck.nether.net>>
>>> Subject: [j-nsp] Next-table, route leaking, etc.
>>> 
>>> Hi all,
>>> 
>>> Something that’s always bugged me about JunOS, is when you import a route 
>>> from another VRF on JunOS, the attributes follow it - i.e. if it is a 
>>> discard route, you get a discard route imported.
>>> (Maybe this happens on other platforms, I honestly can’t remember, it’s 
>>> been a while..)
>>> 
>>> This is an issue where you have a VRF with say a full table in it, and want 
>>> to generate a default discard for other VRFs to import if they want 
>>> internet access. Works great if the VRF importing it is on a different PE, 
>>> but, if it’s local it simply gets a discard route, so packets get dropped 
>>> rather than doing a second lookup.
>>> 
>>> You can solve this, sort of, with a next-table route, but things can get a 
>>> little messy, so hoping for something more elegant.
>>> 
>>> I’m trying to figure out if there’s a better way to do this, i.e. to make 
>>> it as though packets following leaked routes behave as though they are from 
>>> a different router.
>>> 
>>> Anyone got any magic tricks I’ve somehow missed?
> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Next-table, route leaking, etc.

2020-02-09 Thread Olivier Benghozi
To deal with this on MX, in a way that looks like what we did previously on 
Redback gears (old beast but at least on them this «just works» with double 
lookup), we use a «third party» VRF. This is a dedicated empty VRF on each 
router with only a bunch of static next-table routes. It is a no-vrf-advertise 
VRF (config knob normally used in hub and spoke architectures), so it doesn't 
export its content toward other PEs, but only locally (auto-export).

Each time we have a discard route in an origin VRF that we need to import in 
some other VRF (local or not, but let's say local), we create a copy of this 
route in this special VRF, with a next-hop attribute instead of the discard, 
and a special community.
This is this route that is finally imported by other various local VRFs using 
auto-export (so the import policy is the same for all routes for any MPLS VPN, 
local routes or not).

Additionally:
- all the normal VRFs have a first term in their export policy to prevent the 
re-export of those special imported routes (based on the special community – 
this is because no-vrf-advertise imported routes become exportable to other PEs 
once locally imported in another VRF using auto-export).
- in all our import policies, importing static (but also aggregate, in fact all 
but bgp), get its preference changed to more than 5 (default static preference 
– we use 168 but whatever), so once the next-hop route is imported in the VRF 
that contains the former discard route, no route loop ensues (the next-hop 
route is Inactive because of Route Preference).

Toward the other PEs, the «true» discard route is exported from its VRF.


In importing VRFs, the imported next-hop route wins over the imported discard 
route (Inactive reason: Number of gateways).


The goals behind all this stuff were to:
- avoid creating a next-hop route in each VRF that needed the route
- keep the same import/export policy standards for about all the VRFs
- use the same conf whatever the null/discard route is to be imported in local 
or distant VRFs (no difference like on IOS/SEOS and so on)
- use auto-export, so no need for ribgroups (much closer to what was done on 
other vendors gears)
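
To illustrate the description above, a rough sketch of the moving parts; all names, RD/RT
values and the community below are placeholders, and the real import/export policies are
obviously more complete than this:

routing-instances {
    NEXTTABLE {
        instance-type vrf;
        route-distinguisher 65000:999;
        vrf-target target:65000:999;
        no-vrf-advertise;
        routing-options {
            auto-export;
            static {
                route 0.0.0.0/0 {
                    next-table INTERNET.inet.0;
                    community 65000:9999;
                }
            }
        }
    }
}
policy-options {
    community LEAKED members 65000:9999;
    policy-statement STANDARD-VRF-EXPORT {
        term no-reexport-leaked-copies {
            from community LEAKED;
            then reject;
        }
        /* ... the usual export terms ... */
    }
    policy-statement STANDARD-VRF-IMPORT {
        term demote-non-bgp {
            from protocol [ static aggregate ];
            then preference 168;
        }
        /* ... the usual import terms ... */
    }
}

The dedicated VRF holds the next-table copy of the discard route (tagged with the special
community), the first export term keeps that copy from being re-exported toward other PEs,
and the preference bump keeps the copy inactive in the VRF that owns the original discard
route.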


> Le 10 févr. 2020 à 04:50, Nathan Ward  a écrit :
> 
> Sure - there’s a number of solutions like that available. LT, next-table 
> routes, etc. LT means more processing than a next-table, but in some ways is 
> a bit less fiddly.
> 
> I’m hoping there’s a way to bypass this entirely - making packets following 
> imported routes work the same whether the exporter of the route is local or 
> remote.
> 
>> On 10/02/2020, at 4:27 PM, Larry Jones > > wrote:
>> 
>> Try a tunnel (lt) interface.
>> 
>> 
>>  Original message 
>> From: Nathan Ward 
>> Date: 2/9/20 6:08 PM (GMT-09:00)
>> To: Juniper NSP 
>> Subject: [j-nsp] Next-table, route leaking, etc.
>> 
>> Hi all,
>> 
>> Something that’s always bugged me about JunOS, is when you import a route 
>> from another VRF on JunOS, the attributes follow it - i.e. if it is a 
>> discard route, you get a discard route imported.
>> (Maybe this happens on other platforms, I honestly can’t remember, it’s been 
>> a while..)
>> 
>> This is an issue where you have a VRF with say a full table in it, and want 
>> to generate a default discard for other VRFs to import if they want internet 
>> access. Works great if the VRF importing it is on a different PE, but, if 
>> it’s local it simply gets a discard route, so packets get dropped rather 
>> than doing a second lookup.
>> 
>> You can solve this, sort of, with a next-table route, but things can get a 
>> little messy, so hoping for something more elegant.
>> 
>> I’m trying to figure out if there’s a better way to do this, i.e. to make it 
>> as though packets following leaked routes behave as though they are from a 
>> different router.
>> 
>> Anyone got any magic tricks I’ve somehow missed?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SRX3xx VPN Client - NCP alternatives?

2019-11-08 Thread Olivier Benghozi
We were on 3.2 until last week, then updated to 4.0 this week. 

> On 8 Nov 2019 at 02:26, Nathan Ward  wrote:
> 
>> On 8/11/2019, at 2:13 PM, Olivier Benghozi  
>> wrote:
>> 
>> Using split tunneling (and split DNS) with this here, on several macs (and 
>> good^H^Hold SRX2xx).
>> It usually works properly (the routes to VPNize are configured statically 
>> within the profile config).
>> Never seen such /1 routes.
>> I know that «here it works» isn't that helpful, but at least this is how our 
>> mileage varies…
> 
> You on 4.0? Came out a few days ago.
> 
> The NCP support are saying they can’t reproduce, so, time to fire up a VM to 
> test it in I suppose…

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SRX3xx VPN Client - NCP alternatives?

2019-11-07 Thread Olivier Benghozi
Using split tunneling (and split DNS) with this here, on several macs (and 
good^H^Hold SRX2xx).
It usually works properly (the routes to VPNize are configured statically 
within the profile config).
Never seen such /1 routes.
I know that «here it works» isn't that helpful, but at least this is how our 
mileage varies...

> On 8 Nov 2019 at 01:31, Nathan Ward  wrote:
> 
> We’re using the NCP Secure Entry client for Mac.

> 
> 
> They’ve come out with a version 4.0 recently, which supposedly has better 
> compatibility with OS X 10.15. I’ve installed it.
> In “take all the traffic” mode, it installs a couple of /1 routes so they 
> longest prefix match instead of default. Fine.
> In “split tunneling” mode, it *still* installs those /1 routes, but with a 
> next hop of 0.0.0.1, so all of your non-VPN traffic is just dumped on the 
> floor. Unlike split tunnelling mode, when you turn off the VPN connection, it 
> leaves the broken routes in the table.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX10008 and sFlow

2019-10-14 Thread Olivier Benghozi
Hi,
you probably don't really want to configure the older sFlow monitoring these 
days (with its various limitations); what you probably really need is to 
configure inline IPFIX flow monitoring, as it is supported by QFX10k devices.
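
For illustration, the usual shape of an inline IPFIX setup looks roughly like this;
collector and source addresses, names and the sampling rate are placeholders, and the
exact hierarchy (notably the chassis binding) is worth checking against the QFX10k
documentation for your release:

services {
    flow-monitoring {
        version-ipfix {
            template ipv4-template {
                flow-active-timeout 60;
                flow-inactive-timeout 30;
                ipv4-template;
            }
        }
    }
}
chassis {
    fpc 0 {
        sampling-instance SAMPLING-1;
    }
}
forwarding-options {
    sampling {
        instance {
            SAMPLING-1 {
                input {
                    rate 1000;
                }
                family inet {
                    output {
                        flow-server 192.0.2.10 {
                            port 4739;
                            version-ipfix {
                                template {
                                    ipv4-template;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 192.0.2.1;
                        }
                    }
                }
            }
        }
    }
}

Plus family inet sampling input on the interfaces you want to account.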

> On 14 Oct 2019 at 19:49, Tim Vollebregt  wrote:
> 
> I’m toying around with some QFX10008’s in a customer PoC.
> Having some issues with sFlow and can’t find proper documentation in regards 
> to this.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] QFX5100 and BGP graceful-shutdown in 19.1

2019-08-23 Thread Olivier Benghozi
The most amazing thing is that in 19.1 they at last support another «new» «on the edge» 
feature: bgp session shutdown (not just deactivate), 21 years later :)

> On 20 Aug 2019 at 10:39, Sebastian Wiesinger  wrote:
> 
> JunOS 19.1 brings support for the BGP graceful shutdown mechanism
> (RFC8326):
> 
> set protocols bgp graceful-shutdown receiver

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RSVP-TE broken between pre and post 16.1 code?

2019-08-15 Thread Olivier Benghozi
Looks like the PR about this is now available: PR1443811 «RSVP refresh-timer 
interoperability between 15.1 and 16.1+».

«Path message with long refresh interval (equal to or more than 20 minutes) 
from a node that does not support Refresh-interval Independent RSVP (RI-RSVP) 
is dropped by the receiver with RI-RSVP.»


> On 2 Jul 2019 at 07:22, Simon Dixon  wrote:
> 
> I had that issue between QFX5110's and MX's. Some feature at the time
> forced me to run 17.4 on the QFX's and they wouldn't establish LSP's with
> older MX80's in our fleet that were still running 14.2.
> 
> I had to either downgrade the QFX's to 15.1 or upgrade the MX's to 16.1 or
> greater.  I ended up grading the MX's as they were overdue anyway.
> 
> Simon.
> 
> 
> On Fri, 28 Jun 2019 at 22:15,  wrote:
> 
>> Hi gents,
>> 
>> Just wondering if anyone experienced RSVP-TE incompatibility issues when
>> moving from pre 16.1 code to post 16.1 code.
>> Didn't get much out of Juniper folks thus far so I figured I'll ask here as
>> well.
>> 
>> The problem we're facing is that the case where 17 code is LSP head-end and 15
>> code
>> is tail-end works, but in the opposite direction, 17/15-to-17 (basically
>> cases where 17 is the LSP tail-end), the LSP signalling fails.
>> Trace reveals that the 17 gets the PATH message for bunch of LSPs, accepts
>> it (yes reduction and acks are used), creates the session, then deletes it
>> right away for some reason.
>> Our testing suggests there are two workarounds for this:
>> You might be aware that in 16.1, among other RSVP-TE changes, the default
>> refresh-time (governing generation of successive refresh messages
>> Path/Resv)
>> changed to 1200s - so no, contrary to what you might think, making it 1200 on the 15 side won't do,
>> it
>> has to be less (e.g. 1199s).
>> If you want to keep the refresh time at 1200 or higher then another option,
>> strangely enough, is to disable CSPF on the affected LSPs (I didn't know that
>> SPF/CSPF changes the contents of the PATH msg such that in one case the 17 code is cool
>> with the PATH msg and in the other case not).
>> 
>> Would appreciate any pointers.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] JunOS 18, ELS vs non-ELS QinQ native vlan handling.

2019-07-23 Thread Olivier Benghozi
Can you confirm Arista behaviour on this point? :)

On our side we now have a working config on EX4600 18.4R2 here (one NNI 
interface with a simultaneous ethernet-switching family unit 0 plus multiple QinQ 
vlan-bridge tagged units, and a bunch of extended-vlan-bridge UNI interfaces), 
but the native-vlan isn't a problem here.

> On 23 Jul 2019 at 07:57, Mark Tinka  wrote:
> 
> On 23/Jul/19 01:45, Olivier Benghozi wrote:
> 
>> So if I understand well, they suddenly chose compatibility with Cisco & MX 
>> instead of compat with old EX (whereas an option would have been fine).
>> The problem is: I'm not sure at all that it's really the case on Cisco 
>> gears...
> 
> The main reason we dropped all our Juniper EX's and went to Arista.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] JunOS 18, ELS vs non-ELS QinQ native vlan handling.

2019-07-22 Thread Olivier Benghozi
A 4-month-old thread, but (since I'm starting to test some QinQ stuff just now) 
I found both this thread and its «solution»:

PR1413700
«Untagged traffic is single-tagged in Q-in-Q scenario on EX4300 platforms»
«On EX4300 platforms except for EX4300-48MP with Q-in-Q configured, untagged 
traffic over S-VLAN is forwarded with a single tag, whose processing behavior 
is not in line with other products (e.g., the MX platforms) and other providers 
(e.g., Cisco). If Q-in-Q is configured between these devices with different 
processing behavior of untagged traffic, this might cause the untagged traffic 
loss.»

So if I understand correctly, they suddenly chose compatibility with Cisco & MX 
instead of compat with the old EX (whereas an option would have been fine).
The problem is: I'm not at all sure that it's really the case on Cisco gear...


> On 24 Mar 2019 at 16:36, Alexandre Snarskii  wrote:
> 
> On Fri, Mar 22, 2019 at 04:21:47PM -0400, Andrey Kostin wrote:
>> Hi Alexandre,
>> 
>> Did it pass frames without C-tag in Junos versions < 18?
> 
> Yes. 
> 
> tcpdump from upstream side when switch running 17.4R1-S6.1:
> 
> tcpdump: listening on ix1, link-type EN10MB (Ethernet), capture size 262144 
> bytes
> 17:59:15.742379 0c:c4:7a:93:a6:8e > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
> (0x8100), length 64: vlan 171, p 0, ethertype ARP, Ethernet (len 6), IPv4 
> (len 4), Request who-has 10.11.1.2 tell 10.11.1.1, length 46
> 17:59:16.773827 0c:c4:7a:93:a6:8e > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
> (0x8100), length 64: vlan 171, p 0, ethertype ARP, Ethernet (len 6), IPv4 
> (len 4), Request who-has 10.11.1.2 tell 10.11.1.1, length 46
> 
> exactly the same setup, switch upgraded to 18.3R1-S2.1:
> 
> 18:19:28.535143 0c:c4:7a:93:a6:8e > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
> (0x8100), length 68: vlan 171, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype 
> ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.11.1.2 tell 
> 10.11.1.1, length 46
> 18:19:29.598700 0c:c4:7a:93:a6:8e > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
> (0x8100), length 68: vlan 171, p 0, ethertype 802.1Q, vlan 1, p 0, ethertype 
> ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10.11.1.2 tell 
> 10.11.1.1, length 46
> 
> as you see, packets that were transferred with only S-vlan tag
> now encapsulated with both S-vlan and 'native' vlan..
> 
> 
>> 
>> Kind regards,
>> Andrey
>> 
>> Alexandre Snarskii писал 2019-03-22 13:03:
>>> Hi!
>>> 
>>> Looks like JunOS 18.something introduced an incompatibility of native
>>> vlan handling in QinQ scenario between ELS (qfx, ex2300) and non-ELS
>>> switches: when ELS switch forwards untagged frame to QinQ, it now adds
>>> two vlan tags (one specified as native for interface and S-vlan) 
>>> instead
>>> of just S-vlan as it is done by both non-ELS and 'older versions'.
>>> 
>>> As a result, if the other end of tunnel is non-ELS (or third-party)
>>> switch, it strips only S-vlan and originally untagged frame is passed
>>> with vlan tag :(
>>> 
>>> Are there any way to disable this additional tag insertion ?
>>> 
>>> PS: when frames sent in reverse direction, non-ELS switch adds only
>>> S-vlan and this frame correctly decapsulated and sent untagged.
>>> 
>>> ELS-side configuration (ex2300, 18.3R1-S1.4. also tested with
>>> qfx5100/5110):
>>> 
>>> [edit interfaces ge-0/0/0]
>>> flexible-vlan-tagging;
>>> native-vlan-id 1;
>>> mtu 9216;
>>> encapsulation extended-vlan-bridge;
>>> unit 0 {
>>>vlan-id-list 1-4094;
>>>input-vlan-map push;
>>>output-vlan-map pop;
>>> }
>>> 
>>> (when native-vlan-id is not configured, untagged frames are not
>>> accepted at all).
>>> 
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 802.3ad LAG between ASR 1002-X and Juniper MX204

2019-07-20 Thread Olivier Benghozi
We usually prefer LAGs here (with microBFD on backbone links); but any horror 
stories to share?
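
For reference, the micro-BFD part is just a few lines under the ae interface; addresses
and timers below are placeholders:

interfaces {
    ae0 {
        aggregated-ether-options {
            bfd-liveness-detection {
                minimum-interval 300;
                multiplier 3;
                local-address 192.0.2.1;
                neighbor 192.0.2.2;
            }
        }
    }
}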


> On 20 Jul 2019 at 12:06, Mark Tinka  wrote:
> 
> We now restrict LAG's to router-switch 802.1Q trunks.
> On backbone links, we've found regular IP ECMP to be more reliable than
> LAG's.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 802.3ad LAG between ASR 1002-X and Juniper MX204

2019-07-19 Thread Olivier Benghozi
Yes, you'd better drop all the hash+loadbalance+linkindex conf (by the way, on 
MX the "hash-key" knob is only for DPC cards, 10+ years old).

However, about the LAG itself: if you want something reliable you really should 
use LACP instead of a static LAG.
Static LAGs: a good way to get your traffic lost...
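
The LACP part itself is tiny; a minimal sketch (ae number, member port and the fast
timer are just the usual choices here, not requirements):

interfaces {
    ae0 {
        aggregated-ether-options {
            lacp {
                active;
                periodic fast;
            }
        }
    }
    xe-0/0/0 {
        gigether-options {
            802.3ad ae0;
        }
    }
}

(plus chassis aggregated-devices ethernet device-count to create the ae interfaces).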

> On 19 Jul 2019 at 22:02, Gert Doering  wrote:
> 
> On Fri, Jul 19, 2019 at 07:56:47PM +, Eric Van Tol wrote:
>> On 7/19/19, 3:40 PM, "Gert Doering"  wrote:
>>That sounds a bit weird... why should the device care how the other
>>end balances its packets?  Never heard anyone state this, and I can't
>>come up with a reason why.
>> 
>> *sigh* 
>> 
>> I'd been focusing way too much on the config portion of the documentation 
>> that I completely skimmed over the very first paragraph:
>> 
>> "MX Series routers with Aggregated Ethernet PICs support symmetrical
>> load balancing on an 802.3ad LAG. This feature is significant when
>> two MX Series routers are connected transparently through deep
>> packet inspection (DPI) devices over an LAG bundle.
> 
> Yes, *that* makes total sense :-)  (I was thinking about "is it something
> with stateful inspection?" but since this - inside MX or Cisco - usually
> operates "on the ae/port-channel level" and not the individual member,
> it didn't make sense either)

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] PE-CE BGP announcements

2019-03-07 Thread Olivier Benghozi
Really sure of your export policy when removed from the neighbour (that is, any 
policy under the protocol or the group) ?

show bgp neighbor exact-instance foo 10.108.35.254 | match export


Any NO-EXPORT community attached on the route?

> Le 7 mars 2019 à 20:04, Jason Lixfeld  a écrit :
> 
> My understanding is that since this is a BGP prefix, it’s default export 
> policy is to advertise all active BGP routes to all BGP speakers.  But, to 
> try and work through whether it was an export policy issue anyway, I 
> deactivated the export policy on the session to 10.108.35.254, which was 
> ineffective.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Hyper Mode on MX

2019-03-07 Thread Olivier Benghozi
By the way, HyperMode is only useful if you expect very high throughput with very 
small packets (none of the MPCs are line-rate with very small packets, but 
HyperMode brings them closer).
Your Juniper representative may show you a line-rate performance vs. packet size 
graph with/without HyperMode to help the decision.

Note: not your case, but HyperMode is useless on the MX204 (which has some other 
throughput limitations, discussed on this mailing list and in Juniper KBs).
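
For reference, enabling it is roughly the following (a sketch, assuming enhanced-ip network services; if memory serves, a reboot is required):

set chassis network-services enhanced-ip
set forwarding-options hyper-mode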

> Le 7 mars 2019 à 12:18, Nathan Ward  a écrit :
> 
>> On 7/03/2019, at 10:40 PM, Franz Georg Köhler  wrote:
>> 
>> I wonder if it is gererally a good idea to enable HyperMode on MX or if
>> there are reasons not do do so?
>> 
>> We are currently running MX960 with FPC7.
> 
> https://www.juniper.net/documentation/en_US/junos/topics/concept/forwarding-options-hyper-mode-overview.html
>  
> 
> 
> There are a bunch of features you cannot use if you enable hyper mode.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Finding drops

2019-01-31 Thread Olivier Benghozi
Interesting to see that Hyper-mode is useless on MX204, by the way (it's 
expected to do something on MPC7).

> Le 31 janv. 2019 à 16:46, adamv0...@netconsultings.com a écrit :
> 
> Hmm interesting, so it's capped at the WA block then not on the ASIC, good to 
> know.
> On MPC7s we did not run into this issue but rather to the chip overload 
> problem
> From my notes:
> MPC7 24x10GE per PFE that is 2x212.773Mpps (425.546Mpps two directions) -> 
> 17.73Mpps per 10GE
> MPC7 20x10GE per PFE that is 2x212.773Mpps (425.546Mpps two directions) -> 
> 21.28Mpps per 10GE
> Just for reference:
> Line-rate 10Gbps@L2(64B)@L1(84B) is 29.761Mpps (two directions
> 
> Under NDA you can get what juniper tested internally, some graphs etc..
> But it's better to do your own testing (in a specific use case)
> 
> See the full linerate performance is not there so that you can send 64b 
> packets at 100G no one would do that -but it's terher to give you some 
> headroom to send packets of normal size and do some processing while 
> maintaining the performance.
> So yes with 100GE capable chips you have to be careful nowadays on what 
> features you enable. 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DNS Flag Day

2019-01-25 Thread Olivier Benghozi
It would mean that they run something older than JunOS 10.2, which is a 
prehistoric release and would be criminal in terms of security.
Anyway, putting stateful firewalls in front of DNS servers is nonsense to begin 
with.

> Le 25 janv. 2019 à 13:06, Christian Scholz  a écrit :
> 
> What they told you sounds like bullshit to me. From 10.2 on there are no 
> special settings required. Maybe they don’t know how to do it?
> 
> So I guess they are just very lazy or don’t know better and blame the 
> firewall... I pray for you that they don’t run Code below 10.2...
> 
> https://kb.juniper.net/InfoCenter/index?page=content=KB23569=SRX_5600_1=LIST
> 
> 
> Am 25.01.2019 um 12:53 schrieb sth...@nethelp.no:
> 
>>> When doing some investigation for the upcoming DNS Flag Day 
>>> (https://dnsflagday.net: February 1st 2019) I got some bad news from one of 
>>> the service providers: they use Juniper SRX firewalls, and claim that they 
>>> can't properly support EDNS because of a bug in their SRX firewalls. This 
>>> seems outrageous to me. Is this just because they haven't upgraded their 
>>> JunOS for years, they're running ancient DNS server software, or is there 
>>> really a problem?
>> 
>> See
>> 
>> https://mailman.nanog.org/pipermail/nanog/2019-January/099180.html
>> 
>> "Juniper and Checkpoint have newer code that doesn't do this."

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Ex2300 for branch office

2019-01-23 Thread Olivier Benghozi
A few elements:

EX2300: Broadcom instead of Marvell; CPU and memory are now decent (no more slow 
commits). Fans seem to be just a little noisier than the 2200's.

- Worse (compared to EX2200): no VRF ; Virtual-Chassis now needs a licence 
(honour based) ; less space for ACL/firewall-filters (Broadcom TCAM)

- Better: of course the 4 x SFP/SFP+ 1G/10G


> Le 23 janv. 2019 à 16:16, Event Script  a écrit :
> 
> Anyone used ex2300 switches?  Curious if they are as painfully slow as the
> 2200 for commits?  Any other feedback?  Do they seem to be noisy or quiet?
> Any issues with them?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Finding drops

2019-01-22 Thread Olivier Benghozi
My 2 cents: it could be interesting to check if running the system in 
hyper-mode makes a difference (that should normally be expected).

> Le 22 janv. 2019 à 20:42, adamv0...@netconsultings.com a écrit :
> 
> That sort of indicates that for the 64B stream the packets are dropped by the 
> platform -do you get the confirmation on the RX end of the tester about the 
> missing packets? Not sure if this is about misaligned counters or actually 
> about dropped packets?
> 
> How I read your test is that presumably this is 40G in and 40G out the same 
> PFE (back to back packets @ 64B or 100B packets) 
> So we should just consider single PFE performance but still the resulting PPS 
> rate is nwhere near the theoretical PPS budget.
> How are the PFEs on 204 linked together (any sacrifices in the PFE BW/PPS 
> budget to account for the fabric)? On MPC7E all 4 PFEs would be connected via 
> fabric.  
> So nothing really adds up here...  shouldn't be happening -not at these rates

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Running MX480 without craft interface

2019-01-16 Thread Olivier Benghozi
It is.

https://www.juniper.net/documentation/en_US/release-independent/junos/topics/concept/mx480-fru-overview.html
 


https://www.juniper.net/documentation/en_US/release-independent/junos/topics/task/installation/craft-interface-mx480-installing.html
 



It seems like it's just a panel with LEDs and connectors, and nothing to be 
detected at all...

> Le 16 janv. 2019 à 13:16, Alex D.  a écrit :
> 
> i have to upgrade some router chassis from MX240 to MX480. Therefore i bought 
> some MX480 spare chassis with high-capacity fantrays and additional 
> power-suplies. I erroneously assumed, that the MX480 craft interface is 
> included in the spare chassis, but unfortunately it wasn't and had to be 
> ordered separately. Does anybody know if the craft interface is "hot 
> swapable" and i can initialy run the MX480 without it ???
> I would order the missing craft interfaces as soon as possible and install 
> them afterwards without rebooting the router (if possible).

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SNMP_EVLIB_FAILURE - snmp not working anymore

2018-12-20 Thread Olivier Benghozi
PR1270686

restart statistics-service

> Le 21 déc. 2018 à 01:23, Jeff Meyers  a écrit :
> 
> Dec 21 01:20:40  fra4-cr2 mib2d[67435]: SNMP_EVLIB_FAILURE: PFED ran out of 
> transfer credits with PFE.Failed to get stats. ifl index: 373
> 
> I already did a snmp process restart without any success. Google doesn't 
> reveal anything helpful to me. Next thing I want to try is a manual RE 
> failover but with maintenance notifications, I cannot do that instantly. Did 
> anyone here see this before and does by any chance know a solution for this?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] deleting ntp server from config, perhaps a bug?

2018-09-27 Thread Olivier Benghozi
Works as expected here (16.1R7)...

> Le 27 sept. 2018 à 13:43, Drew Weaver  a écrit :
> 
> I added 0.pool.ntp.org, 1.pool.ntp.org, 2.pool.ntp.org, 3.pool.ntp.org to 
> system ntp on an MX80 running JunOS 15.
> 
> [edit system ntp]
> drew@charlie# show
> server 216.230.228.242;
> server 45.79.109.111;
> server 172.98.193.44;
> server 69.195.159.158;
> 
> I need to deactivate/delete a few of these:
> 
> [edit system ntp]
> drew@charlie# delete server 216.230.228.242
> warning: statement not found
> 
> drew@charlie# deactivate server 216.230.228.242
> warning: statement not found
> 
> Is there any way to do this other than simply deleting the entire block and 
> starting over?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ether-options vs gigether-options in MX series

2018-09-18 Thread Olivier Benghozi
On MX, SRX, PTX, ACX, it's «gigether-options» for all ports.
On EX, QFX, it's «ether-options» for all ports.

Ridiculous.

> Le 18 sept. 2018 à 17:58, Drew Weaver  a écrit :
> 
> Does that mean they're the same, even for xe interfaces?
> 
> -Original Message-
> From: juniper-nsp  On Behalf Of Olivier 
> Benghozi
> Sent: Tuesday, September 18, 2018 11:57 AM
> To: Juniper List 
> Subject: Re: [j-nsp] ether-options vs gigether-options in MX series
> 
> ether-options -> gigether-options
> 
>> Le 18 sept. 2018 à 17:47, Drew Weaver  a écrit :
>> 
>> Greetings,
>> 
>> I am attempting to create a link aggregation on an MX80.
>> 
>> Reading the documentation it indicates:
>> 
>> ether-options {
>>   802.3ad ae0;
>>   }
>> 
>> To an interface will add that physical link to an aggregation.
>> 
>> However, when I use that and look at the configuration it says:
>> 
>> ether-options {
>>   ##
>>   ## Warning: This feature can be configured only in Ethernet LAN services 
>> mode
>>   ## Warning: This feature can be configured only in Ethernet LAN services 
>> mode
>>   ## Warning: This feature can be configured only in Ethernet LAN services 
>> mode
>>   ## Warning: This feature can be configured only in Ethernet LAN services 
>> mode
>>   ## Warning: This feature can be configured only in Ethernet LAN services 
>> mode
>>   ##
>>   802.3ad ae0;
>> }
>> If I use gigether-options it accepts it.
>> 
>> What is the correct way of doing this on an MX80 running v 15?
>> 
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net 
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ether-options vs gigether-options in MX series

2018-09-18 Thread Olivier Benghozi
ether-options -> gigether-options

> Le 18 sept. 2018 à 17:47, Drew Weaver  a écrit :
> 
> Greetings,
> 
> I am attempting to create a link aggregation on an MX80.
> 
> Reading the documentation it indicates:
> 
> ether-options {
>802.3ad ae0;
>}
> 
> To an interface will add that physical link to an aggregation.
> 
> However, when I use that and look at the configuration it says:
> 
> ether-options {
>##
>## Warning: This feature can be configured only in Ethernet LAN services 
> mode
>## Warning: This feature can be configured only in Ethernet LAN services 
> mode
>## Warning: This feature can be configured only in Ethernet LAN services 
> mode
>## Warning: This feature can be configured only in Ethernet LAN services 
> mode
>## Warning: This feature can be configured only in Ethernet LAN services 
> mode
>##
>802.3ad ae0;
> }
> If I use gigether-options it accepts it.
> 
> What is the correct way of doing this on an MX80 running v 15?
> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] flow sampling aggregated interfaces

2018-09-06 Thread Olivier Benghozi
Flow sampling works on the address family of the layer-3 subinterface, so it goes 
under "unit x family y", whether the unit sits on an ae or on a physical 
interface (since you want to sample all the traffic):

set interfaces ae4 unit 0 family inet sampling input
set interfaces ae5 unit 0 family inet sampling input
...

However, the inline sampling «engine» must be activated on both FPCs under 
[edit chassis], and you'll probably activate it on all FPCs of your chassis using 
an apply-group:

set groups chassis-fpc-netflow chassis fpc <*> sampling-instance sample-1
set groups chassis-fpc-netflow chassis fpc <*> inline-services flex-flow-sizing
set chassis fpc 0 apply-groups chassis-fpc-netflow
set chassis fpc 1 apply-groups chassis-fpc-netflow
set chassis fpc 2 apply-groups chassis-fpc-netflow
...

Flex-flow-sizing has existed since 15.1F5; with previous versions you must 
statically and manually partition the inline-sampling space.
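
Once committed, flow creation can be checked per FPC with something like this (assuming FPC 0):

show services accounting flow inline-jflow fpc-slot 0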

> Le 6 sept. 2018 à 17:53, Nelson, Brian  a écrit :
> 
> With an MX480 running 15.1, can flow sampling be configured on an
> aggregated interface?
> All the examples I find are only applied to logical units of physical
> interfaces. The documentation implies an ae interface is supported.
> 
> Since the aggregated interfaces use physical interfaces from mic/fpc 0/0
> and 1/0, will I have to configure the same sampling instance on both fpc?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LSP's with IPV6 on Juniper

2018-08-29 Thread Olivier Benghozi
For 6PE you have to:
- delete the iBGP ipv6 groups
- add family ipv6 labeled-unicast explicit-null to the IPv4 iBGP groups
- add ipv6-tunneling to protocol mpls.
- make sure your IGP is not advertising IPv6 addresses

This is the way it's configured, with either RSVP-TE or LDP.
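
A minimal sketch, assuming an existing IPv4 iBGP group named IBGP-V4 (hypothetical name):

set protocols bgp group IBGP-V4 family inet unicast
set protocols bgp group IBGP-V4 family inet6 labeled-unicast explicit-null
set protocols mpls ipv6-tunneling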

> Le 29 août 2018 à 15:10, craig washington  a 
> écrit :
> 
> When I set this up in the lab (logical systems) and followed the Juniper 
> documentation for setting up 6PE the IPV6 prefixes didn't resolve to LSP's.
> The documentation says to add labeled unicast with explicit null and 
> tunneling.
> I had 2 groups, one for v4 and one for v6. I added the commands to v4 group 
> and didn't see a change.
> I removed it all and tried adding it to v6 group and no change.
> The only way I got it to work was with mpls tunneling for v6 and on the 
> export policy for the v6 group I changed the next hop from self to the v4 
> address of the advertising PE.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LSP's with IPV6 on Juniper

2018-08-27 Thread Olivier Benghozi
In global we have 6PE.
In VRF we have 6VPE.
Just works so far.

And yes, the MPLS control plane uses only IPv4: the interconnections between 
routers are IPv4, LDP uses IPv4, the IGP uses IPv4, and IPv6 is really announced 
over specific AFI/SAFI (labeled-unicast IPv6 for 6PE, VPNv6 for 6VPE) within IPv4 
MP-iBGP sessions; but it doesn't matter.

Of course the actual IPv6 loopbacks won't go into inet6.3, since they are not used 
to resolve the routes (you will see your IPv4 loopbacks as IPv4-mapped IPv6 
addresses). The next-hops are IPv4, but again, it doesn't matter; only the result 
matters: it works :)

You don't explicitly "change" the next-hop of IPv6 routes using policies, you just 
use next-hop self and that's it.
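
A minimal sketch of that, with hypothetical policy and group names:

set policy-options policy-statement NHS term all then next-hop self
set protocols bgp group IBGP-V4 export NHS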

> Le 27 août 2018 à 18:39, craig washington  a 
> écrit :
> 
> Hello all.
> 
> Wondering if anyone is using MPLS with IPV6?
> 
> I have read on 6PE and the vpn counterpart but these all seem to take into 
> account that the CORE isn't running IPV6?
> 
> My question is how can we get the ACTUAL IPV6 loopback addresses into inet6.3 
> table? Would I need to do a rib import for directly connected?
> 
> If you run "ipv6-tunneling" this seems to only work if the next-hop is an 
> IPV4 address. (next-hop self)
> 
> I also messed around with changing the next-hop on the v6 export policy to 
> the IPV4 loopback and this works too but figured there should be a different 
> way?
> 
> So overall, I am trying to find a way for v6 routes to use the same LSP's as 
> v4 without changing the next hop to a v4 address.
> 
> Hope this makes sense 
> 
> 
> Any feedback is much appreciated.
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Carrier interfaces and hold timers

2018-08-15 Thread Olivier Benghozi
That's not the point here; the point here is:
«to deal with their link constantly flapping».

A constantly flapping link must be either fixed or decommissioned.


> On 16 aug 2018 at 03:23, Luis Balbinot  wrote :
> 
> Sometimes carriers protect optical circuits using inexpensive optical
> switches that have longer switching delays (>50ms). In these cases I'd
> understand their request for a longer hold-time. But 3 seconds is a lot.
> 
> On Wed, 15 Aug 2018 at 20:02 Jonathan Call  wrote:
> 
>> For the first time in my experience I have a carrier asking me to
>> implement 3 second hold timers on their interface to deal with their link
>> constantly flapping.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Carrier interfaces and hold timers

2018-08-15 Thread Olivier Benghozi
In some cases we have used hold timers to delay bringing an interface up, but 
never to delay bringing it down (if it's down, it's down; there are technologies 
to fast-reroute around it).
But a link is not expected to flap under normal conditions. If it flaps, it's 
broken (and we all know it happens).
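
For the record, the up-only flavour looks like this (illustrative interface and values, in milliseconds):

set interfaces xe-0/0/0 hold-time up 2000 down 0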

This being said, if you have a carrier explaining that you should expect only 
crap from them, you should:
- name and shame ;
- throw their services away ;
because they are clearly playing you for a sucker.


About the MX80: it doesn't lack buffer space, it lacks RAM and CPU.


> On 16 aug 2018 at 01:01, Jonathan Call  wrote :
> 
> Anyone have experience with hold timers?
> 
> For the first time in my experience I have a carrier asking me to implement 3 
> second hold timers on their interface to deal with their link constantly 
> flapping. They're citing this document as proof that it needs to be done:
> 
> https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/hold-time-edit-interfaces.html
> 
> I'm extremely dubious of this requirement since I've never had a carrier ask 
> for this and our router is a pretty old MX80 which does not have a lot of 
> buffer space.  But then again, maybe packet drops due to buffer overflow are 
> better than carrier transitions and the resulting BGP teardown and rebuild.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ACL for lo0 template/example comprehensive list of 'things to think about'?

2018-07-11 Thread Olivier Benghozi
Yes, I was really talking about "payload-protocol", not "protocol" :)
And this is the point, it didn't work on lo0 whereas it works on "physical" 
interfaces.

> Le 11 juil. 2018 à 21:14, Jay Ford  a écrit :
> 
> You might want "payload-protocol" for IPv6, except where you really want 
> "next-header".  This is a case where there's not a definite single functional 
> mapping from IPv4 to IPv6.
> 
> 
> Jay Ford, Network Engineering Group, Information Technology Services
> University of Iowa, Iowa City, IA 52242
> email: jay-f...@uiowa.edu, phone: 319-335-
> 
> On Wed, 11 Jul 2018, Olivier Benghozi wrote:
>> One thing to think about, in IPv6:
>> On MX, one can use "match protocol" (with Trio / MPC cards).
>> But it's not supported on lo0 filters, where you were / probably still are 
>> restricted to "match next-header", in order to have a filter working as 
>> expected.
>> 
>>> Le 11 juil. 2018 à 20:17, Drew Weaver  a écrit :
>>> 
>>> Is there a list of best practices or 'things to think about' when 
>>> constructing a firewall filter for a loopback on an MX series router 
>>> running version 15 of Junos?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ACL for lo0 template/example comprehensive list of 'things to think about'?

2018-07-11 Thread Olivier Benghozi
One thing to think about, in IPv6:
On MX, one can use "match protocol" (with Trio / MPC cards).
But it's not supported on lo0 filters, where you were / probably still are 
restricted to "match next-header", in order to have a filter working as 
expected.
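
A minimal sketch of such a term (hypothetical filter and term names):

set firewall family inet6 filter PROTECT-RE-V6 term ssh from next-header tcp
set firewall family inet6 filter PROTECT-RE-V6 term ssh from destination-port ssh
set firewall family inet6 filter PROTECT-RE-V6 term ssh then accept
set interfaces lo0 unit 0 family inet6 filter input PROTECT-RE-V6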

> Le 11 juil. 2018 à 20:17, Drew Weaver  a écrit :
> 
> Is there a list of best practices or 'things to think about' when 
> constructing a firewall filter for a loopback on an MX series router running 
> version 15 of Junos?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] inline jflow/srrd memory use/size

2018-05-31 Thread Olivier Benghozi
SRRD mem size should be related to the route table size, from what I 
understood...

On an MX480 in 16.1R with DFZ in VRF:

> show system processes extensive | match srrd 
 5174 root 1  200  1220M   509M select  3  30:36   0.00% srrd


Not sure an MX104 is the best gear to run DFZ + inline jflow...

> Le 31 mai 2018 à 20:40, Darrell Budic  a 
> écrit :
> 
> How much memory should I expect srrd to use? Should it be related to the 
> route table size, the # of flows, or something?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] advertise-from-main-vpn-tables and Hub VRFs (was: KB20870 workaround creates problems with Hub and Spoke) downstream hubs?

2018-05-29 Thread Olivier Benghozi
I guess you have an explicit match for those routes in your VRF export policy 
for the downstream VRF instance ?

> On 29 may 2018 at 11:15, Sebastian Wiesinger  wrote :
> b) with advertise-from-main-vpn-tables
> 
> [Hub instance] -> [Downstream hub instance] --> [bgp.l3vpn.0] -> MP-BGP 
> neighbors
> 
> And there it breaks. Routes from the hub instance that get import into
> the downstream hub instance are not exported to the bgp.l3vpn.0 table
> and thus do not get advertised to other MP-BGP neighbors.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204

2018-05-16 Thread Olivier Benghozi
And additionally, 24x10g is the default when you unpack and plug the box.

> Le 16 mai 2018 à 18:28, Olivier Benghozi <olivier.bengh...@wifirst.fr> a 
> écrit :
> 
> That port config tool sux ; but you can have 24x10g if you turn on the « per 
> PIC» small selector.
> 
>> Le 16 mai 2018 à 18:15, Bill Blackford <bblackf...@gmail.com> a écrit :
>> 
>> So that port config tool. It looks like I can't do 24 10g. However, I can do 
>> 20 10g and a single 100g which makes no sense to me, but then again I know 
>> nothing about the intricacies of modern ASIC design.
>> 
>> Sent from my iPhone
>> 
>>> On May 16, 2018, at 07:43, Mark Tinka <mark.ti...@seacom.mu> wrote:
>>> 
>>> 
>>> 
>>>> On 16/May/18 16:31, Luca Salvatore via juniper-nsp wrote:
>>>> 
>>>> It is feasible that we'll push more than 200Gb/s
>>>> Any idea what performance is like above that level?
>>> 
>>> Should be fine.
>>> 
>>> The chipset is the 3rd generation Trio EA NPU. Same one used in the
>>> MX10003 MPC; good for 400Gbps.
>>> 
>>> Mark.
>>> ___
>>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204

2018-05-16 Thread Olivier Benghozi
That port config tool sux ; but you can have 24x10g if you turn on the « per 
PIC» small selector.

> Le 16 mai 2018 à 18:15, Bill Blackford  a écrit :
> 
> So that port config tool. It looks like I can't do 24 10g. However, I can do 
> 20 10g and a single 100g which makes no sense to me, but then again I know 
> nothing about the intricacies of modern ASIC design.
> 
> Sent from my iPhone
> 
>> On May 16, 2018, at 07:43, Mark Tinka  wrote:
>> 
>> 
>> 
>>> On 16/May/18 16:31, Luca Salvatore via juniper-nsp wrote:
>>> 
>>> It is feasible that we'll push more than 200Gb/s
>>> Any idea what performance is like above that level?
>> 
>> Should be fine.
>> 
>> The chipset is the 3rd generation Trio EA NPU. Same one used in the
>> MX10003 MPC; good for 400Gbps.
>> 
>> Mark.
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204

2018-05-14 Thread Olivier Benghozi
Looks like it will work, in « PIC Level » configuration (both PICs configured 
as « 10GE » – and it seems to be the default).
The doc is crappy and the port checker tool is a nice piece of junk, however.

> On 15 may 2018 at 00:15, Bill Blackford  wrote :
> 
> I'm looking at cost effective replacements for MX80s with fairly pedestrian 
> features. No BNG, CGNAT, etc. GRE and Flow yes. 
> That port config fsckery is odd though. Their docs show you can get 24 XE 
> ports. (8 SFPP + 4*4 on the multi rate QFSP-28's). Is this not the case?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Difference between MPC4E-3D-32XGE-RB and MPC4E-3D-32XGE-SFPP ?

2018-05-02 Thread Olivier Benghozi
https://www.juniper.net/documentation/en_US/junos/topics/concept/chassis-license-mode-overview.html
 

but not very clear...

> Le 1 mai 2018 à 12:32, Nikolas Geyer  a écrit :
> 
> Can’t remember the exact numbers but the non-RB card is targeted at MPLS core 
> applications where it’s just high density label switching. Won’t take a full 
> routing table and has reduced L3VPN numbers. Ask your AM/SE for the specifics.
> 
> Sent from my iPhone
> 
>> On 30 Apr 2018, at 10:34 am, Brijesh Patel  wrote:
>> 
>> Hello Members,
>> 
>> Any idea what is Difference between MPC4E-3D-32XGE-RB  and
>> MPC4E-3D-32XGE-SFPP ?
>> 
>> Juniper PDf says :
>> 
>> MPC4E-3D-32XGE-SFPP 32x10GbE, full scale L2/L2.5 and *reduced scale L3
>> features*
>> and
>> MPC4E-3D-32XGE-RB 32XGbE SFPP ports, full scale L2/L2.5,
>> * L3 and L3VPN features*
>> 
>> now question is *what is reduced scale L3 featurs and L3vpn features ?*
>> 
>> *Many Thanks,*
>> 
>> *Brijesh Patel*

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX104 and NetFlow - Any horror story to share?

2018-04-30 Thread Olivier Benghozi
Hi Alain,

While you seem to already be kind of suicidal (5 full-table peers on an MX104), 
on an MX you must not use NetFlow v9 (CPU-based) but inline IPFIX (Trio/PFE-based) 
instead.
I suppose NetFlow v9 on an MX104 could quickly become an interesting horror story 
with real traffic, due to its ridiculously slow CPU.
With inline IPFIX it should just take some more RAM, and FIB updates could be a 
bit slower.

By the way, on the MX104 you don't configure «fpc» (bigger MXs) or «tfeb» (MX80) 
in the chassis hierarchy, but «afeb», so you can remove your fpc line and fix your 
tfeb line.

So you'll need something like this under services, instead of version9:
set services flow-monitoring version-ipfix template ipv4 template-refresh-rate
set services flow-monitoring version-ipfix template ipv4 option-refresh-rate
set services flow-monitoring version-ipfix template ipv4 ipv4-template

And these ones too, to allocate some memory for the flows in the Trio and to 
define how it will speaks with the collector:
set chassis afeb slot 0 inline-services flex-flow-sizing
set forwarding-options sampling instance NETFLOW-SI family inet output 
inline-jflow source-address a.b.c.d

Of course you'll remove the line with «output flow-server  source ».



I don't see why you quoted the mail from Brijesh Patel about the Routing 
licences, by the way :P


Olivier

> On 30 apr. 2018 at 21:34, Alain Hebert  wrote :
> 
> 
> Anyone has any horror stories with something similar to what we're about to 
> do?

> We're planning to turn up the following Netflow config (see below) on our 
> MX104s (while we wait for our new MX960 =D), it worked well with everything 
> else (SRX mostly), the "*s**et chassis"* are making us wonder how high would 
> be the possibility to render those system unstable, at short and long term.
> 
> Thanks again for your time.
> 
> PS: We're using Elastiflow, and its working great for our needs atm.
> 
> 
> -- A bit of context
> 
> Model: mx104
> Junos: 16.1R4-S1.3
> 
> They're routing about 20Gbps atm, with 5 full tables peers, ~0.20 load 
> average, and 700MB mem free.
> 
> 
> -- The Netflow config
> 
> *set chassis tfeb0 slot 0 sampling-instance NETFLOW-SI*
> 
> *set chassis fpc 1 sampling-instance NETFLOW-SI*
> 
> set services flow-monitoring version9 template FM-V9 option-refresh-rate 
> seconds 25
> set services flow-monitoring version9 template FM-V9 template-refresh-rate 
> seconds 15
> set services flow-monitoring version9 template FM-V9 ipv4-template
> 
> set forwarding-options sampling instance NETFLOW-SI input rate 1 run-length 0
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> flow-server  port 2055
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> flow-server  source 
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> flow-server  version9 template FM-V9
> set forwarding-options sampling instance NETFLOW-SI family inet output 
> inline-jflow source-address 
> 
> set interfaces  unit  family inet sampling input
> set interfaces  unit  family inet sampling output

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] mx960 junos upgrade fail

2018-04-06 Thread Olivier Benghozi
Not sure you really can ISSU update between such versions...

About the OS file, you want the vmhost 64-bit one.
The Net version is for a fresh install using PXE netboot.

And the "64 Bit-MX High-End Series" image is the one you would use with an RE-1800 
(which runs JunOS over FreeBSD directly on its hardware), while you have a more 
recent RE (like RE-MX-Xsomething) running JunOS over FreeBSD over KVM over Linux. 
In your case, this image can only update the JunOS+FreeBSD inside the KVM guest, 
but it won't update the Linux/KVM stuff (named "vmhost").
https://www.juniper.net/documentation/en_US/junos/topics/concept/re-mx-x6-x8-ptx-x8-overview.html
 


> Le 6 avr. 2018 à 17:54, Aaron Gould  a écrit :
> 
> I like the idea of being able to do a software upgrade in-service, so I might 
> go ahead and skip 16.1R3-S7 and go with MPC7E issu capable version 17.4R1   
> This is a brand new 100 gig ring of (5) MX960's that are not in production 
> yet... so now is the time for me to decide the starting junos version, etc.
> 
> Btw, as a side-note, I do not have the free space issue, if I do software 
> upgrade without issu option space issue is only when I use 
> in-service-upgrade 
> 
> Oliver, you mentioned "By the way, why one would want to updade to 16.1R3S* 
> whereas 16.1R6S2 is available"  ...but I'm actually trying to upgrade to 
> 16.1R3-S7...however, now alex had informed me that my MPC7E modules 
> aren't issu capable until 17.4R1, so I might go ahead with that version 
> instead.
> 
> Oliver, I was trying to use a file from juniper titled "64 Bit-MX High-End 
> Series" but are you telling me that I should be using one of the files titled 
> " VMHost 64-Bit"  ?
> 
> Also, what is the difference in " VMHost 64-Bit" and "VMHost Net 64-Bit" ?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] mx960 junos upgrade fail

2018-04-06 Thread Olivier Benghozi
https://www.juniper.net/documentation/en_US/junos/topics/concept/installation_upgrade.html
 


« Host upgrade—Use the junos-vmhost-install-x.tgz image upgrade. When you 
upgrade the host OS, you must specify the regular package in the request vmhost 
software add command. This is the recommended mode of upgrade because this 
method installs the host image along with the compatible Junos OS. »
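
In practice that means something like the following (hypothetical package filename):

request vmhost software add /var/tmp/junos-vmhost-install-mx-x86-64-16.1R6-S2.1.tgz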


By the way, why would one want to update to 16.1R3-S* when 16.1R6-S2 is 
available?


> Le 6 avr. 2018 à 16:32, Aaron Gould  a écrit :
> 
> ...then the vmhost is seen as too old ... wonder what that is all about ?
> 
> Checking vmhost version compatibility
> VMHost version too old for Junos
> ERROR: package junos-x86-64-16.1R3-S7.1 fails requirements

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper UDP Amplification Attack - UDP port 111 ?

2018-03-16 Thread Olivier Benghozi
So it most probably comes with the "upgraded Junos with FreeBSD 10", that is, 
15.1+ on MXs with Intel CPUs.

There's something fun described on PR1167786 about similar behaviour: "Due to 
Junos Release 15.1 enabling process rpcbind in FreeBSD by default, port 646 
might be grabbed by rpcbind on startup, which causes LDP sessions failing to 
come up."

My understanding: "we left everything by default"...

> Le 16 mars 2018 à 20:33, Aaron Gould  a écrit :
> 
> I see udp/tcp listening on 111 on MX960, but not on MX104 nor on ACX5048...

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] LCP keeps renegotiating on L2TP tunnel

2018-02-22 Thread Olivier Benghozi
You need to trace the L2TP packets on both sides.
"AVP" here refers to AVPs within L2TP control packets, not RADIUS attributes.
It's about the AsyncMap missing in the L2TP SLI packets (which are made to 
exchange async maps), in async-mode PPP over L2TP.
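
On the Junos side, something like this can help to see the host-bound control packets (illustrative interface name):

monitor traffic interface xe-0/0/0 matching "udp port 1701" size 1500 detail

plus traceoptions under [edit services l2tp] if you need the decoded AVPs.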

> On 22 feb 2018 at 07:23, Drikus Brits  wrote :
> 
> Heya Experts,
> 
> Need some input. We're changing houses.yeay..or not. We're in the process of 
> changing from Cisco PE's to Juniper, and as such DSL and 4G services are 
> first up on the migration. I'm stuck trying to get an MX104 to terminate l2tp 
> sessions with Cisco CPEs.
> 
> We have 2x scenarios where we have Cisco & Huawei CPE's doing normal dialer 
> sessions and Virtual-ppp interface configuration as per below to Cisco PEs 
> within VRFs and global etc. Scenario 1 is working with just DSL dialer 
> services terminating from carriers using L2TP tunnels from the carrier 
> LAC/BRAS boxes to our MX104s and then terminating pppoe subscribers via the 
> dialer interfaces. This works like a charm.
> 
> The second one isn't working so well, where we have 4g services out there on 
> an internet facing APN with each CPE terminating individual l2tp sessions to 
> our MX104s. On the Cisco CPE the debugs shows LCP trying to negotiate and 
> eventually fails, however on the MX, it shows what appears to be some 
> success. We've got radius servers pushing the AV pairs with the necessary 
> IP's, routes & vrfs, but rad requests aren't even hitting me yet.
> 
> MX:
> 
> drikusb@SYD-BB-01> show services l2tp tunnel
>   Local ID  Remote ID  Remote IP   Sessions  State
>   7991  25256  61.41.122.32:1701 1  Established
> 
> drikusb@SYD-BB-01> show subscribers
> Interface   IP Address/VLAN ID  User Name 
>  LS:RI
> si-0/0/0.3221229393   
> default:default
> 
> Cisco CPE:
> 
> Testing-dsltest-cpe#
> 
> 
> interface Virtual-PPP1
> ip address negotiated
> ip virtual-reassembly in
> keepalive 30
> ppp chap hostname 123456789011@4gL2TP
> ppp chap password keepmesecret
> no cdp enable
> pseudowire 192.168.100.10 123 encapsulation l2tpv2 pw-class pwclass1
> 
> 1187080: Feb 20 14:27:35: ppp0 PPP: Phase is ESTABLISHING
> 1187081: Feb 20 14:27:35: Vp1 PPP: Using default call direction
> 1187082: Feb 20 14:27:35: Vp1 PPP: Treating connection as a dedicated line
> 1187083: Feb 20 14:27:35: Vp1 PPP: Session handle[F23D] Session id[0]
> 1187084: Feb 20 14:27:35: Vp1 LCP: Event[OPEN] State[Initial to Starting]
> 1187085: Feb 20 14:27:35: Vp1 LCP: O CONFREQ [Starting] id 1 len 10
> 1187086: Feb 20 14:27:35: Vp1 LCP:MagicNumber 0x7A1DCA64 (0x05067A1DCA64)
> 1187087: Feb 20 14:27:35: Vp1 LCP: Event[UP] State[Starting to REQsent]
> 1187088: Feb 20 14:27:35: Vp1 LCP: I CONFREQ [REQsent] id 210 len 15
> 1187089: Feb 20 14:27:35: Vp1 LCP:AuthProto CHAP (0x0305C22305)
> 1187090: Feb 20 14:27:35: Vp1 LCP:MagicNumber 0x3F8C879E (0x05063F8C879E)
> 1187091: Feb 20 14:27:35: Vp1 LCP: O CONFACK [REQsent] id 210 len 15
> 1187092: Feb 20 14:27:35: Vp1 LCP:AuthProto CHAP (0x0305C22305)
> 1187093: Feb 20 14:27:35: Vp1 LCP:MagicNumber 0x3F8C879E (0x05063F8C879E)
> <<  repeated output as per above/below omitted >>
> 1187094: Feb 20 14:27:35: Vp1 LCP: Event[Receive ConfReq+] State[REQsent to 
> ACKsent]
> 1187095: Feb 20 14:27:37: Vp1 LCP: O CONFREQ [ACKsent] id 2 len 10
> 1187096: Feb 20 14:27:37: Vp1 LCP:MagicNumber 0x7A1DCA64 (0x05067A1DCA64)
> 1187097: Feb 20 14:27:37: Vp1 LCP: Event[Timeout+] State[ACKsent to ACKsent]
> 1187160: Feb 20 14:27:54: Vp1 LCP: O CONFACK [ACKsent] id 216 len 15
> 1187161: Feb 20 14:27:54: Vp1 LCP:AuthProto CHAP (0x0305C22305)
> 1187162: Feb 20 14:27:54: Vp1 LCP:MagicNumber 0x3F8C879E (0x05063F8C879E)
> 1187163: Feb 20 14:27:54: Vp1 LCP: Event[Receive ConfReq+] State[ACKsent to 
> ACKsent]
> 1187164: Feb 20 14:27:55: Vp1 PPP DISC: LCP failed to negotiate
> 1187165: Feb 20 14:27:55: Vp1 PPP: Sending Acct Event[Down] id[26B]
> 1187166: Feb 20 14:27:55: PPP: NET STOP send to AAA.
> 1187167: Feb 20 14:27:55: Vp1 LCP: Event[Timeout-] State[ACKsent to Stopped]
> 1187168: Feb 20 14:27:55: Vp1 LCP: Event[DOWN] State[Stopped to Starting]
> 1187169: Feb 20 14:27:55: Vp1 PPP: Phase is DOWN
> Testing-dsltest-cpe #
> 
> 
> MXConfig:
> set interfaces si-0/0/0 hierarchical-scheduler
> set interfaces si-0/0/0 encapsulation generic-services
> set interfaces si-0/0/0 unit 0 family inet
> set chassis fpc 0 pic 0 inline-services bandwidth 10g
> set dynamic-profiles dyn-lns-profile routing-instances 
> "$junos-routing-instance" interface "$junos-interface-name"
> set dynamic-profiles dyn-lns-profile routing-instances 
> "$junos-routing-instance" routing-options access route 
> $junos-framed-route-ip-address-prefix next-hop "$junos-framed-route-nexthop"
> set dynamic-profiles dyn-lns-profile routing-instances 
> "$junos-routing-instance" routing-options 

Re: [j-nsp] EX4550 15.1 VLAN Translation

2018-02-19 Thread Olivier Benghozi
Hi Jed,

the EX4550 doesn't use the "Enhanced Layer 2 Software / ELS" (basically, for 
Broadcom based EXs) ; that is, it uses the "legacy/former" EX config style (for 
Marvell based EXs), so this doc is not the right one.
You may have a look at this "legacy EX" KB article about VLAN translation:

https://kb.juniper.net/InfoCenter/index?page=content&id=KB16755 
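
From memory, the non-ELS translation syntax is roughly as follows (hypothetical VLAN name/IDs; check the KB for the exact statements):

set vlans v200 vlan-id 200
set vlans v200 interface ge-0/0/5.0 mapping 100 swap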



Olivier

> On 19 feb 2018 at 23:05, Jed Laundry  wrote :
> 
> Following the documentation at
> https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/qinq-tunneling-ex-series-cli-els.html,

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] KB20870 workaround creates problems with Hub and Spoke downstream hubs?

2018-02-15 Thread Olivier Benghozi
Note that our observations (and the workaround we found) were made when using 
sub-policies applied within export policies.


> On 15 feb 2018 at 10:33, Olivier Benghozi <olivier.bengh...@wifirst.fr> wrote 
> :
> 
> Now, if you see some VPN routes no longer advertised toward other PEs, it 
> probably means that your VRF export policies must be modified (and of course 
> the doc is silent about that).
> What we observed is that you can no longer rely on the classic "routing 
> policies accept BGP routes by default", translated here to "(e)BGP routes are 
> exported by default to other i-MP-BGP neighbors", probably since they are now 
> exported to another table bgp.l3vpn.0, not directly to other neighbors.
> So one must instead explicitly "accept" BGP routes in the VRF export policies 
> (in addition to setting RT ext-community).

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] KB20870 workaround creates problems with Hub and Spoke downstream hubs?

2018-02-15 Thread Olivier Benghozi
Hi Sebastian,

This is an old workaround by the way.
Simpler workaround: use the advertise-from-main-vpn-tables knob, available since 
12.3 (required if you have NSR anyway):

https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/advertise-from-main-vpn-table-edit-protocols-bgp.html
 

https://www.juniper.net/documentation/en_US/junos/topics/reference/requirements/nsr-system-requirements.html#nsr-bgp
 

And NSP-J archives https://lists.gt.net/nsp/juniper/56263#56263 


So you might add this knob and remove the phantom session.
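
That is simply:

set protocols bgp advertise-from-main-vpn-tables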


Now, if you see some VPN routes no longer advertised toward other PEs, it 
probably means that your VRF export policies must be modified (and of course 
the doc is silent about that).
What we observed is that you can no longer rely on the classic "routing 
policies accept BGP routes by default", translated here to "(e)BGP routes are 
exported by default to other i-MP-BGP neighbors", probably since they are now 
exported to another table bgp.l3vpn.0, not directly to other neighbors.
So one must instead explicitly "accept" BGP routes in the VRF export policies 
(in addition to setting RT ext-community).
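
For illustration, a sketch of such a VRF export policy (hypothetical policy and community names):

set policy-options policy-statement VRF-FOO-EXPORT term bgp from protocol bgp
set policy-options policy-statement VRF-FOO-EXPORT term bgp then community add RT-FOO
set policy-options policy-statement VRF-FOO-EXPORT term bgp then accept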


Olivier


> On 15 feb 2018 at 08:30, Sebastian Wiesinger  wrote :
> 
> we configured the workaround mentioned in KB20870 to prevent unwanted
> VPN BGP session flaps when configuring eBGP/route-reflector clients. A
> problem we noticed is that when using a Hub hub on the affected
> router and when a downstream hub is used as well, it seems that the
> downstream hub stops exporting any VRF routes to other PEs.
> 
> Has anyone else noticed this and maybe even have a workaround? We'll
> probably try and replicate it in the lab, but it looks strange. This
> occured with JunOS 13.3 and 16.1.
> 
> Background for KB20870:
> https://kb.juniper.net/InfoCenter/index?page=content=KB20870
> https://www.juniper.net/documentation/en_US/junos/topics/example/bgp-vpn-session-flap-prevention.html
> 
> Downstream Hub:
> https://www.juniper.net/documentation/en_US/junos/topics/example/vpn-hub-spoke-topologies-one-interface.html

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Single RE-S-X6-64G with "error: Unrecognized command (chassis-control)"

2018-02-13 Thread Olivier Benghozi
Since you created an Olive from an M/MX/T release, you know that Junos declares 
itself an "Olive" when it doesn't recognise the hardware it runs on as a known 
Juniper platform (well, except for vMX/vSRX/vRR/vWhatever, even if there was a bug 
in older releases where it nevertheless showed Olive).

Additionally, the RE-S-X6-64G IS a virtualised environment, where Junos runs 
inside a VM on a Linux host.
Reference:
https://kb.juniper.net/resources/sites/CUSTOMERSERVICE/content/live/TECHNOTES/0/TN303/en_US/NextGenRoutingEngine_Tech_Intro.pdf
 


The fact is that 15.1R has no support for this config.

> On 13 feb. 2018 at 15:35, Aaron Gould  wrote:
> 
> I've never seen mention of olive outside of GNS3/Virtual Environment.
> Perhaps, this is a learning opportunity from me.  Please let me know if
> there are times when actually hardware routers show Junos as "olive"

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Transit composite next hops

2018-02-13 Thread Olivier Benghozi

> On 13 feb. 2018 at 18:51, Luis Balbinot <l...@luisbalbinot.com> wrote :
> 
> What is even more misleading is that the MX accepts the transit configuration 
> and commits without warnings. I issued the commit on a standalone router but 
> tomorrow I'm going to setup a lab with 3 routers. 

Well, there are plenty of config knobs that JunOS will happily and silently 
accept on any platform even if unsupported :)
They will either do nothing, break something, or do what you expect but without 
Juniper support/endorsement.
Bet on one :P

> Some docs mention that MPC-only chassis like the MX80 come with CNHs 
> configured as the default, but that's only true for ingress EVPN. 

And in fact it's for all MXs.
Crappy doc.

> I'm still confused :-)

It's confusing.
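
For reference, the ingress flavour being discussed is the following knob (supported on MX); the transit flavour lives under the same chained-composite-next-hop hierarchy on the platforms that support it:

set routing-options forwarding-table chained-composite-next-hop ingress l3vpn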


> On Sun, 11 Feb 2018 at 19:56 Olivier Benghozi <olivier.bengh...@wifirst.fr 
> <mailto:olivier.bengh...@wifirst.fr>> wrote:
> Hi Luis,
> 
> I already wondered the same thing, and asked to our Juniper representative ; 
> the answer was that each family supports (and only supports) its specific 
> CCNH flavour:
> CCNH for ingress: MX
> CCNH for transit: PTX (I didn't asked for QFX10k).
> Olivier
> 
> > On 10 feb. 2018 at 19:17, Luis Balbinot <l...@luisbalbinot.com 
> > <mailto:l...@luisbalbinot.com>> wrote :
> >
> > I was reading about composite chained next hops and it was not clear to me
> > whether or not MX routers support them for transit traffic. According to
> > the doc bellow it's only a QFX10k/PTX thing:

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Single RE-S-X6-64G with "error: Unrecognized command (chassis-control)"

2018-02-12 Thread Olivier Benghozi
Hi Dave,

> On 12 feb 2018 at 23:41, Dave Peters - Terabit Systems 
>  wrote :
> Forgive my ignorance, but I've got an RE-S-X6-64G running in an MX480 (BP3) 
> with an SCBE2, version 15.1R6.7, and it has a chassis-control problem:

As this RE is not supported in 15.1R this seems legit, so I suppose you'd better 
install the latest 16.1R (as the end of engineering for 15.1 is in 4 months 
anyway).


> The show version command indicates "Model: olive," which leads me to believe 
> I've got a bad/wrong version, but Juniper's website says I'm good to go

No you're not: 15.1F is not 15.1R.


Olivier

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Transit composite next hops

2018-02-11 Thread Olivier Benghozi
Hi Luis,

I already wondered the same thing and asked our Juniper representative; the answer 
was that each family supports (and only supports) its specific CCNH flavour:
CCNH for ingress: MX
CCNH for transit: PTX (I didn't ask about QFX10k).
Olivier

> On 10 feb. 2018 at 19:17, Luis Balbinot  wrote :
> 
> I was reading about composite chained next hops and it was not clear to me
> whether or not MX routers support them for transit traffic. According to
> the doc bellow it's only a QFX10k/PTX thing:

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Prefix independent convergence and FIB backup path

2018-02-08 Thread Olivier Benghozi
Hi Mark,


Here (VPNv4/v6, BGP PIC Core + PIC Edge, no addpath as not supported in vpn 
AFI) we can see that, when possible:
an active eBGP path is backed up via an iBGP path
an active iBGP path is backed up via another iBGP path

We don't see:
an active iBGP path backed up via an inactive eBGP path
an active eBGP path backed up via another inactive eBGP path


I understand it works the way it was advertised for:
PIC Core: protect against another PE failure, as detected via IGP
PIC Edge: protect against a local PE-CE link failure (CE being an IP transit by 
example)
The key idea seems to be reacting immediately to a link loss that can be detected 
quickly, either locally or via the IGP; not to something signalled (potentially 
slowly) via iBGP.
However, protecting iBGP routes using eBGP paths makes sense, as a PE can be 
lost and quickly detected.

It would be interesting to check whether "protecting iBGP routes using eBGP ones" 
or "active eBGP using inactive eBGP" is implemented on Cisco IOS-XR gear in their 
BGP PIC implementation.


Note that in your case (in inet.0) there's no BGP PIC Edge feature involved; as I 
understand it, PIC Edge is just a special PIC feature needed for labelled paths 
toward the outside, and you can see that BGP PIC Core for inet already covers your 
eBGP routes in inet.0.


Also note that in your case, PE2 (at least when using NHS) cannot quickly detect a 
TRA1 loss anyway, so there's no use case here, in fact...

Of course you already know that having both TRA1 and TRA2 with the same 
localpref does the trick (even without addpath), but it's not what you intended 
to test :)


Olivier

> Le 8 févr. 2018 à 13:02, Mark Smith  a écrit :
> 
> Hi list,
> 
> Test topology below. 2x MX80 with dual ip transit (full table ~600k
> prefixes). TRA1 preferred over TRA2 (Localpref 200 set by PE1 import
> policy). Plain unlabeled inet.0, no mpls in use. In lab topology both
> transits belong to same AS65502.
> 
> What I'm trying to accomplish is somewhat faster failover time in case
> of primary transit failure. In case of no tuning the failover (FIB
> programming) can take up to 10 minutes.
> 
> 
> 
> | TRA1 || TRA2 |   AS65502
> 
>   | xe-1/3/0  | xe-1/3/0
> --- ---
> | PE1 | --ae0-- | PE2 |AS65501
> --- ---
>   |
> ---
> | test pc |
> ---
> 
> In the lab PE1 and PE2 are MX80s running 15.1R6.7.
> I have configured BGP add-path and PIC edge (routing-options protect
> core) on both PEs.
> All looks ok on PE1. Both primary and backup paths are installed in
> FIB. PE1 converges fast.
> The backup path is missing in PE2 FIB. When PE1-TRA1 cable is cut PE1
> quickly switches to backup path but PE2 does not and the result is a
> temporary routing loop between PE1 and PE2.
> If I switch the active transit to PE2 (set LP220 on TRA2 import on
> PE2, no other changes), the situation is reversed. All looks ok on PE2
> but not on PE1. So it looks like the PIC works only on the box
> connected to primary transit (=EBGP route is better than IBGP route).
> NHS/no-NHS on ibgp export does not have an effect. Is this a bug,
> feature, or am I doing something wrong?
> 
> I know that a better solution could be to get rid of full table and
> just use 2x default route from upstream... anyways I would like to get
> more familiar with PIC.
> 
> Stable situation, all ok on PE1:
> admin@PE1> show route table inet.0 8.8.8.8
> 
> inet.0: 607797 destinations, 1823329 routes (607797 active, 0
> holddown, 0 hidden)
> @ = Routing Use Only, # = Forwarding Use Only
> + = Active Route, - = Last Active, * = Both
> 
> 8.8.8.0/24 @[BGP/170] 05:03:44, localpref 200
>  AS path: 65502 65200 25091 15169 I,
> validation-state: unverified
>> to 10.100.100.133 via xe-1/3/0.0
>[BGP/170] 05:05:55, localpref 100, from 10.100.100.40
>  AS path: 65502 65200 25091 15169 I,
> validation-state: unverified
>> to 10.100.100.137 via ae0.0
>   #[Multipath/255] 05:02:54
>> to 10.100.100.133 via xe-1/3/0.0
>  to 10.100.100.137 via ae0.0
> 
> admin@PE1> show route forwarding-table destination 8.8.8.8 table
> default extensive
> Routing table: default.inet [Index 0]
> Internet:
> 
> Destination:  8.8.8.0/24
>  Route type: user
>  Route reference: 0   Route interface-index: 0
>  Multicast RPF nh index: 0
>  Flags: sent to PFE, rt nh decoupled
>  Next-hop type: unilist   Index: 1048575  Reference: 607767
>  Nexthop: 10.100.100.133
>  Next-hop type: unicast   Index: 826  Reference: 4
>  Next-hop interface: xe-1/3/0.0Weight: 0x1
>  Nexthop: 10.100.100.137
>  Next-hop type: unicast   Index: 827  Reference: 3
>  Next-hop interface: ae0.0 Weight: 0x4000
> 
> 
> But not on PE2:
> admin@PE2> show route table inet.0 8.8.8.8
> 
> inet.0: 607798 destinations, 1215564 routes (607798 active, 607766
> holddown, 0 hidden)
> @ = Routing Use 

Re: [j-nsp] Multicast through a switch

2018-01-09 Thread Olivier Benghozi
This is a bug, not a feature :P

> On 9 janv. 2018 at 11:00, Gert Doering  wrote :
> 
> Well.  Sort of.  EX3300 manages to apply IGMP-snooping logic to 224.0.0.x
> multicast, which by definition is link-local and is not(!) IGMP-queried
> for - thus breaking EIGRP routing, for example.  And annoyingly this
> feature is on-by-default.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Understanding limitations of various MX104 bundles

2018-01-05 Thread Olivier Benghozi
The MX204 is probably not that expensive compared to a fully licensed MX104, I 
guess.
And while the MX204 doesn't have RE redundancy, it supports NSR, so I understand 
it runs two JunOS VMs on a Wind River Linux hypervisor, I guess.

> On 5 janv. 2018 à 15:54, Edward Dore  
> wrote :
> 
> The MX204 seems to be amazing value for money if it has the right port 
> combination for your workload (i.e. not great if you need lots of 1GE). The 
> RE is also significantly more capable than the somewhat underpowered one in 
> the MX104.
> 
> For our use case (border router terminating peering/transit), having dual RE 
> isn’t particularly important as we achieve our redundancy using separate 
> routers. YMMV.
> 
> From: Josh Baird 
> Date: Friday, 5 January 2018 at 14:42
> 
> I believe this is what we are finding as well, which is unfortunate.  Maybe 
> we should look at the MX204 instead?  Although, it's 2X the cost (MSRP) and 
> only has one RE.  

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Poll Question (VRF scale on MX)

2017-12-21 Thread Olivier Benghozi
The use of NH DMEM might also slightly vary with various features 
(LFA/PIC/multipath), I guess.

> On 21 dec. 2017 at 12:19, adamv0...@netconsultings.com wrote :
> 
> Junos code version:
> Number of VRFs:
> Number of destinations (total or average per VRF):
> Output from: request pfe execute target fpc0 command "show jnh 0 pool" (+
> type of card this was executed on)
>   - this output will tell you where you are at with regards to your
> next-hop memory utilization (whether you are within the pre-allocated 2+2M
> or already borrowing something from the shared pool)

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Experience with Junos 15.1 on MX960?

2017-12-13 Thread Olivier Benghozi
Cosmetic bugs seen here: PR1254675, PR1289974, PR1293543, PR1261423 (all fixed 
in 16.1R6, can be seen in release notes).

> On 13 dec. 2017 at 16:29, Michael Hare  wrote :
> 
> We are looking at moving to 16.1R6 within the new few weeks on an MX2010 from 
> 14.1.  Several folks have mentioned cosmetics bugs in 16.1.  If anyone is 
> willing to highlight (publically or privately) PRs or high level descriptions 
> of the cosmetic issues (no more than a sentence), I'd be curious.  Admittedly 
> I can read the release notes, but there is value in hearing from others what 
> cosmetics bugs affected them.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Experience with Junos 15.1 on MX960?

2017-12-12 Thread Olivier Benghozi
We've been running 16.1R4-S3 or S4 for 4/5 months (we had to choose between 
15.1F and 16.1R for our MPC7s), without MC-LAG.
We've been hit by about 8 PRs, including 4 non-cosmetic ones (3 of which are also 
present in 15.1F anyway).
Most of them are allegedly fixed in 16.1R6.
17 might be the next step in 6 months.

> On 12 dec. 2017 at 22:01, Nikolas Geyer  wrote :
> 
> We’re running 16.1R4 and it’s been stable for the most part, aside from a few 
> annoying cosmetic problems.
> 
> Running it on MX480’s and 960’s, a variety of RE’s, a variety of 
> MPC2/MPC3/MPC4/MPC7, usual protocols such as BGP, OSPF, MPLS, RSVP and a few 
> Tbps of traffic. No MC-LAG unfortunately though.
> 
> Will probably schedule moving up to 17 some time early 2018.


Re: [j-nsp] Enhanced MX480 Midplane?

2017-11-14 Thread Olivier Benghozi
While I don't care about SONET/SDH in 2017 (sorry...), the enhanced midplane 
(in the MX240/480/960 MX generation) also (mainly?) allows more bandwidth per 
slot with the future SCBE3.
You may still find a short-lived Juniper 2016 PDF about it on your preferred 
search engine ("SCBE3" "premium3" "mx").

> On 14 nov. 2017 at 09:56, Karl Gerhard  wrote :
> 
> this article is mentioning an enhanced MX480 midplane. This is the first time 
> I hear of that: CHAS-BP-MX480-S (=Non-Enhanced) vs. CHAS-BP3-MX480-S 
> (=Enhanced)
> https://www.juniper.net/documentation/en_US/release-independent/junos/topics/concept/scbe2-mx480-desc.html
> 
> Is there anyone who can give more details about the enhanced midplane?
> As far as I understand the cross-coupling of clock input is only related to 
> SONET/SDH stuff, is that correct?



Re: [j-nsp] Sporadic LUCHIP IDMEM read errors

2017-09-26 Thread Olivier Benghozi
Maybe 
http://news.nationalgeographic.com/2017/09/sun-solar-flare-strongest-auroras-space-science/ ?

> On 26 sept. 2017 at 09:14, Sebastian Wiesinger  wrote :
> 
> Hello,
> 
> we're seeing sporadic LUCHIP IDMEM read errors like these (from two
> routers):
> 
> fpc5 LUCHIP(0) IDMEM[112821] read error
> 
> tfeb0 LUCHIP(0) IDMEM[303084] read error
> 
> It is a single error and does not impact traffic in any measurable way
> for us. It appears to be random on different router models (MX960,
> MX480, MX80) and JunOS versions. It seems these have increased in the
> last few weeks. At least one other provider I asked is seeing them as
> well.
> 
> Is anyone here seeing these? Has anyone found a reason for it or has
> already a case open for this?
> 


Re: [j-nsp] Moving onto EX2300

2017-09-20 Thread Olivier Benghozi
A new additional licence is needed to stack them (Virtual Chassis), and VRF is 
not supported.

> On 20 sept. 2017 at 17:16, William  wrote :
> 
> Due to the ex2200 going eol/eos we are looking at the EX2300 - can anyone
> share their experience with this model? Anything to watch out for?



Re: [j-nsp] Junos 15 on EX2200's

2017-08-31 Thread Olivier Benghozi
There were various memory leaks in 15.1R6 on EX, fixed in 15.1R6-S2 
(TSB17127), and this probably didn't help :-P
Last available is 15.1R6-S3.

> On 31 aug 2017 at 21:11, Charles van Niman  wrote :
> 
> Just one datapoint, but I loaded 15.1R6 on my EX2200-C and saw mgd and
> all ssh/snmp access vanish in a seemingly oom situation after about three
> weeks. Per the official recommendation, I would also suggest staying on
> 12.3 or possibly 14.1? (haven't tried this one.) I was running 13.3, which
> is no longer public, for years without issue.



Re: [j-nsp] Why JUNOS need re-establish neighbour relationship when configuring advertise-inactive

2017-07-15 Thread Olivier Benghozi
Here we directly
set protocols bgp advertise-inactive
(and in routing-instances too, with an apply-group adding various other stuff, 
like always-compare-med, router-id, and so on).
Never seen any good reason to stay with the JunOS default on this point...
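
For the record, a minimal sketch of what that gives in set form (the instance 
name, router-id and exact knob list are made up for illustration):

set protocols bgp advertise-inactive
set routing-instances CUST-A protocols bgp advertise-inactive
set routing-instances CUST-A protocols bgp path-selection always-compare-med
set routing-instances CUST-A routing-options router-id 192.0.2.1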

> On 15 jul. 2017 at 14:32, Roger Wiklund  wrote :
> 
> Indeed you are right Saku.
> 
> set routing-instances VR2 protocols bgp group TO-VR1-AND-VR2 neighbor
> 10.0.0.1 peer-as 100
> set routing-instances VR2 protocols bgp group TO-VR1-AND-VR2 neighbor
> 20.0.0.2 advertise-inactive
> set routing-instances VR2 protocols bgp group TO-VR1-AND-VR2 neighbor
> 20.0.0.2 peer-as 300
> 
> Session reset when I added advertise-inactive to the neighbor.
> 
> Before:
> Group Type: External   Local AS: 200
>  Name: TO-VR1-AND-VR2  Index: 4   Flags: <>
>  Holdtime: 0
>  Total peers: 2Established: 2
>  10.0.0.1+179
>  20.0.0.2+179
>  VR2.inet.0: 0/1/1/0
> 
> After:
> Group Type: External   Local AS: 200
>  Name: TO-VR1-AND-VR2  Index: 4   Flags: <>
>  Holdtime: 0
>  Total peers: 1Established: 1
>  10.0.0.1+179
>  VR2.inet.0: 0/1/1/0
> 
> Group Type: External   Local AS: 200
>  Name: TO-VR1-AND-VR2  Index: 1   Flags: <>
>  Options: 
>  Holdtime: 0
>  Total peers: 1Established: 1
>  20.0.0.2+179
>  VR2.inet.0: 0/0/0/0
> 
> So a workaround would be to put it in a separate group then...



Re: [j-nsp] MPLS L3VPNs, Route-Reflection, and SPRING with IS-IS on QFX5100

2017-07-03 Thread Olivier Benghozi
By default JunOS will create a label for the primary loopback address (as 
explained in "MPLS in the SDN Era", page 172). So, here, by default, the first 
one.
If you want a label for the "242" IP only: swap the two loopback IPs in the 
config, or declare the second one as primary.

But if you need a label for both IPs, attach an "egress-policy" to protocol ldp 
matching those 2 IPs (maybe using a route-filter, a prefix-list, or whatever):

https://www.juniper.net/documentation/en_US/junos/topics/usage-guidelines/mpls-configuring-the-prefixes-advertised-into-ldp-from-the-routing-table.html
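
A minimal sketch of such an egress-policy, reusing the two loopback addresses 
from the quoted config below (the policy name is made up):

policy-options {
    policy-statement ldp-loopbacks {
        term loopbacks {
            from {
                route-filter 100.64.0.7/32 exact;
                route-filter 10.242.0.7/32 exact;
            }
            then accept;
        }
    }
}
protocols {
    ldp {
        egress-policy ldp-loopbacks;
    }
}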
 



> On 3 Jul. 2017 at 20:19, Brant Ian Stevens  wrote:
> 
> I posted to the Juniper Forums, but figured I should try here as well:
> 
> Hello All,
> 
> I am attempting to build a network with the captioned technologies, and am 
> most of the way there, but am running into an issue.
> 
> We want to use a separate loopback address for our MP-BGP peering sessions in 
> support of the MPLS VPNs address family, but the "secondary" address on the 
> loopback interface does not get a label assigned to it in the IS-IS database. 
>  The addresses in the 10.242.0.0/24 range are the inet-vpn loopback sources, 
> while the addresses in the 100.64.0.0/24 range are the loopback ranges that 
> are used for inet-labeledunicast.
> 
> 
> branto@peer-rtr-01# show interfaces lo0
> unit 0 {
>family inet {
>address 100.64.0.7/32; This address is assigned a label.
>address 10.242.0.7/32; This address does NOT get assigned a label.
>}
>family iso {
>address 49..0100.0064.0007.00;
>}
>family mpls;
> }
> unit 4000 {
>family inet {
>address 10.240.0.7/32;
>}
> }
> 
> branto@peer-rtr-01# run show route 10.242.0.0/24
> 
> inet.0: 38 destinations, 41 routes (38 active, 0 holddown, 0 hidden)
> + = Active Route, - = Last Active, * = Both
> 
> 10.242.0.1/32 *[IS-IS/18] 22:15:08, metric 25
> > to 100.64.1.6 via et-0/0/48.0
> 10.242.0.3/32 *[IS-IS/18] 22:15:08, metric 50
> > to 100.64.1.6 via et-0/0/48.0
> *10.242.0.5/32 *[IS-IS/18] 22:15:08, metric 50*
> *> to 100.64.1.6 via et-0/0/48.0*
> 10.242.0.7/32 *[Direct/0] 22:46:30
> > via lo0.0
> 
> branto@peer-rtr-01# run show route 100.64.0.0/24
> 
> inet.0: 38 destinations, 41 routes (38 active, 0 holddown, 0 hidden)
> + = Active Route, - = Last Active, * = Both
> 
> 100.64.0.1/32 *[L-ISIS/14] 22:15:30, metric 25
> > to 100.64.1.6 via et-0/0/48.0
> [IS-IS/18] 22:15:30, metric 25
> > to 100.64.1.6 via et-0/0/48.0
> 100.64.0.3/32 *[L-ISIS/14] 22:15:30, metric 50
> > to 100.64.1.6 via et-0/0/48.0, Push 19
> [IS-IS/18] 22:15:30, metric 50
> > to 100.64.1.6 via et-0/0/48.0
> *100.64.0.5/32 *[L-ISIS/14] 22:15:30, metric 50*
> *> to 100.64.1.6 via et-0/0/48.0, Push 21*
> *[IS-IS/18] 22:15:30, metric 50*
> *> to 100.64.1.6 via et-0/0/48.0*
> 100.64.0.7/32 *[Direct/0] 22:46:52
> > via lo0.0
> 
> inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
> + = Active Route, - = Last Active, * = Both
> 
> 100.64.0.1/32 *[L-ISIS/14] 22:15:30, metric 25
> > to 100.64.1.6 via et-0/0/48.0
> 100.64.0.3/32 *[L-ISIS/14] 22:15:30, metric 50
> > to 100.64.1.6 via et-0/0/48.0, Push 19
> *100.64.0.5/32 *[L-ISIS/14] 22:15:30, metric 50*
> *> to 100.64.1.6 via et-0/0/48.0, Push 21*
> 
> {master:0}[edit]
> branto@peer-rtr-01#
> 
> The VPN routes are reflected across the network properly and received, but 
> the next-hop is unusable.
> 
> branto@peer-rtr-01# run show route protocol bgp hidden table bgp.l3vpn.0 
> extensive
> 
> bgp.l3vpn.0: 2 destinations, 2 routes (0 active, 0 holddown, 2 hidden)
> 10.242.0.5:1:10.240.0.5/32 (1 entry, 0 announced)
> BGPPreference: 170/-101
>Route Distinguisher: 10.242.0.5:1
>Next hop type: Unusable, Next hop index: 0
>Address: 0xa2f1744
>Next-hop reference count: 4
>State:
>Local AS: 29749 Peer AS: 29749
>Age: 22:27:35
>Validation State: unverified
>Task: BGP_29749.10.242.0.1
>AS path: I (Originator)
>Cluster list:  10.242.0.1
>Originator ID: 100.64.0.5
>Communities: target:29749:5
>Import Accepted
>VPN Label: 4114
>Localpref: 100
>Router ID: 100.64.0.1
>Secondary Tables: sinewave-mgmt.inet.0
>Indirect next hops: 1
>Protocol next hop: 10.242.0.5
>Label operation: Push 4114
>Label TTL action: prop-ttl
>Load balance label: Label 4114: None;
>Indirect next hop: 0x0 - INH Session ID: 0x0
> 
> 
> Here's my IS-IS config from 

Re: [j-nsp] can i get junos file from device

2017-06-28 Thread Olivier Benghozi
It validates the checksums, then stores an installer locally (with the content 
of the tgz) that will be started at next boot, which will install the OS and 
store the files (mainly to /packages/). On some platforms the new OS is 
installed to the alternate boot partition (on EX platforms for example), which 
will be the active one at next boot.

Usually you use the command with the no-copy option to avoid getting/keeping a 
useless additional local copy of the tgz archive itself (in /var/tmp/ I guess).
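
For instance, reusing the FTP URL from the quoted message below, the no-copy 
variant would look something like this (the prompt is illustrative):

user@acx> request system software add ftp://172.17.143.125/jinstall-acx5k-15.1X54-D61.6-domestic-signed.tgz no-copy validate
user@acx> request system reboot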

> On 28 june 2017 at 19:21, Aaron Gould  wrote :
> 
> Thanks Thomasz, well, sort of, I’m wondering if there is a way to upgrade 
> Junos from a box that is running the desired version ?  So I was wondering 
> how the following command runs and does the juniper device store that ENTIRE 
> file somewhere ?  if so, then I could copy it off and use it.  I was asking 
> if when I do the following command, does that juniper device store the whole 
> file somewhere, or not?  
> 
> request system software add validate force-host 
> ftp://172.17.143.125/jinstall-acx5k-15.1X54-D61.6-domestic-signed.tgz

Re: [j-nsp] bgp peer flapping

2017-04-28 Thread Olivier Benghozi
But what about route-target filtering using BGP route-target family ?
Seems close enough, works without any sessions reset, and "Statement introduced 
before Junos OS Release 7.4"...

> On 27 apr. 2017 at 18:15, adamv0...@netconsultings.com wrote :
> Hmm, good point, 
> ORR would be impossible with the old behaviour as every time the egress 
> policy changes due to ORR the BGP session from RR to a given PE would be 
> reset. 



Re: [j-nsp] improving global unicast convergence (with or without BGP-PIC)

2017-04-22 Thread Olivier Benghozi
Hi,

> On 22 apr. 2017 at 22:47, Dragan Jovicic  wrote :
> 
> From documentation:
>> On platforms containing only MPCs chained composite next hops are enabled by 
>> default. With Junos OS Release 13.3, the support for chained composite next 
>> hops is enhanced to automatically identify the underlying platform 
>> capability on composite next hops at startup time, without relying on user 
>> configuration, and to decide the next hop type (composite or indirect) to 
>> embed in the Layer 3 VPN label.

In fact the most relevant part of this doc is what immediately follows that:
"This enhances the support for back-to-back PE-PE connections in Layer 3 VPN 
with composite next hops, and eliminates the need for the pe-pe-connection 
statement."

Actually, only "pe-pe-connection" became useless, provided you enable composite 
next hops for l3vpn.


> There's quite of few options to configure, and a few scenarios which might 
> affect how are they created, such as if your PE is also a P router, and if 
> you have degenerated PE-PE connection to name two,
> +l3vpn pe-pe-connection;

Since 13.3, only l3vpn.
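
In set form, the knob being discussed boils down to this (no pe-pe-connection 
needed anymore):

set routing-options forwarding-table chained-composite-next-hop ingress l3vpn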



Re: [j-nsp] IPv6 flow routes

2017-04-07 Thread Olivier Benghozi
As read on 
https://www.juniper.net/techpubs/en_US/junos/topics/example/example-configuring-bgp-to-carry-ipv6-flow-routes.html :

set routing-options rib inet6.0 flow route route-1 match destination 
abcd::11:11:11:10/128
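
Applied to the route from the question below, that gives something like this in 
hierarchy form (a sketch):

routing-options {
    rib inet6.0 {
        flow {
            route junk {
                match destination 2001:49d0::1/128;
                then discard;
            }
        }
    }
}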


> On 7 Apr. 2017 at 22:23, Brad Fleming  wrote:
> 
> Does Junos support creation of IPv6 flow routes?
> 
> This taken from an MX10 in our lab running 16.1R1.7:
> 
> root@lab# show routing-options flow
> route junk {
>then discard;
>match destination 2001:49d0::1/128;
> }
> [edit]
> root@lab# commit check
> [edit routing-options flow route junk]
>  'match'
>RTFLOW: invalid address
> 
> 
> I don't see an option to create a flow route under any other protocol
> family so I'm assuming they're all supposed to go under
> routing-options>flow (?).


Re: [j-nsp] Match multiple bgp communities in a policy with AND condition

2017-04-06 Thread Olivier Benghozi
We use the same kind of thing here, that is, subpolicy expressions (or 
subpolicy chains in other places):


policy-statement Blah {
term MyTerm {
from {
policy ( ! (( ! A ) && B && ( C || D )));
}
then next policy;
}
}

policy-statement A {
term match {
from community com-A;
then accept;
}
term default {
then reject;
}
}
policy-statement B {
term match {
from community com-B;
then accept;
}
term default {
then reject;
}
}
policy-statement C {
term match {
from community com-C;
then accept;
}
term default {
then reject;
}
}
policy-statement D {
term match {
from community com-D;
then accept;
}
term default {
then reject;
}
}

community com-A members 123:1;
community com-B members 123:2;
community com-C members 123:3;
community com-D members 123:4;


> On 6 Apr. 2017 at 17:59, serge vautour  wrote:
> 
> IMHO whether you add a community to a policy term match statement or add a
> community to a community members list, you still have to add the community
> somewhere. I don't see how you get from 2x10 to 100 Maybe I don't
> understand the ask.
> 
> The only way I know how to get the AND logic to work in a single policy
> term is to call another policy. This isn't tested but something like this:
> 
> [edit policy-options]
> +   policy-statement communityb {
> +   term term1 {
> +   from community b;
> +   then accept;
> +   }
> +   }
> +   policy-statement xy {
> +   term term1 {
> +   from {
> +   community a;
> +   policy communityb;
> +   }
> +   then accept;
> +   }
> +   }
> [edit policy-options]
> +   community a members 123:1;
> +   community b members 123:2;
> 
> 
> I hope this helps.
> Serge
> 
> 
> On Thu, Apr 6, 2017 at 12:10 PM, "Rolf Hanßen"  wrote:
> 
>> Hello Serge,
>> 
>> this works, but that is exactly the config I would like to avoid.
>> In case of 2 communities this adds a third one, but in case of 2x 10
>> communities that can be combined this adds 100 additional communities.
>> 
>> kind regards
>> Rolf
>> 
>>> Hello,
>>> 
>>> Have you tried this?
>>> 
>>> set policy-options community MATCH2 members [ 123:1 123:2 ]
>>> 
>>> I believe this will result in a logical AND.
>>> 
>>> Serge
>>> 
>> 
>> 
>> 


Re: [j-nsp] problem with advertise ipv6 default route

2017-03-25 Thread Olivier Benghozi
The default BGP export policy doesn't advertise static routes anyway, so 
removing your export policy won't help.
Your BGP export policy is probably broken, but unfortunately you didn't show 
it.
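
For the record, a minimal sketch of an export policy that would advertise such 
a static default (names made up; apply it to the relevant group or neighbor):

set policy-options policy-statement export-v6-default term static-default from protocol static
set policy-options policy-statement export-v6-default term static-default from route-filter ::/0 exact
set policy-options policy-statement export-v6-default term static-default then accept
set protocols bgp group V6-CUSTOMERS export export-v6-default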

> On 25 march 2017 at 16:23, Pedro  wrote :
> 
> On MX router i'm tring  advertise ::/0 to v6 peers. I have active, static 
> default route into my upstream direction. Other v6 routes are advertised to 
> my client but not ::/0
> I removed bgp6 export policy but still no success



Re: [j-nsp] routing instances on EX2300

2017-03-23 Thread Olivier Benghozi
Yes, for EX2300 it's 
https://pathfinder.juniper.net/feature-explorer/select-platform.html?category=Switching=1#family==30502300=EX2300=15.1X53-D55=799=0.4850388479542256=Junos+OS

> On 23 march 2017 at 14:17, Valentini, Lucio <lucio.valent...@siag.it> wrote :
> 
> I agree with you 110%, I hope it'll be on the roadmap like the "oam", because 
> it is really a disappointing thing, particularly if I think the EX2300 was 
> introduced as an improvement on the "old" EX2200!
> 
> But where you get the Feature Explorer? Is  this link the right one?
> 
> https://pathfinder.juniper.net/feature-explorer/select-platform.html
> 
> 
> -Original Message-
> From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On behalf of 
> Olivier Benghozi
> Sent: Thursday 23 March 2017 11:23
> To: juniper-nsp@puck.nether.net
> Subject: Re: [j-nsp] routing instances on EX2300
> 
> According to the Feature Explorer, VRF Lite are supported on EX2200, but not 
> on EX2300. Reducing the feature set of new products is just ridiculous...
> 
>> On 23 March 2017 at 08:55, Valentini, Lucio <lucio.valent...@siag.it> wrote:
>> 
>> I was trying to configure routing instances on the EX2300, like I did on the 
>> EX4300, but it seems it´s not possible.


Re: [j-nsp] routing instances on EX2300

2017-03-23 Thread Olivier Benghozi
According to the Feature Explorer, VRF-lite is supported on EX2200, but not on 
EX2300. Reducing the feature set of new products is just ridiculous...

> On 23 March 2017 at 08:55, Valentini, Lucio  wrote:
> 
> I was trying to configure routing instances on the EX2300, like I did on the 
> EX4300, but it seems it´s not possible.
> 


Re: [j-nsp] MX104 limitation

2017-03-19 Thread Olivier Benghozi
What about bypass-queuing-chip on MIC interfaces ? Would it work on MX80/104 ?

> On 20 march 2017 at 01:32, Saku Ytti  wrote :
> 
> Ok that's only 31Gbps total, without having any actual data, my best
> guess is that you're running through QX. Only quick reason I can come
> up for HW to limit on so modest traffic levels.
> 
> On 20 March 2017 at 02:25, Javier Rodriguez  wrote:
>> Soku,
>> 
>> Maybe there was a misunderstanding , the inbound traffic on fpc2's LAG was
>> 4Gbps , and the outbound traffic was 27Gbps aprox. That outbound traffic
>> enters by the fpc1 and fpc0.
>> It's IMIX traffic, the average packet size is 1250Bytes (out) 200Bytes (in).
>> I tried to see dropped packets with "show precl-eng 5 statistics " and "show
>> mqchip 0 drop stats" at pfe shell but it's 0. Does it save historical data?
>> 
>> 
>> <--27G-- | | <--27G--
>> |FPC2 FPC 0/1 |
>> --4G--> | | --4G-->
>> 
>> Regards,
>> 
>> Javier.
>> 
>> 
>> 2017-03-19 20:43 GMT-03:00 Saku Ytti :
>>> 
>>> Hey,
>>> 
>>> There aren't multiple FPCs on the box really, there is only single MQ
>>> chip out of where all ports sit, usually MIC ports behind additional
>>> IX chip, which is not congested. It's architecturally single linecard
>>> fabricless box.
>>> You're saying you're pushing on the 4x10GE fixed ports 31+31Gbps, e.g.
>>> 62Gbps? It might be possible on (perhaps artificially) unfortunate
>>> cell alignment that it could be congested on so low values. Are all
>>> the packets same size, i.e is this lab scenario or just IMIX traffic?
>>> MQ pfe exceptions and MQ=>LU counters might be interesting to see.
>>> 
>>> If you use QX chip, 62Gbps would be really good, QX chip is not
>>> dimensioned for line rate _unidir_ (i.e. can't do even 40Gbps). If you
>>> don't know if you're using QX or not, just deactive whole
>>> class-of-service and scheduer config in interfaces.
>>> 
>>> On 20 March 2017 at 01:26, Javier Rodriguez 
>>> wrote:
 Hi,
 
 Thanks for your reply Saku.
 The problem is that fpc2 (fixed ports) can't overcome 31Gbps (in + out)
 with 6Mpps. The graph shows a straight line as if it were being limited.
 I have moved some interfaces from LAG to fpc1 and fpc0 and the traffic
 has
 incresed. (It only has a tunnel-service in fpc0 of 1g)
 It's as if it were being limited by the MQ, but I do not see discarded
 packages, or I do not know where to look at them.
 
 JR.
 
 2017-03-19 6:53 GMT-03:00 Saku Ytti :
 
> Hey Javier,
> 
> 
> MX104 and MX80 (1st gen Trio MQ/LU) should do about 55Mpps and 75Gbps
> (in+out).
> 
> On 19 March 2017 at 09:12, Javier Rodriguez 
> wrote:
>> Hi everyone,
>> 
>> I need a bit of your knowledge.
>> I have a MX104 as PE router with 4 LAGs.
>> One LAG facing to P router on FPC2 (fixed ports). The other LAGs
>> distributed in FPC0 and FPC1.
>> The problem is that traffic is being limited when reach 28G out/ 4G
>> in
>> (31Gbps total).
>> I changed one interface (10G) of the LAG (to P router) to FPC1 and
>> the
>> traffic has grown a little more.
>> 
>> Where is the limitation? In the MQ chip?
>> Where can I see those discarded packages?
>> How much traffic will the router support on FPC2?
>> Where could I get a graphic of its internal architecture?
>> Does a MX80 have the same behavior?



Re: [j-nsp] Advertise inactive route EBGP session

2016-12-01 Thread Olivier Benghozi
It's expected to work according to 
https://www.juniper.net/documentation/en_US/junos/topics/example/bgp-advertise-inactive.html
 

So, aren't you trying to advertise an AS200 route to an AS200 router ? In that 
case you would need to add advertise-peer-as.
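
I.e., something along these lines on the group or neighbor (the group name is 
made up):

set protocols bgp group EBGP-CUSTOMERS advertise-inactive
set protocols bgp group EBGP-CUSTOMERS advertise-peer-as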

> On 30 Nov. 2016 at 22:46, Mileto Tales  wrote:
> 
> Hello,
> 
> 
> I'm not having success to advertise the best BGP inactive route to my eBGP 
> peer. My scenario is very simple. I have configured one static route "set 
> routing-options static route 192.168.0.0/24 next-hop 10.0.1.1" and I'm 
> receiving this same route by eBGP.
> 
> I want to keep the static route configured in the router and advertise BGP 
> learned route to another eBGP peers. In my understanding the 
> advertise-inactive configuration inside the BGP group was supposed to work in 
> this scenario. I add this configuration, cleared the BGP session and I'm 
> still having problems to advertise the inactive route.
> 
> 
> Another test that I did:I created a policy matching on routes that are in 
> inactive state and tried to export then. If I remove the static route then 
> the BGP is advertised (best route)
> 
> 
> Anyone have this configuration working?


Re: [j-nsp] Netflow/Jflow

2016-11-04 Thread Olivier Benghozi
Didn't try in 15.1F...

But I see that in 16.1R there's a new family "MPLS", I don't know what it 
includes...



> On 4 Nov. 2016 at 08:18, Nitzan Tzelniker <nitzan.tzelni...@gmail.com> wrote:
> 
> From 15.1F2 (I tested it on 15.1F6) changing the flow table size does not 
> restart the FPC
> 
> Nitzan
> 
> On Fri, Nov 4, 2016 at 5:47 AM, Scott Granados <sc...@granados-llc.net 
> <mailto:sc...@granados-llc.net>> wrote:
> +1, this is how I have set things up as well and yes, changing the table 
> sizes will cause an FPC reboot.
> 
> > On Nov 3, 2016, at 9:04 PM, Olivier Benghozi <olivier.bengh...@wifirst.fr 
> > <mailto:olivier.bengh...@wifirst.fr>> wrote:
> >
> > Hi Keith,
> >
> > Adjusting the size of the flow hash table will reboot the FPC.
> > In 14.2 and previous, you have everything (15) for IPv4 and only a few 
> > entries for IPv6 and VPLS (0). Each unit is 256K flows (except for 0).
> > Starting from 15.1R, all flow tables have a default size of "0" (that is, a 
> > mini-minimum of space, 1024 flows), so in 15.1+, fixing the sizing of the 
> > flow-tables is more or less mandatory.
> >
> >
> > This is the kind of generic config we use here to avoid an FPC reboot, 
> > would we need some unplanned stuff with inline-jflow (that you may adjust 
> > according to your needs, would you have plenty of IPv6 flows...). You have 
> > 15 units of 256K to spend.
> >
> >
> > groups {
> >chassis-fpc-netflow {
> >chassis {
> >fpc <*> {
> >sampling-instance sample-1;
> >inline-services {
> >flow-table-size {
> >ipv4-flow-table-size 12;
> >ipv6-flow-table-size 2;
> >vpls-flow-table-size 1;
> >ipv6-extended-attrib;
> >}
> >}
> >}
> >}
> >}
> > }
> > chassis {
> >fpc 0 {
> >apply-groups chassis-fpc-netflow;
> >}
> > }
> >
> >
> > Olivier
> >
> >
> >> On 3 nov. 2016 at 23:43, Keith <kwo...@citywest.ca 
> >> <mailto:kwo...@citywest.ca>> wrote :
> >>
> >> One thing about inline that Juniper config docs say is about the 
> >> flow-table size. I had someone
> >> tell me enabling inline jflow will cause the fpc to reboot, but from what 
> >> I read adjusting the size
> >> of the flow hash table will cause that.
> >>
> >> Any idea what is correct here? I would think that just enabling it would 
> >> not cause an FPC to restart.
> >
> 


Re: [j-nsp] Netflow/Jflow

2016-11-03 Thread Olivier Benghozi
Hi Keith,

Adjusting the size of the flow hash table will reboot the FPC.
In 14.2 and previous, you have everything (15) for IPv4 and only a few entries 
for IPv6 and VPLS (0). Each unit is 256K flows (except for 0).
Starting from 15.1R, all flow tables have a default size of "0" (that is, a 
bare minimum of space, 1024 flows), so in 15.1+, setting the flow-table sizes 
explicitly is more or less mandatory.


This is the kind of generic config we use here to avoid an FPC reboot in case 
we need something unplanned with inline-jflow (you may adjust it according to 
your needs, e.g. if you have plenty of IPv6 flows...). You have 15 units of 
256K to spend.


groups {
chassis-fpc-netflow {
chassis {
fpc <*> {
sampling-instance sample-1;
inline-services {
flow-table-size {
ipv4-flow-table-size 12;
ipv6-flow-table-size 2;
vpls-flow-table-size 1;
ipv6-extended-attrib;
}
}
}
}
}
}
chassis {   
fpc 0 {
apply-groups chassis-fpc-netflow;
}
}


Olivier


> On 3 nov. 2016 at 23:43, Keith  wrote :
> 
> One thing about inline that Juniper config docs say is about the flow-table 
> size. I had someone
> tell me enabling inline jflow will cause the fpc to reboot, but from what I 
> read adjusting the size
> of the flow hash table will cause that.
> 
> Any idea what is correct here? I would think that just enabling it would not 
> cause an FPC to restart.



Re: [j-nsp] Netflow/Jflow

2016-11-02 Thread Olivier Benghozi
Basically, you must use the latest incarnation of inline-jflow (starting from 
14.2), with most of the PRs fixed (that is: 14.2R7, 15.1R4, 15.1F6), and it 
should be fine.

> On 2 Nov. 2016 at 23:53, Scott Granados  wrote:
> 
> Hi, it’s been a while so if I’m wrong I’m happy to be corrected but in your 
> case you’d sample on the input side of each of the individual interfaces 
> facing the customer.
> 
> The load shouldn’t be an issue providing you’re running later code with out 
> the JFlow related bugs such as the PR (I don’t recall the number) for the bug 
> where flow processing blocked the PFE from accepting routes from the RE.  
> These were pre 13.2 on the 480 if memory serves but this is going back in the 
> haze a bit so feel free to sanity check.  Inline JFlow does not have the same 
> impact to processing of other platforms so you can be less concerned. 
>   The other thing to remember is that all sampling takes place at 1:1 and 
> the sampling rate knob is more of a scaling factor rather than actually 
> adjusting the rate of sampling.  Setting this to 1/1 instead of 1/1000 or 
> what ever value will help the data appear correctly.
> 
> Thanks
> Scott
> 
>> On Nov 2, 2016, at 1:34 PM, Keith  wrote:
>> 
>> We have a small network with one customer with ten connections all from an 
>> MX480
>> w/MPCE 2 3D,  RE2000 acting as a PE
>> router.
>> 
>> All interfaces are L3VPN and have multiple vrf's on them.
>> 
>> Unit 11
>> Unit 101
>> Unit 102
>> Unit 1000 - our management
>> 
>> The CPE at the customer is an EX4200 at each location.
>> 
>> The customer would like to see top talkers etc on each site.
>> 
>> We have flow capture setup on our peering/transit locations, but use
>> unit 0 on those interfaces, and is pretty simple.
>> 
>> Would I put the sampling on each unit on each interface that from where
>> we want to capture?
>> 
>> When setting up multiple captures on several interfaces is this going to
>> be too much to do load wise?
>> 
>> The sites in question schools, are low bandwidth, mostly 30 megs with one at 
>> 90.


Re: [j-nsp] MX 14.2R7 / PR1177571

2016-10-26 Thread Olivier Benghozi
Here the alarm (detected on re0, running the older version) disappeared as soon 
as re1 (running the newer version) took mastership of the chassis (non-GRES 
switchover, as specified in the JunOS upgrade documents).

> On 26 Oct. 2016 at 15:19, Theo Voss <m...@theo-voss.de> wrote:
> 
> Hi Santiago,
>  
> did the alarm disappear after the 2nd RE was detected with the same 
> software or after a complete reboot?
>  
> Best regards,
> Theo
>  
> From: santiago martinez <santiago.martinez...@gmail.com>
> Date: Wednesday, 26 October 2016 at 15:15
> To: Theo Voss <m...@theo-voss.de>
> Cc: "juniper-nsp@puck.nether.net" <juniper-nsp@puck.nether.net>, Olivier 
> Benghozi <olivier.bengh...@wifirst.fr>
> Subject: Re: [j-nsp] MX 14.2R7 / PR1177571
>  
> Hi there, yes we did hit the same PR.
> 
> the alarm was raised during the upgrade and completely disappeared after both 
> REs were running the same code version (14.2R6).
> 
> Regards
> 
> santiago
> 
>  
> On 26 Oct 2016 12:00, "Theo Voss" <m...@theo-voss.de 
> <mailto:m...@theo-voss.de>> wrote:
> Hi Olivier,
> 
> thanks for your reply. Yes, /var is correctly mounted.
> 
> Best regards,
> Theo
> 
> -Original Message-
> From: juniper-nsp <juniper-nsp-boun...@puck.nether.net> on behalf of Olivier 
> Benghozi <olivier.bengh...@wifirst.fr>
> Date: Wednesday, 26 October 2016 at 10:59
> To: "juniper-nsp@puck.nether.net" <juniper-nsp@puck.nether.net>
> Subject: Re: [j-nsp] MX 14.2R7 / PR1177571
> 
> Yes but with 14.2R6 on re0 and 15.1R4 on re1 (so, during the update).
> 
> Did you check that /var was properly mounted on re1? :)
> 
> > On 26 Oct. 2016 at 10:53, Theo Voss <m...@theo-voss.de> wrote:
> >
> > we've upgraded two of our MXs (MX960, 1800x4-32) to 14.2R7 and ran into 
> > PR1177571 which should already be fixed in R7.
> >
> > router> show version invoke-on all-routing-engines | match boot
> > JUNOS Base OS boot [14.2R7.5]
> > JUNOS Base OS boot [14.2R7.5]
> >
> > router> show system alarms
> > 1 alarms currently active
> > Alarm time   Class  Description
> > 2016-10-25 23:36:53 UTC  Major  Host 1 failed to mount /var off HDD, 
> > emergency /var created
> >
> > Workaround according to Juniper: Upgrade backup RE to the same release with 
> > master RE. << see "show version".
> > Resolved In: 13.3R9-S4 13.3R10 14.1R8 >> 14.2R7 << 15.1R4 15.1R5 15.1F5-S3 
> > 15.1F6-S1 16.1X70-D10 16.1R2 << see "show version".
> >
> > Has anybody encountered the same problem?
> 

Re: [j-nsp] MX 14.2R7 / PR1177571

2016-10-26 Thread Olivier Benghozi
Yes but with 14.2R6 on re0 and 15.1R4 on re1 (so, during the update).

Did you check that /var was properly mounted on re1? :)

> On 26 Oct. 2016 at 10:53, Theo Voss  wrote:
> 
> we've upgraded two of our MXs (MX960, 1800x4-32) to 14.2R7 and ran into 
> PR1177571 which should already be fixed in R7.
> 
> router> show version invoke-on all-routing-engines | match boot
> JUNOS Base OS boot [14.2R7.5]
> JUNOS Base OS boot [14.2R7.5]
> 
> router> show system alarms
> 1 alarms currently active
> Alarm time   Class  Description
> 2016-10-25 23:36:53 UTC  Major  Host 1 failed to mount /var off HDD, 
> emergency /var created
> 
> Workaround according to Juniper: Upgrade backup RE to the same release with 
> master RE. << see "show version".
> Resolved In: 13.3R9-S4 13.3R10 14.1R8 >> 14.2R7 << 15.1R4 15.1R5 15.1F5-S3 
> 15.1F6-S1 16.1X70-D10 16.1R2 << see "show version".
> 
> Has anybody encountered the same problem?


Re: [j-nsp] Suggestion for Junos Version MX104

2016-10-25 Thread Olivier Benghozi
About SRRD:
- CPU usage: beware of PR1170656. Told to be fixed in 14.2R7 15.1R4 15.1F6.
- Mem usage: beware of PR1187721. Told to be fixed only in future or service 
releases (14.2R8 15.1R5 15.1F6-S2 16.1R3).

> On 25 oct. 2016 at 14:23, Mark Tinka  wrote :
> On 25/Oct/16 14:16, sth...@nethelp.no wrote:
>> We see significantly higher rpd CPU usage for a while after IPfix flow
>> export is enabled. However, it seems to settle down to normal levels.
> 
> Yes, that's normally SRRD (Sampling Route-Record Daemon) starting up and
> making sure all is well for the collection and export of the flows.
> 
> We've seen issues where SRRD can hang the box quite badly if a number of
> BGP sessions are de-activated in one go (and vice versa). In such cases,
> the only fix was to reboot the box. This was on the MX80.
> 
> SRRD seems reasonably fragile (and buggy), particularly in high
> route-churn situations. You want to stay on top of it, i.e., fix bugs
> when they are announced, e.t.c.
> 
> Otherwise, under normal conditions, exporting will be done inline, so it
> won't touch the RE.



Re: [j-nsp] Best way to do QOS bleach

2016-10-17 Thread Olivier Benghozi
In 14.2R3 and later, and in 15.1F and 16.1R (but not in 15.1R).

> On 17 oct. 2016 at 18:11, Dragan Jovicic  wrote :
> 
> And if you require more granular ingress remark, as Mark suggested after
> 14.2R3.8 you can use policy-maps.



Re: [j-nsp] Limit content of bgp.l3vpn.0

2016-09-28 Thread Olivier Benghozi
It just does.

> On 28 sept. 2016 at 18:49, Johan Borch  wrote :
> 
> I don't have a route-reflector, this is a full iBGP mesh, will family
> route-target still work?
> 
> On Wed, Sep 28, 2016 at 4:27 PM, Dragan Jovicic  wrote:
> 
>> By default route-reflector will reflect/refresh all vpn routes to a PE
>> router, even if PE doesn't need those routes (doesn't import target
>> community).
>> Route-target family allows PE to give route-reflector a permission to send
>> only those routes for which import target exists.
>> The fact that this family works across all other vpn families (l2vpn,
>> inet-vpn, inet6-vpn, etc) make this almost a necessity in large networks.



Re: [j-nsp] MX upgrade to 15.1R4.6: loopback filters drop all traffic

2016-09-18 Thread Olivier Benghozi
Updated from 14.2 to 15.1R here (on several MX, same RE hardware).
Didn't see this issue.
Any particular stuff in your filters ?

> On 18 Sept. 2016 at 09:18, Chuck Anderson  wrote:
> 
> Has anyone upgraded from 14.2 to 15.1 and seen this issue?  Right
> after the upgrade, all loopback filters started dropping all traffic
> causing OSPF & BGP failures, inability to ping or SSH into fxp0, etc.,
> despite being configured to allow the appropriate management & control
> plane traffic which was working perfectly fine in 14.2.
> Deactivating/reactivating the filters "fixed" the issue, as did a full
> box reboot.  Of course JTAC can't reproduce it.  64-bit Junos,
> RE-1800X4's.



Re: [j-nsp] IPV6 over MPLS

2016-08-30 Thread Olivier Benghozi
If you have only RSVP-TE, you may have a look at this (inet6 shortcuts) which 
might be an alternative to 6PE for you:
https://forums.juniper.net/t5/TheRoutingChurn/Traffic-engineering-inet6-shortcuts-to-connect-IPv6-islands-Part/ba-p/192763
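
With IS-IS as the IGP, the knob behind those inet6 shortcuts is roughly the 
following (a sketch only; see the article above for the complete recipe and 
caveats):

set protocols isis traffic-engineering family inet6 shortcuts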
 



> On 30 Aug. 2016 at 15:09, raf  wrote:
> 
> On 30/08/2016 at 14:27, Alexander Arseniev wrote:
>> Hello,
>> 
>> If You don't care whether IPv6 packets take RSVP or LDP LSP, then You
>> could just enable LDPv6 everywhere (JUNOS 16.1 onwards) and save on
>> rewriting NHs from IPv4-mapped IPv6 to proper IPv6.
>> 
> 
> No I only have RSVP and don't want to activate LDPv6 :)
> 
>> For VPNv6 You would still need NH rewriting as VPNv6 NH is still
>> IPv4-mapped IPv6 even if carried over BGP-over-IPv6.
>> 
> No vpnv6 for now.



Re: [j-nsp] IPV6 over MPLS

2016-08-30 Thread Olivier Benghozi
Hi raf,

When using the new LDP native IPv6 support, as explained in 
https://www.juniper.net/techpubs/en_US/junos16.1/topics/task/configuration/configuring-ldp-native-ipv6-support.html
you have to "Enable forwarding equivalence class (FEC) deaggregation in order 
to use different labels for different address families.".
So you won't be really using "existing LSPs", but parallel ones. But that's a 
detail :)

However, it's 16.1R1 and you'll probably need an NBC suit to approach it. And 
that's probably not a detail.

So I feel that currently 6PE would still be the way to go if you want to 
MPLSize your IPv6 immediately within the global table.

"mpls ipv6-tunneling" is mandatory for it anyway.
You would have to add bgp inet6 labelled unicast to your iBGP, remove the ipv6 
interco addresses (but keep inet6 family), remove your ibgp ipv6 mesh (and 
transfer your ipv6 policies to the IPv4 sessions), remove your ospfv3 (or add 
no-ipv6-unicast to your interfaces in isis), and you can keep your ipv6 
loopbacks (to make some IPv6 pings and traceroutes) as long as you redistribute 
them using bgp.
I guess your bgp nexthop self policy now works properly :)
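
In other words, the 6PE core of it boils down to something like this on each PE 
(the iBGP group name is made up; an export policy advertising the IPv6 
loopbacks via BGP is still needed, as said above):

set protocols mpls ipv6-tunneling
set protocols bgp group IBGP-V4 family inet6 labeled-unicast explicit-null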


Olivier

> On 30 Aug. 2016 at 14:27, Alexander Arseniev  wrote:
> 
> Hello,
> 
> If You don't care whether IPv6 packets take RSVP or LDP LSP, then You could 
> just enable LDPv6 everywhere (JUNOS 16.1 onwards) and save on rewriting NHs 
> from IPv4-mapped IPv6 to proper IPv6.
> 
> For VPNv6 You would still need NH rewriting as VPNv6 NH is still IPv4-mapped 
> IPv6 even if carried over BGP-over-IPv6.
> 
> HTH
> Thx
> Alex
> 
> On 30/08/2016 11:49, raf wrote:
>> 
>> Hello list,
>> 
>> 
>> So I have now a properly configured MPLS network with a standard 
>> configuration :) All my traffic goes trough LSPs except the IPv6 one which 
>> it a bit frustrating.
>> 
>> The actual configuration is a separate ivp6 bgp mesh. No need for inet6-vpn.
>> 
>> I've read that the simplest method is to use 6PE with ipv6 labeled unicast. 
>> That's look simple, but if I understand correctly I can just wipe all my v6 
>> loopbacks and intercos.
>> Is there an alternative using my actual v6 bgp mesh, and just mapping the v6 
>> traffic on existing LSPs.
>> 
>> Can I trick with ipv6 mapped loopback enabled by 'mpls ipv6-tunneling ? 
>> rewriting nh ?
>> 
>> 
>> 
>> PS :
>> This will certainly be my last request for a while.
>> I learn a lot with you guys.
>> Thanks for having been patient with me :)


Re: [j-nsp] Limit on the number of BGP communities a route can be tagged with?

2016-08-23 Thread Olivier Benghozi
And about a limitation to 10 communities:
I've seen that on SEOS (Redback/Ericsson OS for SmartEdge routers) when using 
"set community" in a route-map. This is a ridiculous arbitrary limitation, of 
course.

Fortunately the limitation was only in the CLI, not in the BGP code itself. So 
the workaround was to use the route-map "continue" command like in a BASIC GOTO 
structure to add more communities in additional route-map entries (with set 
community additive - these are Cisco-like commands).

> On 23 Aug. 2016 at 14:03, Alexander Arseniev  wrote:
> 
> In BGP messages, a regular community is encoded in 7 bytes, and extended one 
> in 11 bytes.
> 
> Max BGP message size is 4096 bytes - this sets a limit for regular 
> communities number to about 4K/7=570, and for extended communities to about 
> 4K/11=360, if You consider the minimal mandatory information that has to be 
> there apart from communities.
> 
> 
> On 23/08/2016 03:18, Huan Pham wrote:
>> 
>> I remember hitting a limit on a number of communities (something like 10 or
>> so) on a platform (can not remember which one from which vendor). So I
>> believe that there is a hard limit a platform or OS can support.
>> 
>> I test this in the lab and found no problem with tagging 100 communities.
>> 
>> Is there a maximum number of communities that Junos can tag to a route? If
>> yes, then what it is?  Thanks.


Re: [j-nsp] RVSP signaled L3VPN and RRs

2016-08-18 Thread Olivier Benghozi
One must not use NHS for all routes on an RR, but only for external routes :)

policy-statement next-hop-self {
term iBGP {
from {
protocol bgp;
route-type internal;
}
then next policy;
}
term default {
then {
next-hop self;
}
}
}


> On 18 Aug. 2016 at 17:46, raf  wrote:
> 
> Hum my RRs do NHS, and I don't think I could easily change this.
> Without NHS this is effectively not needed as I already have the loopbacks of 
> all my PEs in inet.3 populated via RSVP.


Re: [j-nsp] RVSP signaled L3VPN and RRs

2016-08-18 Thread Olivier Benghozi
Did you set protocols mpls traffic-engineering mpls-forwarding ?

> On 18 Aug. 2016 at 17:13, raf  wrote:
> 
> So there is a problem in resolving route of my L3vpn as there is no route in 
> inet.3 for my RRs.

