Re: [j-nsp] Questions about T640

2016-04-11 Thread Alireza Soltanian
 

Thank you all for your concern.

Anyway, is there anyone who can actually answer my questions about the features I
asked about before?


1-  GRE Tunnels
2-  GRE Keepalive (OAM)
3-  802.1q over 802.1ad
4-  MPLS TE
5-  ATOM VLAN re-write

 

Thank you all





From: gbrown.k...@gmail.com [mailto:gbrown.k...@gmail.com] On Behalf Of Graham 
Brown
Sent: Monday, April 11, 2016 12:28 PM
To: Jimmy <hngji...@gmail.com>
Cc: Alireza Soltanian <soltan...@gmail.com>; Juniper List 
<juniper-nsp@puck.nether.net>
Subject: Re: [j-nsp] Questions about T640

 

I'm sorry, but I'd agree with the others in that I'd replace an M series with 
an MX or a T series with the PTX.

 

As Jimmy has shown, the vast majority of parts have been EOL'd and if I were 
investing in a new device it would be the MX or PTX.

 

I'm very surprised that the T Series works out cheaper, although I'm not sure 
what discounts you have been quoted so this may have a significant impact.

 

If it were my network, I'd go with the MX480 or 960 as PE.

 

HTH,

Graham




Graham Brown

Twitter - @mountainrescuer <https://twitter.com/#!/mountainrescuer> 

LinkedIn <http://www.linkedin.com/in/grahamcbrown> 

 

On 11 April 2016 at 19:10, Jimmy <hngji...@gmail.com 
<mailto:hngji...@gmail.com> > wrote:

resend

I'm sorry,
For the T640, how about this announcement?
https://gallery.mailchimp.com/1466897c24c515e6739f14c9e/files/TSB16819.pdf

On Mon, Apr 11, 2016 at 2:55 PM, Alireza Soltanian <soltan...@gmail.com 
<mailto:soltan...@gmail.com> >
wrote:


> No T640 and T4000 are still manufactured. Anyway is there anybody who can
> provide answer?
> On Apr 11, 2016 11:24 AM, "Fredrik Korsbäck" <hu...@nordu.net 
> <mailto:hu...@nordu.net> > wrote:
>
> > You do realize that most of T-series is EOL?
> >
> > Hugge@ as2603
> >
> > > 11 Apr 2016 kl. 07:02 skrev Alireza Soltanian <soltan...@gmail.com 
> > > <mailto:soltan...@gmail.com> >:
> > >
> > > Hi
> > >
> > > Yes the price of MX Series is much higher.
> > >
> > >
> > >
> > > From: Josh Reynolds [mailto:j...@kyneticwifi.com 
> > > <mailto:j...@kyneticwifi.com> ]
> > > Sent: Monday, April 11, 2016 9:27 AM
> > > To: Alireza Soltanian <soltan...@gmail.com <mailto:soltan...@gmail.com> >
> > > Cc: Juniper List <juniper-nsp@puck.nether.net 
> > > <mailto:juniper-nsp@puck.nether.net> >
> > > Subject: Re: [j-nsp] Questions about T640
> > >
> > >
> > >
> > > Any reason to not go with the MX line here?
> > >
> > > On Apr 10, 2016 11:55 PM, "Alireza Soltanian" <soltan...@gmail.com 
> > > <mailto:soltan...@gmail.com> 
> > <mailto:soltan...@gmail.com <mailto:soltan...@gmail.com> > > wrote:
> > >
> > > Hi everybody
> > >
> > > We are going to change our M320 router with T640. There are some
> concerns
> > > about supported features on T640. We need to have following features on
> > this
> > > router:
> > >
> > >
> > >
> > > 1-  GRE Tunnels
> > >
> > > 2-  GRE Keepalive (OAM)
> > >
> > > 3-  802.1q over 802.1ad
> > >
> > > 4-  MPLS TE
> > >
> > > 5-  ATOM VLAN re-write
> > >
> > >
> > >
> > > We need to handle up to 80Gbps of traffic. Here is the setup:
> > >
> > > For Module setup we are going to use following FPC:
> > >
> > > -  T640-FPC3-ES
> > >
> > > For Physical connections we are going to use following PICs:
> > >
> > > 1-  PC-1XGE-TYPE3-XFP-IQ2
> > >
> > > 2-  PC-1XGE-XENPAK
> > >
> > > For GRE tunnels we are going to use following PICs:
> > >
> > > PC-Tunnel
> > >
> > > For Routing Engine following item is considered:
> > >
> > > -  RE- A-2000-4096-BB
> > >
> > > For SIB:
> > >
> > > -  SIB-I-T640-B
> > >
> > > For Control Board:
> > >
> > > -  CB-L-T
> > >
> > > And for Connector interface panel:
> > >
> > > -  CIP-L-T640-S
> > >
> > > Is there anybody here who can guide me is this setup good or not and
> can
> > I
> > > have the mentioned feature with T640? Also is there any note about
> > required
> > > JunOS version of this setup?
> > >
> > >
> > >
> > > Thank you

Re: [j-nsp] Questions about T640

2016-04-11 Thread Alireza Soltanian
No, the T640 and T4000 are still manufactured. Anyway, is there anybody who can
provide an answer?
On Apr 11, 2016 11:24 AM, "Fredrik Korsbäck" <hu...@nordu.net> wrote:

> You do realize that most of T-series is EOL?
>
> Hugge@ as2603
>
> > 11 Apr 2016 kl. 07:02 skrev Alireza Soltanian <soltan...@gmail.com>:
> >
> > Hi
> >
> > Yes the price of MX Series is much higher.
> >
> >
> >
> > From: Josh Reynolds [mailto:j...@kyneticwifi.com]
> > Sent: Monday, April 11, 2016 9:27 AM
> > To: Alireza Soltanian <soltan...@gmail.com>
> > Cc: Juniper List <juniper-nsp@puck.nether.net>
> > Subject: Re: [j-nsp] Questions about T640
> >
> >
> >
> > Any reason to not go with the MX line here?
> >
> > On Apr 10, 2016 11:55 PM, "Alireza Soltanian" <soltan...@gmail.com
> <mailto:soltan...@gmail.com> > wrote:
> >
> > Hi everybody
> >
> > We are going to change our M320 router with T640. There are some concerns
> > about supported features on T640. We need to have following features on
> this
> > router:
> >
> >
> >
> > 1-  GRE Tunnels
> >
> > 2-  GRE Keepalive (OAM)
> >
> > 3-  802.1q over 802.1ad
> >
> > 4-  MPLS TE
> >
> > 5-  ATOM VLAN re-write
> >
> >
> >
> > We need to handle up to 80Gbps of traffic. Here is the setup:
> >
> > For Module setup we are going to use following FPC:
> >
> > -  T640-FPC3-ES
> >
> > For Physical connections we are going to use following PICs:
> >
> > 1-  PC-1XGE-TYPE3-XFP-IQ2
> >
> > 2-  PC-1XGE-XENPAK
> >
> > For GRE tunnels we are going to use following PICs:
> >
> > PC-Tunnel
> >
> > For Routing Engine following item is considered:
> >
> > -  RE- A-2000-4096-BB
> >
> > For SIB:
> >
> > -  SIB-I-T640-B
> >
> > For Control Board:
> >
> > -  CB-L-T
> >
> > And for Connector interface panel:
> >
> > -  CIP-L-T640-S
> >
> > Is there anybody here who can guide me is this setup good or not and can
> I
> > have the mentioned feature with T640? Also is there any note about
> required
> > JunOS version of this setup?
> >
> >
> >
> > Thank you
> >
> >
> >
> >
> >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Questions about T640

2016-04-10 Thread Alireza Soltanian
Hi

Yes, the price of the MX Series is much higher.

 

From: Josh Reynolds [mailto:j...@kyneticwifi.com] 
Sent: Monday, April 11, 2016 9:27 AM
To: Alireza Soltanian <soltan...@gmail.com>
Cc: Juniper List <juniper-nsp@puck.nether.net>
Subject: Re: [j-nsp] Questions about T640

 

Any reason to not go with the MX line here?

On Apr 10, 2016 11:55 PM, "Alireza Soltanian" <soltan...@gmail.com 
<mailto:soltan...@gmail.com> > wrote:

Hi everybody

We are going to replace our M320 router with a T640. There are some concerns
about supported features on the T640. We need the following features on this
router:



1-  GRE Tunnels

2-  GRE Keepalive (OAM)

3-  802.1q over 802.1ad

4-  MPLS TE

5-  ATOM VLAN re-write



We need to handle up to 80Gbps of traffic. Here is the setup:

For the module setup we are going to use the following FPC:

-  T640-FPC3-ES

For physical connections we are going to use the following PICs:

1-  PC-1XGE-TYPE3-XFP-IQ2

2-  PC-1XGE-XENPAK

For GRE tunnels we are going to use the following PIC:

PC-Tunnel

For the Routing Engine, the following item is considered:

-  RE-A-2000-4096-BB

For SIB:

-  SIB-I-T640-B

For Control Board:

-  CB-L-T

And for Connector interface panel:

-  CIP-L-T640-S

Is there anybody here who can tell me whether this setup is good or not, and
whether I can get the mentioned features with the T640? Also, is there any note
about the required Junos version for this setup?



Thank you





___
juniper-nsp mailing list juniper-nsp@puck.nether.net 
<mailto:juniper-nsp@puck.nether.net> 
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Questions about T640

2016-04-10 Thread Alireza Soltanian
Hi everybody

We are going to replace our M320 router with a T640. There are some concerns
about supported features on the T640. We need the following features on this
router:

 

1-  GRE Tunnels

2-  GRE Keepalive (OAM)

3-  802.1q over 802.1ad

4-  MPLS TE

5-  ATOM VLAN re-write
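
For reference, a minimal sketch of the kind of GRE tunnel configuration in
question (item 1), assuming a Tunnel PIC in FPC0/PIC1 so that a gr-0/1/0
interface exists, and with purely hypothetical addresses:

set interfaces gr-0/1/0 unit 0 tunnel source 192.0.2.1
set interfaces gr-0/1/0 unit 0 tunnel destination 192.0.2.2
set interfaces gr-0/1/0 unit 0 family inet address 10.0.0.1/30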

 

We need to handle up to 80Gbps of traffic. Here is the setup:

For the module setup we are going to use the following FPC:

-  T640-FPC3-ES

For physical connections we are going to use the following PICs:

1-  PC-1XGE-TYPE3-XFP-IQ2

2-  PC-1XGE-XENPAK

For GRE tunnels we are going to use the following PIC:

PC-Tunnel

For the Routing Engine, the following item is considered:

-  RE-A-2000-4096-BB

For SIB:

-  SIB-I-T640-B

For Control Board: 

-  CB-L-T

And for Connector interface panel:

-  CIP-L-T640-S

Is there anybody here who can tell me whether this setup is good or not, and
whether I can get the mentioned features with the T640? Also, is there any note
about the required Junos version for this setup?
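
For items 3 to 5, a rough sketch of what the corresponding Junos configuration
usually looks like; the interface names, VLAN IDs, and addresses below are
hypothetical and have not been verified against the exact FPC/PIC combination
listed above.

802.1q over 802.1ad (an 802.1ad outer tag carrying an 802.1q inner tag):

set interfaces xe-1/0/0 flexible-vlan-tagging
set interfaces xe-1/0/0 unit 100 vlan-tags outer 0x88a8.100 inner 200
set interfaces xe-1/0/0 unit 100 family inet address 10.1.1.1/30

MPLS TE via an RSVP-signaled LSP:

set protocols rsvp interface all
set protocols mpls interface all
set protocols mpls label-switched-path to-remote-pe to 10.255.0.2

Pseudowire (AToM-style) VLAN rewrite on a vlan-ccc unit:

set interfaces ge-2/0/0 flexible-vlan-tagging
set interfaces ge-2/0/0 encapsulation flexible-ethernet-services
set interfaces ge-2/0/0 unit 300 encapsulation vlan-ccc
set interfaces ge-2/0/0 unit 300 vlan-id 300
set interfaces ge-2/0/0 unit 300 input-vlan-map swap
set interfaces ge-2/0/0 unit 300 input-vlan-map vlan-id 400
set interfaces ge-2/0/0 unit 300 output-vlan-map swap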

 

Thank you

 

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Strange Log about GRE Keepalive

2016-01-04 Thread Alireza Soltanian
Thanks for the explanation.

 

I don't have a public IP address on this router. I installed some 10GE PICs in
other FPCs (2, 3, 4). The source of the GRE tunnels is the IP addresses of those
PICs, but the GRE tunnel itself is configured on a PIC in FPC0 or FPC1.

Anyway, the keepalive mechanism works fine: it reacts to losing the keepalive and
invalidates OSPF neighbors and routes as soon as keepalives stop arriving from
the neighbor. The neighbor routers are all Cisco IOS routers.

There is no operational problem, but I want to suppress this log.

 

From: Alireza Soltanian [mailto:soltan...@gmail.com] 
Sent: Monday, January 4, 2016 2:55 PM
To: 'juniper-nsp@puck.nether.net' <juniper-nsp@puck.nether.net>
Cc: 'rdobb...@arbor.net' <rdobb...@arbor.net>
Subject: RE: Strange Log about GRE Keepalive

 

Hi

I did not understand what you are saying. Anyway, I personally installed the
modules in the chassis, so I am sure there is no Tunnel PIC in FPC2, 3, or 4.

The GRE source/destination addresses are on interfaces that reside on other FPCs,
but the GRE tunnel interface is on FPC0 or FPC1.

Also, I must mention the FPC types are different:

 

FPC 0     REV 05   ---   --   M320 E3-FPC Type 3
  I3MB A  REV 04   ---   --   M320 E3-FPC I3 Mez Board
  I3MB B  REV 04   ---   --   M320 E3-FPC I3 Mez Board

FPC 1     REV 02   ---   --   M320 E3-FPC Type 3
  I3MB A  REV 06   ---   --   M320 E3-FPC I3 Mez Board
  I3MB B  REV 06   ---   --   M320 E3-FPC I3 Mez Board

FPC 2     REV 07   ---   --   M320 E2-FPC Type 1
  CPU     REV 04   ---   --   M320 FPC CPU

FPC 3     REV 05   ---   --   M320 E2-FPC Type 3
  CPU     REV 04   ---   --   M320 FPC CPU

FPC 4     REV 08   ---   --   M320 E2-FPC Type 3
  CPU     REV 04   ---   --   M320 FPC CPU

 

 

 

From: Alireza Soltanian [mailto:soltan...@gmail.com] 
Sent: Monday, January 4, 2016 2:39 PM
To: 'juniper-nsp@puck.nether.net' <juniper-nsp@puck.nether.net
<mailto:juniper-nsp@puck.nether.net> >
Subject: Strange Log about GRE Keepalive

 

Hi

On our M320 we always have this log:

 

fpc2 pfe doesn't support GRE Keepalives

fpc4 pfe doesn't support GRE Keepalives

fpc3 pfe doesn't support GRE Keepalives

 

The point is we don't have Tunnel PIC on these FPCs but we have on FPC0 and
FPC1. Also GRE keepalive was configured for tunnels on those PICs. Is there
any method for suppressing this log?

 

Thank you for your help and support

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] RPD Crash on M320

2016-01-04 Thread Alireza Soltanian
Hi everybody

Recently, we had continuous link flaps between our M320 and remote sites. We
have a lot of l2circuits between these sites on our M320. At one point we had a
crash of the rpd process, which led to the following log. I must mention the
link flap started at 12:10 AM and continued until 2:30 AM, but the crash
occurred at 12:30 AM.

 

Jan  3 00:31:04  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
10.237.253.168 is down, reason: received notification from peer

Jan  3 00:31:05  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
10.237.254.1 is down, reason: received notification from peer

Jan  3 00:31:05  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
10.237.253.120 is down, reason: received notification from peer

Jan  3 00:31:05  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received PRL
ACK message on non-active socket w/handle 0x1008af801c6

Jan  3 00:31:06  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
10.237.253.192 is down, reason: received notification from peer

Jan  3 00:31:28  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received PRL
ACK message on non-active socket w/handle 0x10046fa004e

 

Jan  3 00:32:18  apa-rtr-028 init: routing (PID 42128) terminated by signal
number 6. Core dumped!

Jan  3 00:32:18  apa-rtr-028 init: routing (PID 18307) started

Jan  3 00:32:18  apa-rtr-028 rpd[18307]: L2CKT acquiring mastership for
primary

Jan  3 00:32:18  apa-rtr-028 rpd[18307]: L2VPN acquiring mastership for
primary

Jan  3 00:32:20  apa-rtr-028 rpd[18307]: RPD_KRT_KERNEL_BAD_ROUTE: KRT: lost
ifl 0 for route (null)

Jan  3 00:32:20  apa-rtr-028 last message repeated 65 times

Jan  3 00:32:20  apa-rtr-028 rpd[18307]: L2CKT acquiring mastership for
primary

Jan  3 00:32:20  apa-rtr-028 rpd[18307]: Primary starts deleting all
L2circuit IFL Repository

Jan  3 00:32:20  apa-rtr-028 rpd[18307]: RPD_TASK_BEGIN: Commencing routing
updates, version 11.2R2.4, built 2011-09-01 06:53:31 UTC by builder

 

Jan  3 00:32:21  apa-rtr-028 mib2d[33413]: SNMP_TRAP_LINK_DOWN: ifIndex
1329, ifAdminStatus up(1), ifOperStatus down(2), ifName ae1.1041

Jan  3 00:32:21  apa-rtr-028 mib2d[33413]: SNMP_TRAP_LINK_DOWN: ifIndex
1311, ifAdminStatus up(1), ifOperStatus down(2), ifName ae1.1039

Jan  3 00:32:21  apa-rtr-028 mib2d[33413]: SNMP_TRAP_LINK_DOWN: ifIndex
1312, ifAdminStatus up(1), ifOperStatus down(2), ifName ae1.1038

 

The thing is, we always see this kind of log (except for the crash) on the
device. Is there any clue as to why the rpd process crashed? I don't have access
to JTAC, so I cannot have the dump analyzed.

The Junos version is 11.2R2.4.

 

Thank you for your help and support

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Strange Log about GRE Keepalive

2016-01-04 Thread Alireza Soltanian
Hi

On our M320 we always have this log:

 

fpc2 pfe doesn't support GRE Keepalives

fpc4 pfe doesn't support GRE Keepalives

fpc3 pfe doesn't support GRE Keepalives

 

The point is that we don't have a Tunnel PIC in these FPCs, but we do in FPC0 and
FPC1. Also, GRE keepalive is configured only for tunnels on those PICs. Is there
any method for suppressing this log?

 

Thank you for your help and support

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Strange Log about GRE Keepalive

2016-01-04 Thread Alireza Soltanian
Hi

I did not understand what you are saying. Anyway, I personally installed the
modules in the chassis, so I am sure there is no Tunnel PIC in FPC2, 3, or 4.

The GRE source/destination addresses are on interfaces that reside on other FPCs,
but the GRE tunnel interface is on FPC0 or FPC1.

Also, I must mention the FPC types are different:

 

FPC 0     REV 05   ---   --   M320 E3-FPC Type 3
  I3MB A  REV 04   ---   --   M320 E3-FPC I3 Mez Board
  I3MB B  REV 04   ---   --   M320 E3-FPC I3 Mez Board

FPC 1     REV 02   ---   --   M320 E3-FPC Type 3
  I3MB A  REV 06   ---   --   M320 E3-FPC I3 Mez Board
  I3MB B  REV 06   ---   --   M320 E3-FPC I3 Mez Board

FPC 2     REV 07   ---   --   M320 E2-FPC Type 1
  CPU     REV 04   ---   --   M320 FPC CPU

FPC 3     REV 05   ---   --   M320 E2-FPC Type 3
  CPU     REV 04   ---   --   M320 FPC CPU

FPC 4     REV 08   ---   --   M320 E2-FPC Type 3
  CPU     REV 04   ---   --   M320 FPC CPU

 

 

 

From: Alireza Soltanian [mailto:soltan...@gmail.com] 
Sent: Monday, January 4, 2016 2:39 PM
To: 'juniper-nsp@puck.nether.net' <juniper-nsp@puck.nether.net>
Subject: Strange Log about GRE Keepalive

 

Hi

On our M320 we always have this log:

 

fpc2 pfe doesn't support GRE Keepalives

fpc4 pfe doesn't support GRE Keepalives

fpc3 pfe doesn't support GRE Keepalives

 

The point is we don't have Tunnel PIC on these FPCs but we have on FPC0 and
FPC1. Also GRE keepalive was configured for tunnels on those PICs. Is there
any method for suppressing this log?

 

Thank you for your help and support

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RPD Crash on M320

2016-01-04 Thread Alireza Soltanian
Just asking. Anyway, any ideas about my comments? Also, is there any mechanism
or approach for dealing with this kind of situation?
On Jan 4, 2016 6:45 PM, "Niall Donaghy" <niall.dona...@geant.org> wrote:

> Reading the core dump is beyond my expertise I’m afraid.
>
>
>
> Br,
>
> Niall
>
>
>
> *From:* Alireza Soltanian [mailto:soltan...@gmail.com]
> *Sent:* 04 January 2016 15:14
> *To:* Niall Donaghy
> *Cc:* juniper-nsp@puck.nether.net
> *Subject:* RE: [j-nsp] RPD Crash on M320
>
>
>
> Hi
> Yes I checked the CPU graph and there was a spike on CPU load.
> The link was flappy 20 minutes before crash. Also it remained flappy two
> hours after this crash. During this time we can see LDP sessions go UP DOWN
> over and over. But the only time there was a crash was this time and there
> is no spike on CPU.
> I must mention we had another issue with another M320. Whenever a link
> flapped, CPU of RPD went high and all OSPF sessions reset. I found out the
> root cause for that. It was traceoption for LDP. For this box we dont use
> traceoption.
> Is there any way to read the dump?
>
> Thank you
>
> On Jan 4, 2016 6:34 PM, "Niall Donaghy" <niall.dona...@geant.org> wrote:
>
> Hi Alireza,
>
> It seemed to me this event could be related to the core dump: Jan  3
> 00:31:28  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received PRL ACK
> message on non-active socket w/handle 0x10046fa004e
> However upon further investigation
> (http://kb.juniper.net/InfoCenter/index?page=content&id=KB18195) I see
> these
> messages are normal/harmless.
>
> Do you have Cacti graphs of CPU utilisation for both REs, before the rpd
> crash? Link flapping may be giving rise to CPU hogging, leading to
> instability and subsequent rpd crash.
> Was the link particularly flappy just before the crash?
>
> Kind regards,
> Niall
>
>
>
>
> > -Original Message-
> > From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of
> > Alireza Soltanian
> > Sent: 04 January 2016 11:04
> > To: juniper-nsp@puck.nether.net
> > Subject: [j-nsp] RPD Crash on M320
> >
> > Hi everybody
> >
> > Recently, we had continuous link flap between our M320 and remote sites.
> We
> > have a lot of L2Circuits between these sites on our M320. At one point we
> had
> > crash on RPD process which lead to following log. I must mention the link
> flap
> > started at 12:10AM and it was continued until 2:30AM. But Crash was
> occurred
> > at 12:30AM.
> >
> >
> >
> > Jan  3 00:31:04  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.253.168 is down, reason: received notification from peer
> >
> > Jan  3 00:31:05  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.254.1 is down, reason: received notification from peer
> >
> > Jan  3 00:31:05  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.253.120 is down, reason: received notification from peer
> >
> > Jan  3 00:31:05  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received
> PRL
> ACK
> > message on non-active socket w/handle 0x1008af801c6
> >
> > Jan  3 00:31:06  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.253.192 is down, reason: received notification from peer
> >
> > Jan  3 00:31:28  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received
> PRL
> ACK
> > message on non-active socket w/handle 0x10046fa004e
> >
> >
> >
> > Jan  3 00:32:18  apa-rtr-028 init: routing (PID 42128) terminated by
> signal
> > number 6. Core dumped!
> >
> > Jan  3 00:32:18  apa-rtr-028 init: routing (PID 18307) started
> >
> > Jan  3 00:32:18  apa-rtr-028 rpd[18307]: L2CKT acquiring mastership for
> primary
> >
> > Jan  3 00:32:18  apa-rtr-028 rpd[18307]: L2VPN acquiring mastership for
> primary
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: RPD_KRT_KERNEL_BAD_ROUTE: KRT:
> > lost ifl 0 for route (null)
> >
> > Jan  3 00:32:20  apa-rtr-028 last message repeated 65 times
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: L2CKT acquiring mastership for
> primary
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: Primary starts deleting all
> L2circuit IFL
> > Repository
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: RPD_TASK_BEGIN: Commencing
> routing
> > updates, version 11.2R2.4, built 2011-09-01 06:53:31 UTC by builder
> >
> >
> >
> > Jan  3 00:32:21  apa-rtr-028 mib2d[33413]: SNMP_TRAP_LINK_DOWN: ifIndex
> > 1329, ifAdminStatus up(1), ifOperStatus down(2), ifName ae1.1041

Re: [j-nsp] Strange Log about GRE Keepalive

2016-01-04 Thread Alireza Soltanian
Thanks man I will check this...
On Jan 4, 2016 6:21 PM, "Niall Donaghy" <niall.dona...@geant.org> wrote:

> Hi Alireza,
>
> If you want to suppress this message from the on-box log files, you can do
> something like this on MX - not sure about M320:
>
> set system syslog   match "!(fpc.*pfe doesn't support
> GRE Keepalives)"
>
> To suppress the message from going to your syslog server, you can try this
> -
> but I haven't tested this myself:
>
> set system syslog   match "!(fpc.*pfe doesn't support
> GRE Keepalives)"
>
> Finally, this is not applicable in your case but might help someone
> searching the archive in future.
> If you have an MS-MIC card on MX-series, you can alter the syslog severity
> in the syslog process on the MS-MIC itself.
> We needed this workaround to stop the eventd process hogging CPU due to
> excessive error messages from the MS-MIC using Netflow v9 with mpls-ipv4
> template; the errors are 'cosmetic' but nevertheless hogging the main RE
> CPU, if generated.
>
> set chassis fpc 9 pic 0 adaptive-services service-package
> extension-provider syslog daemon critical
>
> HTH,
> Niall
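
A completed form of the two match statements above, as a sketch only; the log
file name "messages" and the host address are hypothetical placeholders for the
file and host names that are missing above:

set system syslog file messages match "!(fpc.*pfe doesn't support GRE Keepalives)"
set system syslog host 192.0.2.10 match "!(fpc.*pfe doesn't support GRE Keepalives)"
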
>
> > -Original Message-
> > From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of
> > Alireza Soltanian
> > Sent: 04 January 2016 12:02
> > To: juniper-nsp@puck.nether.net
> > Subject: Re: [j-nsp] Strange Log about GRE Keepalive
> >
> > Thanks for the explanation.
> >
> >
> >
> > I don't have public IP address on this router. I installed some 10GE PICs
> on other
> > FPCs(2,3,4). Source of the GRE tunnels is IP addresses of those PICs.
> > But GRE tunnel itself is configured on PIC in FPC0 or FPC1.
> >
> > Anyway Keepalive mechanism works fine and reacts to losing the Keepalive
> and
> > invalidates OSPF neighbors and routes as soon as NOT receiving Keepalive
> from
> > neighbor. Neighbor routers are all Cisco IOS routers.
> >
> > There is no problem in operation but I want to suppress this log.
> >
> >
> >
> > From: Alireza Soltanian [mailto:soltan...@gmail.com]
> > Sent: Monday, January 4, 2016 2:55 PM
> > To: 'juniper-nsp@puck.nether.net' <juniper-nsp@puck.nether.net>
> > Cc: 'rdobb...@arbor.net' <rdobb...@arbor.net>
> > Subject: RE: Strange Log about GRE Keepalive
> >
> >
> >
> > Hi
> >
> > I did not understand what are saying. Anyway I personally installed the
> modules
> > on the chassis so I am sure there is no PIC Tunnel on FPC2,3,4.
> >
> > GRE source destinations are on Interfaces which reside on other FPCs but
> GRE
> > tunnel interface is on FPC0 or FPC1.
> >
> > Also I must mention FPC type is different:
> >
> >
> >
> > FPC 0REV 05   ---   --M320 E3-FPC Type 3
> >
> >   I3MB A REV 04   ---   --M320 E3-FPC I3 Mez
> > Board
> >
> >   I3MB B REV 04   ---   --M320 E3-FPC I3 Mez
> > Board
> >
> > FPC 1REV 02   ---   --M320 E3-FPC Type 3
> >
> >   I3MB A REV 06   ---   --M320 E3-FPC I3 Mez
> > Board
> >
> >   I3MB B REV 06   ---   --M320 E3-FPC I3 Mez
> > Board
> >
> > FPC 2REV 07   ---   --M320 E2-FPC Type 1
> >
> >   CPUREV 04   ---   --M320 FPC CPU
> >
> > FPC 3REV 05   ---   --M320 E2-FPC Type 3
> >
> >   CPUREV 04   ---   --M320 FPC CPU
> >
> > FPC 4REV 08   ---   --M320 E2-FPC Type 3
> >
> >   CPUREV 04   ---   --M320 FPC CPU
> >
> >
> >
> >
> >
> >
> >
> > From: Alireza Soltanian [mailto:soltan...@gmail.com]
> > Sent: Monday, January 4, 2016 2:39 PM
> > To: 'juniper-nsp@puck.nether.net' <juniper-nsp@puck.nether.net
> > <mailto:juniper-nsp@puck.nether.net> >
> > Subject: Strange Log about GRE Keepalive
> >
> >
> >
> > Hi
> >
> > On our M320 we always have this log:
> >
> >
> >
> > fpc2 pfe doesn't support GRE Keepalives
> >
> > fpc4 pfe doesn't support GRE Keepalives
> >
> > fpc3 pfe doesn't support GRE Keepalives
> >
> >
> >
> > The point is we don't have Tunnel PIC on these FPCs but we have on FPC0
> and
> > FPC1. Also GRE keepalive was configured for tunnels on those PICs. Is
> there any
> > method for suppressing this log?
> >
> >
> >
> > Thank you for your help and support
> >
> >
> >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RPD Crash on M320

2016-01-04 Thread Alireza Soltanian
Hi
Yes, I checked the CPU graph and there was a spike in CPU load.
The link was flappy for 20 minutes before the crash, and it remained flappy for
two hours afterwards. During this time we can see LDP sessions go up and down
over and over, but the only time there was a crash was this one, and there is no
spike in CPU.
I must mention we had another issue with another M320: whenever a link flapped,
the rpd CPU went high and all OSPF sessions reset. I found the root cause for
that; it was a traceoptions configuration for LDP. On this box we don't use
traceoptions.
Is there any way to read the dump?

Thank you
On Jan 4, 2016 6:34 PM, "Niall Donaghy" <niall.dona...@geant.org> wrote:

> Hi Alireza,
>
> It seemed to me this event could be related to the core dump: Jan  3
> 00:31:28  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received PRL ACK
> message on non-active socket w/handle 0x10046fa004e
> However upon further investigation
> (http://kb.juniper.net/InfoCenter/index?page=content&id=KB18195) I see
> these
> messages are normal/harmless.
>
> Do you have Cacti graphs of CPU utilisation for both REs, before the rpd
> crash? Link flapping may be giving rise to CPU hogging, leading to
> instability and subsequent rpd crash.
> Was the link particularly flappy just before the crash?
>
> Kind regards,
> Niall
>
>
>
>
> > -Original Message-
> > From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
> Of
> > Alireza Soltanian
> > Sent: 04 January 2016 11:04
> > To: juniper-nsp@puck.nether.net
> > Subject: [j-nsp] RPD Crash on M320
> >
> > Hi everybody
> >
> > Recently, we had continuous link flap between our M320 and remote sites.
> We
> > have a lot of L2Circuits between these sites on our M320. At one point we
> had
> > crash on RPD process which lead to following log. I must mention the link
> flap
> > started at 12:10AM and it was continued until 2:30AM. But Crash was
> occurred
> > at 12:30AM.
> >
> >
> >
> > Jan  3 00:31:04  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.253.168 is down, reason: received notification from peer
> >
> > Jan  3 00:31:05  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.254.1 is down, reason: received notification from peer
> >
> > Jan  3 00:31:05  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.253.120 is down, reason: received notification from peer
> >
> > Jan  3 00:31:05  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received
> PRL
> ACK
> > message on non-active socket w/handle 0x1008af801c6
> >
> > Jan  3 00:31:06  apa-rtr-028 rpd[42128]: RPD_LDP_SESSIONDOWN: LDP session
> > 10.237.253.192 is down, reason: received notification from peer
> >
> > Jan  3 00:31:28  apa-rtr-028 /kernel: jsr_prl_recv_ack_msg(): received
> PRL
> ACK
> > message on non-active socket w/handle 0x10046fa004e
> >
> >
> >
> > Jan  3 00:32:18  apa-rtr-028 init: routing (PID 42128) terminated by
> signal
> > number 6. Core dumped!
> >
> > Jan  3 00:32:18  apa-rtr-028 init: routing (PID 18307) started
> >
> > Jan  3 00:32:18  apa-rtr-028 rpd[18307]: L2CKT acquiring mastership for
> primary
> >
> > Jan  3 00:32:18  apa-rtr-028 rpd[18307]: L2VPN acquiring mastership for
> primary
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: RPD_KRT_KERNEL_BAD_ROUTE: KRT:
> > lost ifl 0 for route (null)
> >
> > Jan  3 00:32:20  apa-rtr-028 last message repeated 65 times
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: L2CKT acquiring mastership for
> primary
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: Primary starts deleting all
> L2circuit IFL
> > Repository
> >
> > Jan  3 00:32:20  apa-rtr-028 rpd[18307]: RPD_TASK_BEGIN: Commencing
> routing
> > updates, version 11.2R2.4, built 2011-09-01 06:53:31 UTC by builder
> >
> >
> >
> > Jan  3 00:32:21  apa-rtr-028 mib2d[33413]: SNMP_TRAP_LINK_DOWN: ifIndex
> > 1329, ifAdminStatus up(1), ifOperStatus down(2), ifName ae1.1041
> >
> > Jan  3 00:32:21  apa-rtr-028 mib2d[33413]: SNMP_TRAP_LINK_DOWN: ifIndex
> > 1311, ifAdminStatus up(1), ifOperStatus down(2), ifName ae1.1039
> >
> > Jan  3 00:32:21  apa-rtr-028 mib2d[33413]: SNMP_TRAP_LINK_DOWN: ifIndex
> > 1312, ifAdminStatus up(1), ifOperStatus down(2), ifName ae1.1038
> >
> >
> >
> > The case is we always have this kind of log (except the Crash) on the
> device. Is
> > there any clue why RPD process crashed? I don't have access to JTAC so I
> cannot
> > analyze the dump.
> >
> > The JunOS version is : 11.2R2.4
> >
> >
> >
> > Thank you for your help and support
> >
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Strange log on M320

2015-10-17 Thread Alireza Soltanian
Hi everybody

Recently, we installed several FPCs and PICs on our M320 router. We already
have this setup:

 

FPC0   M320 E3-FPC Type 3
  PIC0   1x 10GE (LAN/WAN) IQ2
  PIC1   1x Tunnel (PC-Tunnel)

FPC1   M320 E3-FPC Type 3
  PIC0   1x 10GE (LAN/WAN) IQ2
  PIC1   1x Tunnel (PC-Tunnel)

FPC2   M320 E2-FPC Type 1
  PIC2   4x STM-1 SDH, SMIR
  PIC3   4x OC-3 SONET, SMIR

FPC3   M320 E2-FPC Type 3
  PIC0   1x 10GE (LAN), XENPAK
  PIC1   1x 10GE (LAN), XENPAK

FPC4   M320 E2-FPC Type 3
  PIC0   1x 10GE (LAN), XENPAK
  PIC1   No PIC

 

Everything works fine, but the funny part is that we see this log on the
console over and over:

 

fpc2 pfe doesn't support GRE Keepalives

fpc4 pfe doesn't support GRE Keepalives

fpc3 pfe doesn't support GRE Keepalives

 

We are using GRE keepalive, but we don't have a Tunnel PIC in the FPCs mentioned
in the log, and everything works fine. Is there anybody here who knows:

1-  Why do we have this log?

2-  How can we suppress this log from being generated?

 

Thank you for your help and support.

 

Alireza

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] No IFL found...

2015-09-28 Thread Alireza Soltanian
Hi everybody

 

A couple of days ago, I sent an email about GRE Keepalive on M10/M20.

I did some more tests on this case. I am using the PE-Tunnel PIC on the M10/M20.
The tunnel can be configured and works fine, but when I try to configure GRE
keepalive via OAM and check its statistics, I receive an error about the IFL not
being found.

I don't have this issue on the M320 with the PC-Tunnel PIC.

Is anybody here familiar with this issue?
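
For context, the OAM-based keepalive configuration being attempted looks roughly
like this; a sketch only, with a hypothetical gr- interface unit and example
timer values (the hold-time must be at least twice the keepalive-time):

set protocols oam gre-tunnel interface gr-1/2/0.0 keepalive-time 10
set protocols oam gre-tunnel interface gr-1/2/0.0 hold-time 30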

 

Thank you

Alireza

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] GRE Keepalive on M10/M20

2015-09-24 Thread Alireza Soltanian
Hi Everybody
We have several M10/M20 routers with PE-Tunnel PICs. I wonder how I can enable
GRE keepalive for GRE tunnels on them?
We also have an M320 with PC-Tunnel PICs. GRE keepalive works fine for GRE
tunnels on that PIC of the M320, but on the M10/M20 the related command was not
shown. When I configure it anyway, the command is accepted, but the GRE keepalive
functionality does not work and I get an error about an LFI interface.
I searched the Internet but could not find anything. I must mention we use
Junos 9 to 11 on these M10/M20 routers.

Thank you for your help and support...
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] GRE Keepalive on M10/M20

2015-09-24 Thread Alireza Soltanian
Does anybody have any suggestions?
On 24 Sep 2015 12:14, "Alireza Soltanian" <soltan...@gmail.com> wrote:

> Hi Everybody
> We have several M10/M20 with PE-Tunnel pics. I wonder how can I enable GRE
> keepalive for GRE tunnels?
> We also have M320 with PC-Tunnel pics. GRE keepalive works fine with GRE
> tunnels on this PIC of M320 but for M10/M20 the related command was not
> shown. Anyway When I configure it the command is accepted but GRE Keepalive
> functionality does not work and I have an error about LFI interface
> I searched the Internet but could not find anything. I must mention We use
> JunOS 9 to 11 on these M10/M20
>
> Thank you for your help and support...
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Suppressing SNMP Trap to just one packet

2015-09-09 Thread Alireza Soltanian
Hi
We are implementing SNMP traps on Juniper routers. The problem is that when an
event occurs, the device sends a trap for it several times (every one or two
minutes). Our trap receiver is connected to a mailing system which generates an
email upon receiving each trap, so a single event results in a lot of emails.
I am using the following configuration:

set snmp v3 vacm security-to-group security-model v2c security-name
sn_v2c_trap group gr_v2c_trap
set snmp v3 vacm access group gr_v2c_trap default-context-prefix
security-model v2c security-level none read-view all
set snmp v3 vacm access group gr_v2c_trap default-context-prefix
security-model v2c security-level none notify-view all
set snmp v3 target-address TA_v2c_trap address 10.10.10.100
set snmp v3 target-address TA_v2c_trap port 162
set snmp v3 target-address TA_v2c_trap tag-list TAG
set snmp v3 target-address TA_v2c_trap target-parameters TP_v2c_trap
set snmp v3 target-parameters TP_v2c_trap parameters
message-processing-model v2c
set snmp v3 target-parameters TP_v2c_trap parameters security-model v2c
set snmp v3 target-parameters TP_v2c_trap parameters security-level none
set snmp v3 target-parameters TP_v2c_trap parameters security-name
sn_v2c_trap
set snmp v3 target-parameters TP_v2c_trap notify-filter nf1
set snmp v3 notify v2c_notify type trap
set snmp v3 notify v2c_notify tag TAG
set snmp v3 notify-filter nf1 oid .1 include
set snmp v3 notify-filter nf1 oid .1.3.6.1.4.1.2636.4.5 exclude
set snmp v3 notify-filter nf1 oid 1.3.6.1.2.1.14.16.2.10 exclude
set snmp v3 notify-filter nf1 oid 1.3.6.1.4.1.2636.4.4.0 exclude
set snmp v3 snmp-community index1 community-name
"$9$nprbCtuOBIrlv69tO1RlegoJDkmP5Ftp0"
set snmp v3 snmp-community index1 security-name sn_v2c_trap
set snmp view all oid .1 include

Is there anyone here who can suggest a solution for sending just one trap
after an event, especially when a BGP neighbor goes down?
We are using various M Series routers and Junos versions from 9 to 12.

Thank you
Alireza
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] L2Circuit Backup does not switch back to Primary

2015-06-30 Thread Alireza Soltanian
Hi everybody

I am using L2Circuit connection between three routers:

1)  Juniper M10(R1)

2)  Cisco 7200VXR(R2)

3)  Cisco 7200VXR(R3)

The setup is follow:

1)  R1 is connected to R2

2)  R1 is connected to R3

3)  There is OSPF between them

4)  There is MPLS between them

5)  I want to configure an l2circuit between R1 and R2 while having a backup
l2circuit connection between R1 and R3.

6)  When the connection between R1 and R2 fails, the backup l2circuit
between R1 and R3 must take over.

7)  After the connection between R1 and R2 is re-established, traffic must
switch back to the main l2circuit connection.

 

This is the configuration on R1:

 

set interfaces ge-0/0/0 unit 3085 encapsulation vlan-ccc

set interfaces ge-0/0/0 unit 3085 vlan-id 3085

!

set protocols l2circuit neighbor 10.241.252.7 interface ge-0/0/0.3085
virtual-circuit-id 3085

set protocols l2circuit neighbor 10.241.252.7 interface ge-0/0/0.3085
no-control-word

set protocols l2circuit neighbor 10.241.252.7 interface ge-0/0/0.3085 mtu
1500

set protocols l2circuit neighbor 10.241.252.7 interface ge-0/0/0.3085
backup-neighbor 10.241.252.8 virtual-circuit-id 3085

!

In this configuration, 10.241.252.7 is the LDP router ID of R2 and
10.241.252.8 is the LDP router ID of R3.

 

The issue is as follows:

At first, everything is OK and I have this output on R1:

 

show l2circuit connections   

Neighbor: 10.241.252.7 

Interface Type  St Time last up  # Up trans

ge-0/0/0.3085(vc 3085)rmt   Up Jun 30 13:05:13 2015   1

  Remote PE: 10.241.252.7, Negotiated control-word: No

  Incoming label: 486563, Outgoing label: 670

  Negotiated PW status TLV: No

  Local interface: ge-0/0/0.3085, Status: Up, Encapsulation: VLAN

Neighbor: 10.241.252.8 

Interface Type  St Time last up  # Up trans

ge-0/0/0.3085(vc 3085)rmt   BK   

 

After the link to R2 fails, the output is this:

 

Neighbor: 10.241.252.7 

Interface Type  St Time last up  # Up trans

ge-0/0/0.3085(vc 3085)rmt   OL   

Neighbor: 10.241.252.8 

Interface Type  St Time last up  # Up trans

ge-0/0/0.3085(vc 3085)rmt   Up Jun 30 13:08:07 2015   1

  Remote PE: 10.241.252.8, Negotiated control-word: No

  Incoming label: 487043, Outgoing label: 15102

  Negotiated PW status TLV: No  

  Local interface: ge-0/0/0.3085, Status: Up, Encapsulation: VLAN

 

Now the link comes back up, but the l2circuit connection does not switch back to
the primary path, and I have this output:

 

Neighbor: 10.241.252.7 

Interface Type  St Time last up  # Up trans

ge-0/0/0.3085(vc 3085)rmt   BK   

Neighbor: 10.241.252.8 

Interface Type  St Time last up  # Up trans

ge-0/0/0.3085(vc 3085)rmt   Up Jun 30 13:08:07 2015   1

  Remote PE: 10.241.252.8, Negotiated control-word: No

  Incoming label: 487043, Outgoing label: 15102

  Negotiated PW status TLV: No  

  Local interface: ge-0/0/0.3085, Status: Up, Encapsulation: VLAN

 

This means the connection to R2 remains in backup (BK) state, whereas I need the
connection to R3 to go back to backup state and the connection to R2 to come back
up. I must also mention the L2VPN connection remains in the down state at R2.

Can you help me in this regard?
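
One thing that may be worth checking: as far as I recall, the l2circuit
backup-neighbor switchover is non-revertive unless a revert timer is configured.
A sketch only, assuming the revert-time statement is available in the Junos
release on R1 (the value in seconds is an example):

set protocols l2circuit neighbor 10.241.252.7 interface ge-0/0/0.3085 revert-time 60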

 

Thank you

Alireza

 

 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp