[j-nsp] Junos MPLS/LDP L2circuit label issue

2012-02-22 Thread Scott Harvanek
Hello folks, I'm trying to set up an l2circuit between an M20 running 
Junos 8.5 and a Cisco 1841 over two bonded T1s; the relevant 
configuration bits follow:


> show configuration interfaces lsq-3/0/0
per-unit-scheduler;
unit 3 {
encapsulation multilink-ppp;
family inet {
address 10.7.7.1/30;
}
family mpls;
}

> show configuration interfaces ge-2/0/0
vlan-tagging;
encapsulation vlan-ccc;
unit 777 {
encapsulation vlan-ccc;
vlan-id 777;
}

> show configuration protocols mpls
interface all;

> show configuration protocols ldp
interface lsq-3/0/0.3;
interface lo0.0;


> show configuration protocols l2circuit
neighbor 10.104.1.2 {
interface ge-2/0/0.777 {
virtual-circuit-id 777;
}
}

> show configuration protocols ospf
traffic-engineering;
export ospf;
area 0.0.0.0 {
interface lsq-3/0/0.3 {
authentication {
simple-password //## SECRET-DATA
}
}
}


> show ldp database
Input label database, 101.101.1.7:0--101.101.1.2:0
  Label Prefix
  3 101.101.1.2/32
 10 L2CKT CtrlWord VLAN VC 777

Output label database, 101.101.1.7:0--101.101.1.2:0
  Label Prefix
 100912 101.101.1.2/32
  3 101.101.1.7/32


> show l2circuit connections extensive
Layer-2 Circuit Connections:

Legend for connection status (St)
EI -- encapsulation invalid  NP -- interface h/w not present
MM -- mtu mismatch   Dn -- down
EM -- encapsulation mismatch VC-Dn -- Virtual circuit Down
CM -- control-word mismatch  Up -- operational
VM -- vlan id mismatch   CF -- Call admission control failure
OL -- no outgoing label  XX -- unknown
NC -- intf encaps not CCC/TCC
CB -- rcvd cell-bundle size bad

Legend for interface status
Up -- operational
Dn -- down
Neighbor: 101.101.1.2
Interface                 Type  St     Time last up           # Up trans
ge-2/0/0.777(vc 777)      rmt   Up     Feb 21 16:27:26 2012            1

  Local interface: ge-2/0/0.777, Status: Up, Encapsulation: VLAN
  Remote PE: 101.101.1.2, Negotiated control-word: Yes (Null)
  Incoming label: 100864, Outgoing label: 10
Time  Event   Interface/Lbl/PE
Feb 21 16:27:26 2012  status update timer
Feb 21 16:27:24 2012  PE route changed
Feb 21 16:27:24 2012  Out lbl Update10
Feb 21 16:27:24 2012  In lbl Update 100864
Feb 21 16:27:24 2012  loc intf up ge-2/0/0.777


As you can see, the Juniper thinks the VC is up, but it is not advertising 
the label in its LDP output database, so the Cisco side never sees the 
circuit come up or learns the label information.
At this point I believe the Cisco is operating properly, since it is 
sending its L2CKT information.


The Juniper does install everything in mpls.0:

> show route table mpls

mpls.0: 7 destinations, 7 routes (7 active, 0 holddown, 0 hidden)
Restart Complete
+ = Active Route, - = Last Active, * = Both

0  *[MPLS/0] 01:24:50, metric 1
  Receive
1  *[MPLS/0] 01:24:50, metric 1
  Receive
2  *[MPLS/0] 01:24:50, metric 1
  Receive
100864 *[L2CKT/7] 00:25:55
 via ge-2/0/0.777, Pop   Offset: 4
100912 *[LDP/9] 00:25:55, metric 1
 via lsq-3/0/0.3, Pop
100912(S=0)*[LDP/9] 00:25:55, metric 1
 via lsq-3/0/0.3, Pop
ge-2/0/0.777   *[L2CKT/7] 00:25:55, metric2 1
 via lsq-3/0/0.3, Push 10 Offset: -4


I've tried many different traceoptions, but nothing has produced an error 
message that can be acted upon. Does anyone have any idea why a label that 
exists both on the l2circuit and in mpls.0 would not be inserted into the 
LDP database like this?
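
In case it helps anyone reproduce this, the state I keep comparing is the 
LDP session, the LDP database, and the l2circuit/mpls.0 entries pasted 
above; roughly the following (only 'show ldp session detail' is not 
included in this message):

> show ldp session detail
> show ldp database
> show l2circuit connections extensive
> show route table mpls.0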


thanks guys,

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 AC power strip

2012-08-23 Thread Scott Harvanek
You can easily get 30 A PDUs with L6-20R outlets, which is what Juniper 
recommends for the MX960...


e.g. 
http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7893


Geist, ServerTech, and others also make many options.

-Scott H.
-Login Inc.
On 08/23/2012 07:59 AM, JA wrote:

Hi

I need advice if someone is having an MX960 up on AC power.

Usually, high-capacity (32 A) power bars (PDUs) come with C13 or C19 outlets,
while Juniper has no provision for such power cords. If European power
cords are ordered with the MX960, the CEE 7/7 plug can be connected to Schuko
outlets, but there is no Schuko PDU that supports more than 16 A. One can
easily exceed 16 A if two power supplies are connected to the same PDU.

Can anyone recommend an alternative, or has anyone faced a similar situation?
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] FIB Capacity on older platform

2012-12-15 Thread Scott Harvanek
We have a few older M20s still in service; with the RE-3.0 we hold 2.4M 
routes (241k+ active), with the RE at 60% memory usage and the SSB at 52%.


Not bad for such an old box.

-SH
On 12/15/2012 10:34 AM, Michael Loftis wrote:

FIB capacity is determined solely by the FEB/CFEB on those platforms. I am a 
bit rusty on that line and not in front of one, but I believe it is under 
'show chassis feb'. In terms of number of routes, I don't think there is a 
direct correlation, because the FIB depends on your number of peers and 
interfaces.

Sent from my iPhone

On Dec 15, 2012, at 7:45, Robert Hass robh...@gmail.com wrote:


Hi
What is the maximum FIB capacity on older M-series platforms? E.g. a Juniper
M5 w/RE-600 or a Juniper M20.

Rob
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] EX-2500 Rebooting at will

2013-05-31 Thread Scott Harvanek
We have two EX2500s that both seem to reboot at will and indicate a power 
cycle as the reason for the last reload, but both have dual power feeds, 
neither has had a power interruption on either supply, and they are on 
different power sources. It seems it may be related to the transceivers: 
the last time it happened we had just run a show of the transceiver list 
when the switch rebooted. Has anyone else experienced this?


( we are running 3.1R2 )
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MX80 / 3D MIC buffers/queues

2013-11-07 Thread Scott Harvanek
Does anyone know if there is a way to see how much buffer/queue space is 
being used for shaping policies on the MX80 / MIC-3D-20SFP? I can see queue 
status, but I'm more interested in how much memory is being consumed for 
shaping.


We apply shaping policies per unit on interfaces, and we have _a lot_ of 
them. I'm wondering whether there is any limit on how many interfaces can 
be shaped reliably, or how we can check buckets/buffers per physical port 
to ensure we are not overflowing and losing the shaping ability.


Hopefully that question makes sense, thanks.

--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX80 / 3D MIC buffers/queues

2013-11-07 Thread Scott Harvanek

Thanks!

So here's what I got. Does this mean I'm not even at 1% utilization, even 
with 1866 buffers in use?


##

> request pfe execute command show qxchip 0 memory target tfeb0
SENT: Ukern command: show qxchip 0 memory
GOT:
GOT: QX Linkram : 0
GOT:    Total buffers in use: 5  (0%)
GOT:         Bank 0 in use: 1  (0%)
GOT:         Bank 1 in use: 4  (0%)
GOT:    Use meter regions:
GOT:      region    up-threshold    down-threshold
GOT:      ------    ------------    --------------
GOT:        0            68%               0%    <--- current region
GOT:        1            87%              65%
GOT:        2            93%              83%
GOT:        3           100%              89%
GOT: QX Linkram : 1
GOT:    Total buffers in use: 1866  (0%)
GOT:         Bank 0 in use: 977  (0%)
GOT:         Bank 1 in use: 889  (0%)
GOT:    Use meter regions:
GOT:      region    up-threshold    down-threshold
GOT:      ------    ------------    --------------
GOT:        0            68%               0%    <--- current region
GOT:        1            87%              65%
GOT:        2            93%              83%
GOT:        3           100%              89%
LOCAL: End of file

##

Scott H.

On 11/7/13, 10:15 AM, Nikita Shirokov wrote:
In Trio, the QX chip is responsible for H-QoS. You can check its memory 
utilization with this command:


hostname> request pfe execute command show qxchip 0 memory target tfeb0
SENT: Ukern command: show qxchip 0 memory
GOT:
GOT: QX Linkram : 0
GOT:    Total buffers in use: 6  (0%)
GOT:         Bank 0 in use: 3  (0%)
GOT:         Bank 1 in use: 3  (0%)
GOT:    Use meter regions:
GOT:      region    up-threshold    down-threshold
GOT:      ------    ------------    --------------
GOT:        0            68%               0%    <--- current region
GOT:        1            87%              65%
GOT:        2            93%              83%
GOT:        3           100%              89%
GOT: QX Linkram : 1
GOT:    Total buffers in use: 6  (0%)
GOT:         Bank 0 in use: 3  (0%)
GOT:         Bank 1 in use: 3  (0%)
GOT:    Use meter regions:
GOT:      region    up-threshold    down-threshold
GOT:      ------    ------------    --------------
GOT:        0            68%               0%    <--- current region
GOT:        1            87%              65%
GOT:        2            93%              83%
GOT:        3           100%              89%
LOCAL: End of file



2013/11/7 Scott Harvanek scott.harva...@login.com


Does anyone know if there is there a way to see how much buffer
space/queue space is being used for shaping policies on the MX80 /
MIC-3D-20SFP?  I can see queue status but I'm more interested in
how much memory is being consumed for shaping.

We apply some shaping policies per unit on interfaces and we have
_a lot_ of them, I'm wondering if there is any sort of limit of
how many interfaces can be shaped reliably or how we can check
buckets/buffers per physical port to ensure we are not overflowing
/ losing the shaping ability.

Hopefully that question makes sense, thanks.

-- 
Scott H.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] M-series IPSEC / SP interface and VRF

2013-11-09 Thread Scott Harvanek
Is there a way to build an IPsec tunnel / service interface where the 
local gateway is NOT in the same routing-instance as the service interface?


Here's what I'm trying to do;

[ router A (SRX) ] == Switch / IS-IS mesh == [ router B m10i ]
[ st0.0 / VRF ] = [ sp-0/0/0.0 / VRF ]

The problem is, I want sp-0/0/0.0 on router B in a VRF but NOT the outside 
interface on router B. I cannot commit unless the outside/local-gateway on 
the IPsec tunnel is in the same routing-instance as the service interface. 
Is there a way around this? The SRX devices can do this without issue.


service-set  {
    interface-service {
        service-interface sp-0/0/0.0;   <-- want this in a VRF
    }
    ipsec-vpn-options {
        local-gateway x.x.x.x;          <-- default routing instance
    }
    ipsec-vpn-rules 
}

--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M-series IPSEC / SP interface and VRF

2013-11-12 Thread Scott Harvanek

Anyone with any ideas on this?

Scott H.

On 11/9/13, 12:58 PM, Scott Harvanek wrote:
Is there a way to build a IPSec tunnel / service interface where the 
local gateway is NOT in the same routing-instance as the service 
interface?


Here's what I'm trying to do;

[ router A (SRX) ] == Switch / IS-IS mesh == [ router B m10i ]
[ st0.0 / VRF ] = [ sp-0/0/0.0 / VRF ]

The problem is, I want sp-0/0/0.0 on router B in a VRF but NOT the 
outside interface on router B, I cannot commit unless the 
outside/local-gateway on the IPSec tunnel is in the same 
routing-instance as the service interface, is there a way around this? 
The SRX devices can do this without issue.


service-set  {
interface-service {
service-interface sp-0/0/0.0; -- want this in a VRF
}
ipsec-vpn-options {
local-gateway x.x.x.x; -- default routing instance
}
ipsec-vpn-rules 
}



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M-series IPSEC / SP interface and VRF

2013-11-12 Thread Scott Harvanek

Alex,

Yeah, I tried this, but it looks like you can't set it to the default inet.0 
instance, only to other instances... The local gateway in my case is in the 
default instance and I want the service interface in another, so unless I'm 
mistaken it is in the default instance by default and this fails?


Scott H.

On 11/12/13, 11:22 AM, Alex Arseniev wrote:

Yes

[edit]
aarseniev@m120# set services service-set SS1 ipsec-vpn-options 
local-gateway ?

Possible completions:
  address              Local gateway address
  routing-instance     Name of routing instance that hosts local gateway   <= CHECK THIS OUT!!!

aarseniev@m120> show version
Hostname: m120
Model: m120
JUNOS Base OS boot [10.4S7.1]

HTH
Thanks
Alex

On 12/11/2013 16:05, Scott Harvanek wrote:

Anyone with any ideas on this?

Scott H.

On 11/9/13, 12:58 PM, Scott Harvanek wrote:
Is there a way to build a IPSec tunnel / service interface where the 
local gateway is NOT in the same routing-instance as the service 
interface?


Here's what I'm trying to do;

[ router A (SRX) ] == Switch / IS-IS mesh == [ router B m10i ]
[ st0.0 / VRF ] = [ sp-0/0/0.0 / VRF ]

The problem is, I want sp-0/0/0.0 on router B in a VRF but NOT the 
outside interface on router B, I cannot commit unless the 
outside/local-gateway on the IPSec tunnel is in the same 
routing-instance as the service interface, is there a way around 
this? The SRX devices can do this without issue.


service-set  {
interface-service {
service-interface sp-0/0/0.0; -- want this in a VRF
}
ipsec-vpn-options {
local-gateway x.x.x.x; -- default routing instance
}
ipsec-vpn-rules 
}



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M-series IPSEC / SP interface and VRF

2013-11-12 Thread Scott Harvanek

Yep excellent, I'll give it a whirl, thanks!

Scott H.

On 11/12/13, 1:24 PM, Alex Arseniev wrote:
So, if I understand Your requirement, You want sp-0/0/0.unit in VRF, 
correct?

And outgoing GE interface in inet.0?
And where the decrypted packets should be placed, inet.0 or VRF?
And where should the to-be-encrypted packets arrive from, inet.0 or the VRF?
If the answers are correct / inet.0 / VRF / VRF, then migrate to 
next-hop-style IPSec and place the inside sp-* unit into the VRF, leaving 
the outside sp-* unit in inet.0.

HTH
Thanks
Alex



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] per-unit-scheduling, vlan shaping, MX480

2013-11-14 Thread Scott Harvanek

Hey guys,

What's the correct MIC/MPC combination to support per-VLAN shaping? (The 
MPC/MIC supported-feature docs are a bit confusing on this.) We're having 
success with an MX80 sporting a MIC-3D-20GE-SFP, but we're looking to add an 
MX480 to replace some aging hardware and would like the same ability there. 
I'm assuming I need at least an MPC1-Q with the same MIC (preferably an 
MPC2-Q)?


--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M-series IPSEC / SP interface and VRF

2013-12-17 Thread Scott Harvanek
So this works to establish the tunnels; the problem is that BGP-received 
routes over the tunnel do not function correctly. The routes are properly 
installed in the VRF, but traffic to those destinations does not pass. Does 
anyone have experience running BGP like this on the M-series, or does it 
just not work with next-hop style?
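
For context, the next-hop-style arrangement looks roughly like the sketch 
below; the interface units, the service-set/instance names and the addresses 
are placeholders rather than my actual config:

interfaces {
    sp-0/0/0 {
        unit 1 {
            family inet;
            service-domain inside;    # this unit sits in the VRF
        }
        unit 2 {
            family inet;
            service-domain outside;   # this unit stays in inet.0
        }
    }
}
services {
    service-set VPN-SS {
        next-hop-service {
            inside-service-interface sp-0/0/0.1;
            outside-service-interface sp-0/0/0.2;
        }
        ipsec-vpn-options {
            local-gateway x.x.x.x;
        }
        ipsec-vpn-rules <rule-name>;
    }
}
routing-instances {
    CUST-VRF {
        instance-type vrf;
        interface sp-0/0/0.1;
    }
}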


Thanks,
-SH

On 11/12/13, 1:34 PM, Scott Harvanek wrote:

Yep excellent, I'll give it a whirl, thanks!

Scott H.

On 11/12/13, 1:24 PM, Alex Arseniev wrote:
So, if I understand Your requirement, You want sp-0/0/0.unit in 
VRF, correct?

And outgoing GE interface in inet.0?
And where the decrypted packets should be placed, inet.0 or VRF?
And where should the to-be-encrypted packets arrive from, inet.0 or the VRF?
If the answers are correct / inet.0 / VRF / VRF, then migrate to 
next-hop-style IPSec and place the inside sp-* unit into the VRF, leaving 
the outside sp-* unit in inet.0.

HTH
Thanks
Alex





___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M-series IPSEC / SP interface and VRF

2013-12-17 Thread Scott Harvanek
BGP is running in the tunnel and the next hop is the far side of the tunnel; 
everything looks correct. All the routes show the far end of the tunnel as 
the next hop and BGP is established inside the VRF, but traffic will not 
pass except directly between the two endpoints, e.g. BGP/ICMP on the tunnel 
subnet. I'm at a loss.


I'll pull some info and post it back, maybe someone sees something I don't.

Scott H.

On 12/17/13, 12:27 PM, Alex Arseniev wrote:
For the traffic to be encrypted, the BGP nexthop has to point into the 
tunnel, which means one of the following:

1/ BGP has to run inside the tunnel, or
2/ You have to have a BGP import policy to change the nexthop to the 
tunnel's remote address. If this is eBGP, then also add the 
accept-remote-nexthop knob.

HTH
Thanks
Alex

On 17/12/2013 16:08, Scott Harvanek wrote:
So this works to establish the tunnels, the problem is, BGP received 
routes over the tunnel do not function correctly.  The routes are 
properly installed in the VRF but traffic to those destinations does 
not pass correctly. Does anyone have any experience running BGP like 
this on the m-series or does it just not work on next-hop-style?


Thanks,
-SH





___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] M-series IPSEC / SP interface and VRF

2014-01-26 Thread Scott Harvanek
So I finally got some time to work on this again, and we found the problem:
it was the destination match; I needed to switch to any/any matching instead.
It appears that destination-address and next-hop do not mean the same thing:
the actual destination needs to match the destination address, not the next
hop.

So, referencing the any/any special case section on
http://www.juniper.net/techpubs/en_US/junos10.4/topics/usage-guidelines/services-configuring-ipsec-rules.html#id-12180015

I removed my destination match and voilà, I can now ping across
appropriately using BGP-learned routes.
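
For reference, a minimal sketch of the shape the rule ends up with (the
rule, term and policy names are placeholders, and this is an illustration
rather than my exact config): with no destination match in the term, the
rule falls into the any/any handling described in the doc above.

services {
    ipsec-vpn {
        rule VPN-RULE {
            term ANY {
                then {
                    remote-gateway y.y.y.y;
                    dynamic {
                        ike-policy IKE-POLICY;
                        ipsec-policy IPSEC-POLICY;
                    }
                }
            }
            match-direction input;
        }
    }
}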

Alex,

Thanks for the help on just getting the stuff going with next-hop style
and the VRF, I really appreciate that.

-SH

On 12/18/13, 4:55 AM, Alex Arseniev wrote:
 And what happens if You ping a destination IP known via BGP across the
 tunnel but with different src.ip?

 ping routing-instance VRFname dst.ip source whatever

 This src.ip must be known by/reachable from far end.
 HTH
 Thanks
 Alex

 On 17/12/2013 20:40, Scott Harvanek wrote:
 BGP is running in the tunnel and the next hop is the far side of the
 tunnel, everything looks correct. All the routes show the far end of
 the tunnel and BGP is established inside the VRF but traffic will not
 pass except of traffic directly between the two endpoints. E.g.
 BGP/ICMP on the tunnel subnet.  I'm at a loss.

 I'll pull some info and post it back, maybe someone sees something I
 don't.

 Scott H.

 On 12/17/13, 12:27 PM, Alex Arseniev wrote:
 For the traffic to be encrypted, the BGP nexthop has to point into
 the tunnel which means one of the below:
 1/ BGP has to run inside the tunnel, or
 2/ You have to have a BGP import policy to change the nexthop to
 tunnel's remote address. If this is eBGP, then also add
 accept-remote-nexthop knob.
 HTH
 Thanks
 Alex

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] MX VC ISSU

2014-04-17 Thread Scott Harvanek

Does anyone know if ISSU will ever be supported on an MX virtual-chassis?

It's kind of a show-stopper to have to reboot the whole VC for an upgrade.

--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX VC ISSU

2014-04-24 Thread Scott Harvanek

Morgan,

Yeah, we've successfully done this in the lab with the MXes by breaking the 
VC. I guess it's better than nothing. The VC setup on the MXes with 
NSR/graceful failover is pretty awesome, and I don't want to abandon VC just 
because of this, but at the same time I don't want to wait to deploy this 
cluster until 14.X is stable and ISSU is available.


Ugh, decisions.

Scott H.

On 4/24/14, 3:42 PM, Morgan McLean wrote:
People get into these kinds of situations on the SRXes as well. I've done 
things where I upgrade one of the SRXes and bring it back; the cluster isn't 
happy because the version is mismatched, but when I pull the remaining 
working SRX out of the cluster, the other one takes over because it has no 
choice. You can't fail it over manually, but in a downtime situation it will 
still take over.


Similarly, as a hack until ISSU is available, couldn't you admin-down all 
ports on MX-A, upgrade its software, reboot it, and disconnect the VC ports, 
causing a split-brain setup? It will come up thinking it is master since the 
other member is lost, and you then enable its ports. At the same time you 
would admin-down all the ports on MX-B, so that you can upgrade its 
software, zeroize it, and set its VC ports so that it pulls config info 
initially from MX-A and rejoins the VC.
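
A very rough CLI sketch of that per-member dance, just to make the steps
concrete; the interface name and package filename are placeholders, and
this is untested, not a procedure I'm vouching for:

# take MX-A's revenue ports out of service first
set interfaces ge-0/0/0 disable
commit
# upgrade and reboot MX-A (package name is a placeholder)
request system software add /var/tmp/jinstall-13.3R3-domestic-signed.tgz reboot
# once MX-A is back, split-brained and forwarding, re-enable its ports
delete interfaces ge-0/0/0 disable
commit
# then on MX-B: disable its ports, upgrade the same way, zeroize,
# and re-add its VC ports so it rejoins and pulls config from MX-A
request system zeroize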


It's a hack, and it will probably cause a blip of downtime, but at least 
you'll have one working MX at all times. Ideally you'd only have to do that 
dance once, assuming ISSU works as expected (I've heard a lot of horror 
stories).


Thanks,
Morgan


On Wed, Apr 23, 2014 at 2:48 PM, Scott Harvanek scott.harva...@login.com wrote:


Okay, so then here's the million-dollar question: has anyone attempted an 
ISSU on an MX VC?

I've got some 480s in the lab in pre-deployment state, and the question is: 
go VC and suffer a complete outage when an upgrade is due, or leave them 
non-VC? I'm not comfortable running 14.1 out of the gate, but I also like 
the redundancy of the VC.

Scott H.


On 4/23/14, 10:40 AM, JP Velders wrote:

Date: Thu, 17 Apr 2014 18:45:33 -0400
From: Scott Harvanek scott.harva...@login.com
Subject: [j-nsp] MX VC ISSU
Does anyone know if ISSU will ever be supported on a MX
virtual-chassis?

I believe it's on the roadmap and supposed to become available
in 14.1.

Kind regards,
JP Velders


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX VC ISSU

2014-04-24 Thread Scott Harvanek
Yeah, it's just that fear of being stuck on a specific version of Junos, 
unable to upgrade cleanly without a significant interruption.


Scott H.
Login Inc.

On 4/24/14, 4:22 PM, Morgan McLean wrote:
Just make the VC and do the dance later :), you know you want to! I'm 
converting two customers to MX VC over the next couple weeks.


Thanks,
Morgan


On Thu, Apr 24, 2014 at 1:12 PM, Scott Harvanek scott.harva...@login.com wrote:


Morgan,

Yea, we've successfully done this in the lab with the MXs and
breaking the VC.  I guess it's better than nothing.  The VC setup
on the MXs and NSR/Graceful failover is pretty awesome and I don't
want to abandon VC just because of this and at the same time don't
want to wait to deploy this cluster until 14.X is stable and ISSU
is available.

Ugh, decisions.

Scott H.








___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] EX2500 communication failure outside subnet

2014-07-07 Thread Scott Harvanek

Disclaimer: I know the EX2500 is just a rebranded BLADE/IBM switch.

Randomly the switch has refused to speak outside of its own subnet 
EXCEPT for traceroute/telnet/www.


Fails: Ping, SNMP

Works: Traceroute, Telnet, WWW

E.g., I can only ping within the subnet, nothing more, and SNMP is 
unresponsive outside of the subnet.

Does anyone have any idea, or has anyone experienced this before, where 
things are only halfway working? I assume the only corrective action I 
really have is a reboot, but if someone else has seen this, please do share.


Juniper Networks EX2500 10GbE Switch
Software Version 3.1R2, Boot Version 3.1R2, active config block

--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] ipv4/ipv6-flow-table-size

2014-08-25 Thread Scott Harvanek

I'm wondering if anyone can clarify something for me from docs:



 * Any change in the configured size of flow hash table sizes initiates
   an automatic reboot of the FPC.
 * The total number of units used for both IPv4 and IPv6 cannot exceed 15.



- Does the initial config entry of ipv4/ipv6-flow-table-size cause the FPC 
to reboot, or only a change to an already-configured value?

-- I.e., the default for the IPv4 size is 15; if that gets changed [ not 
currently set in config ], does that cause a reboot?


Also, is 15 the maximum in aggregate, or is it per table:

-- Can you have 15 units assigned to each of IPv4 and IPv6 at the same time, 
or is 15 the maximum shared between the two?
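
For reference, the knob I'm reading about lives under chassis; something
along these lines (the FPC slot and unit counts here are just illustrative,
not my config):

set chassis fpc 0 inline-services flow-table-size ipv4-flow-table-size 10
set chassis fpc 0 inline-services flow-table-size ipv6-flow-table-size 5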


Thanks!
-SH

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ipv4/ipv6-flow-table-size

2014-08-25 Thread Scott Harvanek

Scott,

Thanks. My next question, then: how/why is the default for IPv4 15 and for 
IPv6 1? Wouldn't that break the constraint of 15 total?


Scott H.
Login Inc.

On 8/25/14, 3:53 PM, Scott Granados wrote:

Whenever you set the flow table size you initiate a reboot of the FPC. The 
table size is a combined value of v4 and v6, so 15 total, a subset of which 
is IPv4 and the remainder IPv6.

Thanks
Scott

On Aug 25, 2014, at 3:02 PM, Scott Harvanek scott.harva...@login.com wrote:


I'm wondering if anyone can clarify something for me from docs:



  * Any change in the configured size of flow hash table sizes initiates
an automatic reboot of the FPC.
  * The total number of units used for both IPv4 and IPv6 cannot exceed 15.



- Does the initial config entry of ipv4/ipv6-flow-table-size cause the
FPC to reboot or only if the configured value is changed?

-- I.e. the default for IPv4 size is 15, if that gets changed [ not
currently set in config ] does that cause a reboot?

Also, is 15 the maximum aggregate or is it per table:

-- Can you have 15 units assigned to IPv4 and IPv6 at the same time? or,
is 15 the maximum between the two?

Thanks!
-SH

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ipv4/ipv6-flow-table-size

2014-08-25 Thread Scott Harvanek

Thanks, I believe this clears it up.

Assumptions based on this data:
- I can operate on the defaults for my application [ very little IPv6/VPLS ].
- I can change it later if needed, but that will cause the FPC to reboot.

-SH

On 8/25/14, 4:00 PM, Scott Granados wrote:

Here’s a bit more that I received that may help clear things up.


Scott,

A total of 4M flows can be created per LU chip. Beyond that it depends on 
the sizes allocated to ipv4-flow-table-size, ipv6-flow-table-size and 
vpls-flow-table-size. If only the IPv4 template is to be used, then all of 
the memory on the PFE could be reserved for the IPv4 family alone.

Each unit of flow hash table size corresponds to 256k (256 x 1024) of memory.
For example, if 15 units are allocated for IPv4, the total number of flows 
that can be created = 15 * 256 * 1024 = 3,932,160, plus 1k IPv6 flows and 1k 
VPLS flows (the defaults).

In your case, with flow-table-size for ipv4 = 5, the total flows that can be 
created = 5 * 256 * 1024 = 1,310,720. You are exporting @ 7k flows/sec to 
the flow collector; if all of your traffic belongs to IPv4, the number of 
flows getting created = 7 * 1024 * 10 = 71,680 flows/sec [ 10 = number of 
flow records per packet sent to the flow collector ].


On Aug 25, 2014, at 3:56 PM, Scott Harvanek scott.harva...@login.com wrote:


Scott,

Thanks, my next question then with that is - how/why is the default of
ipv4 15 and ipv6 1?  That would break that constraint of 15 total?

Scott H.
Login Inc.

On 8/25/14, 3:53 PM, Scott Granados wrote:

When ever you set the flow table size you initiate a reboot of the FPC.  The 
table size is a combined value of v4 and v6 so 15 total a subset of which is 
IPV4 and the remainder is IPV6.

Thanks
Scott

On Aug 25, 2014, at 3:02 PM, Scott Harvanek scott.harva...@login.com wrote:


I'm wondering if anyone can clarify something for me from docs:



  * Any change in the configured size of flow hash table sizes initiates
an automatic reboot of the FPC.
  * The total number of units used for both IPv4 and IPv6 cannot exceed 15.



- Does the initial config entry of ipv4/ipv6-flow-table-size cause the
FPC to reboot or only if the configured value is changed?

-- I.e. the default for IPv4 size is 15, if that gets changed [ not
currently set in config ] does that cause a reboot?

Also, is 15 the maximum aggregate or is it per table:

-- Can you have 15 units assigned to IPv4 and IPv6 at the same time? or,
is 15 the maximum between the two?

Thanks!
-SH

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] BGP Peer formatting

2014-09-09 Thread Scott Harvanek

This is a silly/OCD question;

I've faced this before and I can't recall how it was prettied up...

If I recall correctly, there is a way to pretty up the formatting of 'show bgp summary':

Peer              AS     InPkt   OutPkt  OutQ  Flaps  Last Up/Dwn   State|#Active/Received/Accepted/Damped...
XXX.XXX.XXX.XXX    X   4463666   120866     0      0  5w3d 9:31:20  Establ
  inet.0: 272410/510233/510233/0

I'd like to remove the line break / fix the table formatting. I've tried 
adjusting screen-width with no joy.


Halp?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] BGP Peer formatting

2014-09-10 Thread Scott Harvanek

Awesome, thank you very much :-)

Both work great!

Scott H.

On 9/9/14, 9:32 PM, Ben Dale wrote:

On 10 Sep 2014, at 7:54 am, Scott Harvanek scott.harva...@login.com wrote:


This is a silly/OCD question;

I've faced this before and I can't recall how it was prettied up...

If I recall there is a way to pretty up the formatting of show bgp summary;

Peer AS  InPkt OutPktOutQ   Flaps Last Up/Dwn 
State|#Active/Received/Accepted/Damped...
XXX.XXX.XXX.XXX   X4463666 120866   0   0 5w3d 9:31:20 
Establ
  inet.0: 272410/510233/510233/0

To remove the line break / fix the table formatting.  I've tried adjusting 
screen-width with no joy.

Halp?

There are a few ways to neaten it, but it's a case of which information you 
can live without:

show bgp summary | except inet
show bgp group summary | match l:

Failing that, I just hacked up an op script to only show a summarised version 
from each peer - output here:

https://github.com/dfex/DFEXjunoscripts/blob/master/show-bgp-neat.md

Code here:

https://github.com/dfex/DFEXjunoscripts/blob/master/show-bgp-neat.slax

The script *should* sum all the prefixes from each RIB into a single summarised 
number per peer, but I haven't had a chance to test it too thoroughly yet.  
Feedback/Pull Requests welcome.

Cheers,

Ben




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Inline jflow - No hash table changes

2014-09-11 Thread Scott Harvanek

Hey guys,

Quick question: if we set up inline jflow on an MX480 and do not adjust the 
hash table sizes, will the FPC still restart?*


Specifically, the config change would look like this (MX480 VC; member 1, 
FPC 0 (VC FPC 12) would get the sampling instance, but not member 0):



[edit chassis]
+   member 1 {
+   fpc 0 {
+   sampling-instance 480flows;
+   }
+   }
[edit]
+  services {
+  flow-monitoring {
+  version-ipfix {
+  template ipv4 {
+  flow-active-timeout 60;
+  flow-inactive-timeout 60;
+  template-refresh-rate {
+  packets 1000;
+  seconds 10;
+  }
+  option-refresh-rate {
+  packets 1000;
+  seconds 10;
+  }
+  ipv4-template;
+  }
+  }
+  }
+  }
[edit interfaces xe-12/1/0 unit 716 family inet]
+   sampling {
+   input;
+   }
[edit]
+  forwarding-options {
+  sampling {
+  instance {
+  480flows {
+  input {
+  rate 1;
+  }
+  family inet {
+  output {
+  flow-server x.x.x.x {
+  port 2055;
+  version-ipfix {
+  template {
+  ipv4;
+  }
+  }
+  }
+  inline-jflow {
+  source-address x.x.x.x;
+  }
+  }
+  }
+  }
+  }
+  }
+  }


* I find it pretty annoying that the FPC will restart on hash-table updates 
if you want to adjust the defaults...

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Inline jflow - No hash table changes

2014-09-11 Thread Scott Harvanek
Thanks for all the input guys, we're going to give this a go early 
tomorrow morning.  We're running 14.1, I'll report back my findings for 
reference.


Scott H.

On 9/11/14, 5:59 PM, Hugo Slabbert wrote:
We did not get a hit on enabling inline sampling with a config very similar 
to yours, though we're running a dual-RE MX480 on a single chassis, not VC. 
We did take a hit on an MX-5, but I believe that was due to touching 
defaults, as you mentioned.


So, I can offer you an anecdote but I don't have an official word on it.





___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Inline jflow - No hash table changes

2014-09-12 Thread Scott Harvanek
We turned this up this morning with no service hits and flows are 
exporting correctly;


- MX480 Virtual-Chassis
- Enabled on member 1 / FPC 0
- Junos 14.1

:)

Scott H.

On 9/11/14, 7:00 PM, Hugo Slabbert wrote:
Forgot to note: we were running 11.4R7.5 on both that MX480 and MX5, 
in case that's relevant to you at all.






___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Virtual Chassis RPD/BGP Rsync high CPU

2014-09-18 Thread Scott Harvanek
Has anyone had an issue with MX units in a VC where BGP rsync was consuming 
a boatload of CPU?


Master chassis shows:
Task             Started    User Time   System Time   Longest Run
BGP rsync           9650        10.             0.8           0.0
( BGP rsync is the only task with any user time during high user CPU for 
rpd )


Now, that's only about 20% CPU on the master, but on the slave it's 90%. 
This seems to have happened when our total paths exceeded 2MM, but it does 
not seem to be a memory issue:


Dynamically allocated memory:  411009024  Maximum: 808517632
 Program data+BSS memory:5537792  Maximum:   5537792
  Page data overhead:1196032  Maximum:   1196032
 Page directory size: 212992  Maximum:212992
  --
  Total bytes in use:  417955840 (12% of available memory)
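
(For reference, the two outputs above came from commands along the lines of
the following; I'm only including them so others can compare on their own
boxes:)

show task accounting
show task memory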

--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Virtual Chassis RPD/BGP Rsync high CPU

2014-09-24 Thread Scott Harvanek
Okay, so we traced this down to BGP replication for NSR. It looks like a bad 
attribute kills the replication process. Other than blocking the received 
prefix, is there a way to fix this:


Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]: Received malformed update from 
x
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Family inet-unicast, prefix 
5.56.168.0/21
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Malformed Attribute 
AGGREGATOR4(18) flag 0xc0 length 8.
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]: Total incoming malformed 
attributes from xx since last logging
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Received  1 malformed 
attribute AGGREGATOR4(18)


Mind you, the primary session with the peer stays up; this only kills the 
replication process...


Scott H.

On 9/18/14, 11:38 AM, Scott Harvanek wrote:
Has anyone had a issue with MX units in a VC where BGP rsync was 
consuming a boatload of CPU?


Master chassis shows:
Task   StartedUser Time  System Time Longest Run
BGP rsync 9650  10. 0.8  0.0
( BGP rsync is the only task with any user time during high user CPU 
for rpd )


now, that's only like 20% CPU on the master but on the slave it's 
90%  This seems to have happened when our total paths exceeded 2MM 
but does not seem to be a memory issue:


Dynamically allocated memory:  411009024  Maximum: 808517632
 Program data+BSS memory:5537792  Maximum: 5537792
  Page data overhead:1196032  Maximum: 1196032
 Page directory size: 212992  Maximum: 212992
  --
  Total bytes in use:  417955840 (12% of available memory)



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Virtual Chassis RPD/BGP Rsync high CPU

2014-09-24 Thread Scott Harvanek
Well shoot, that's a great idea; it looks like this command is hidden, so I 
didn't even see it. I assume AGGREGATOR4 is type code 18? I can't find 
anything official confirming that, though.


Scott H.

On 9/24/14, 3:39 PM, Alexander Arseniev wrote:
Have You tried drop-path-attributes?
http://kb.juniper.net/InfoCenter/index?page=content&id=JSA10491

You can drop any attribute, not only 128 as in the KB.
Thanks
Alex

On 24/09/2014 18:38, Scott Harvanek wrote:
Okay so we traced this down to BGP Replication for NSR.  Looks like a 
bad attribute kills the replication process.  Other than blocking the 
received prefix is there a way to fix this:


Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]: Received malformed update 
from x
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Family inet-unicast, prefix 
5.56.168.0/21
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Malformed Attribute 
AGGREGATOR4(18) flag 0xc0 length 8.
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]: Total incoming malformed 
attributes from xx since last logging
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Received  1 malformed 
attribute AGGREGATOR4(18)


Mind you, the primary session with the peer stays up, this only kills 
the replication process...


Scott H.



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Virtual Chassis RPD/BGP Rsync high CPU

2014-09-24 Thread Scott Harvanek
Disregard, 18 is correct; it looks like IETF RFC 4893 calls this 
AS4_AGGREGATOR rather than AGGREGATOR4.
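
For the archives, the knob Alex pointed at would be along these lines to
drop attribute 18 on ingress; this follows the statement shown in the KB
above (a hidden statement at the time of writing), with the attribute code
from this thread, and isn't something I've verified in production:

set protocols bgp drop-path-attributes 18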


Scott H.
Login Inc.

On 9/24/14, 5:11 PM, Scott Harvanek wrote:
Well shoot, that's a great idea, looks like this command is hidden 
so I didn't even see it.  I assume AGGREGATOR4 is type code 18? I 
can't find anything official confirming that though?


Scott H.

On 9/24/14, 3:39 PM, Alexander Arseniev wrote:
Have You tried drop-path-attributes 
http://kb.juniper.net/InfoCenter/index?page=contentid=JSA10491 ?

You can drop any attribute, not only 128 as in the KB.
Thanks
Alex

On 24/09/2014 18:38, Scott Harvanek wrote:
Okay so we traced this down to BGP Replication for NSR.  Looks like 
a bad attribute kills the replication process.  Other than blocking 
the received prefix is there a way to fix this:


Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]: Received malformed update 
from x
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Family inet-unicast, 
prefix 5.56.168.0/21
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Malformed Attribute 
AGGREGATOR4(18) flag 0xc0 length 8.
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]: Total incoming malformed 
attributes from xx since last logging
Sep 24 17:31:06  TUS-2-VC-1 rpd[48424]:   Received  1 malformed 
attribute AGGREGATOR4(18)


Mind you, the primary session with the peer stays up, this only 
kills the replication process...


Scott H.

On 9/18/14, 11:38 AM, Scott Harvanek wrote:
Has anyone had a issue with MX units in a VC where BGP rsync was 
consuming a boatload of CPU?


Master chassis shows:
Task   StartedUser Time  System Time 
Longest Run

BGP rsync 9650  10. 0.8 0.0
( BGP rsync is the only task with any user time during high user 
CPU for rpd )


now, that's only like 20% CPU on the master but on the slave it's 
90%  This seems to have happened when our total paths exceeded 
2MM but does not seem to be a memory issue:


Dynamically allocated memory:  411009024  Maximum: 808517632
 Program data+BSS memory:5537792  Maximum: 5537792
  Page data overhead:1196032  Maximum: 1196032
 Page directory size: 212992  Maximum: 212992
  --
  Total bytes in use:  417955840 (12% of available memory)



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] Dear Juniper...

2014-09-25 Thread Scott Harvanek
I agree with this more than the former.  I haven't had any issues 
finding specs, etc., but it's certainly a big, blocky, buzzword-heavy design.


Scott H.

On 9/25/14, 3:44 PM, Daniel Rohan wrote:

I have to agree, but from a different angle. The How Do We section made
me laugh out loud, it's so filled with buzzwords: 'multi-dimensional core',
'super-core', 'service-centric'. The better question is: how do I make sense of
what is being asked here without reading each and every article?

I actually didn't have any trouble getting to the spec sheets of the
products I care most about though.






On Thu, Sep 25, 2014 at 12:25 PM, Michael Loftis mlof...@wgops.com wrote:


Your web site now sucks rocks.  Like who decided to ship this?  One
single page for the entire EX switch lineup?  Can't find CRAP anymore.

Seriously?  Did ANYONE think about actually USING the site, or did you
just say make it preettyyy?

/rant

--

Genius might be described as a supreme capacity for getting its possessors
into trouble of all kinds.
-- Samuel Butler
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] EX4550 L2Circuit/VPN to MX80/lt Interface

2014-11-10 Thread Scott Harvanek
I think the question is, why not carry the customer traffic on a VLAN 
back to the MX80?
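
As a rough sketch of that idea (interface names, VLAN ID, and addressing are all
made up, and the EX syntax shown is the non-ELS style):

# EX4550 side: trunk the customer VLAN over the existing link to the MX
set vlans v777 vlan-id 777
set interfaces xe-0/1/0 unit 0 family ethernet-switching port-mode trunk
set interfaces xe-0/1/0 unit 0 family ethernet-switching vlan members v777

# MX80 side: terminate it on a tagged unit and route/MPLS from there
set interfaces xe-0/0/0 vlan-tagging
set interfaces xe-0/0/0 unit 777 vlan-id 777
set interfaces xe-0/0/0 unit 777 family inet address 198.51.100.1/30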


Scott H.
Login Inc.

On 11/10/14 12:55 PM, Raphael Mazelier wrote:



On 10/11/14 18:40, Hugo Slabbert wrote:

What's the connection between the EX and the MX? Could you not just
switch the customers through the EX to the MX and land them on tagged
interfaces on the MX?

I don't know all of your requirements, but perhaps the simple option
works here?



Ah, good question. I have only one 10G Ethernet back-to-back connection 
between the EX and the MX, and I want to use my EX as a router, 
managing my clients' gateways on it. So I need BGP/MPLS on it, 
and unfortunately MPLS does not work on VLAN interfaces on the EX :(
It was a pseudo BGP/ToR design. It works well, but I don't want to 
use a dedicated MX80 port and switch for transit customers (which are 
not the majority). If I had more money I would have bought some MX480s :p





___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] L2Circuit with BGP/LDP

2015-02-18 Thread Scott Harvanek

Hey all-

I've got a question about an l2circuit. Normally we use LDP/OSPF: the 
neighbor's loopback is reachable because the OSPF route for that /32 is 
available in the internal LDP route table.  BGP routes are not imported 
into this table, so my question is: is there a way to have a /32 received 
over BGP known to LDP, with all other routes excluded? We have a situation 
where OSPF is less desirable, and I'd rather not set up a routing-instance 
to do full BGP signalling for this if we don't have to.


--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] L2Circuit with BGP/LDP

2015-02-19 Thread Scott Harvanek
Thanks for the suggestions, all. RFC 3107 looks like it would do what I want, 
so I'll give that a try.
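
For the archive, a minimal sketch of the labeled-unicast setup being tried; the
group name, addresses, and policy name are placeholders. The point is that the
labeled routes land in inet.3, which is what lets the pseudowire resolve the
remote loopback without OSPF carrying it:

set protocols bgp group IBGP-LU type internal
set protocols bgp group IBGP-LU local-address 10.255.0.1
set protocols bgp group IBGP-LU family inet labeled-unicast rib inet.3
set protocols bgp group IBGP-LU export ADVERTISE-LO0
set protocols bgp group IBGP-LU neighbor 10.255.0.2
set policy-options policy-statement ADVERTISE-LO0 term lo0 from route-filter 10.255.0.1/32 exact
set policy-options policy-statement ADVERTISE-LO0 term lo0 then accept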


Scott H.

On 2/19/15 9:46 AM, Adam Vitkovsky wrote:

That's what I understood: the PW labels are generated by BGP.
Hmmm, though now that I read Scott's post again, he actually wanted to avoid 
using routing instances, which unfortunately is exactly what I recommended 
with the VPLS setup. I think that's the only way to get BGP to 
allocate/advertise PW labels, and I'd be glad to learn otherwise.

RFC 3107 could certainly be used to address the no-OSPF requirement, though.
And don't you worry, you're gonna love RFC 3107; it's a perfect cure for all routing 
problems! As David Wheeler once said:
"All problems in computer science can be solved by another level of indirection, 
except for the problem of too many layers of indirection."



adam

-Original Message-
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf
Of Mark Tinka
Sent: 19 February 2015 11:54
To: juniper-nsp@puck.nether.net
Subject: Re: [j-nsp] L2Circuit with BGP/LDP



On 19/Feb/15 11:51, Adam Vitkovsky wrote:

Hi Scott,

There are two ways you can set up the E-lines using BGP:
BGP auto-discovery with LDP signalling, or
BGP auto-discovery with BGP signalling.

I think the second one is what you want to accomplish.

I might have misunderstood the OP's question, but wasn't he asking about
how he can provision Martini EoMPLS circuits where FECs are generated
from iBGP routes?

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] SRX / Flow + Session Data

2015-03-24 Thread Scott Harvanek
Is there any way to get session data plus flow data for clients off of an 
SRX box? Basically, we have a need to track URLs that client machines may 
access; there's too much data to do a port mirror without losing 
historical data, and flows don't contain the session data, of course.


Has anyone run into anything like this and found a solution?
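
Not a full answer (session logs give you the 5-tuple, zones, and byte counts,
not URLs), but in case it helps: per-policy session logging streamed to an
external collector is cheap to turn on. A sketch, with zone, policy, and
collector values made up:

set security policies from-zone trust to-zone untrust policy client-out then log session-init
set security policies from-zone trust to-zone untrust policy client-out then log session-close
set security log mode stream
set security log format sd-syslog
set security log source-address 192.0.2.1
set security log stream traffic-log host 192.0.2.10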

Thanks
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] VCCP

2017-11-16 Thread Scott Harvanek
We’ve been running VC on the MX platform for years without issue.  

Scott H
Login, LLC



> On Nov 16, 2017, at 8:51 AM, Chuck Anderson  wrote:
> 
> Virtual Chassis shares the management, control, and data planes across the 
> two routers.  I don't like that from a high-availability standpoint.  The two 
> routers are tightly coupled with software versions, bootup, etc.
> 
> MC-LAG shares some of the control and data planes via ICCP but maintains 
> separate routing & management planes so it is better in that respect.
> 
> But IMO the best architecture is a L3 routed one.  If you need L2 services to 
> extend across the L3 then use MPLS services such as EVPN.
> 
> On Thu, Nov 16, 2017 at 08:57:42AM -0500, harbor235 wrote:
>> Has anyone deployed VCCP on the MX platform as a solution for a pair of
>> edge routers that traditionally would support a BGP multihomed architecture?
>> 
>> I am interested in whether VCCP is a viable solution to replace the traditional
>> dual-homed architecture, and if there are any pros and cons. Are there
>> limitations with VCCP? Operational issues? EGP and/or IGP limitations,
>> etc.?
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Scott Harvanek
Adam,

I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and XQ 
chips?  Just the MPC5E has two XM chips.  

Scott H



> On Nov 1, 2017, at 10:28 AM, <adamv0...@netconsultings.com> wrote:
> 
>> Scott Harvanek
>> Sent: Tuesday, October 31, 2017 6:57 PM
>> 
>> Hey folks,
>> 
>> We have some MX480s we need to add queuing capable 10G/40G ports to
>> and it looks like MPC5EQ-40G10G is going to be our most cost effective
>> solution.  Has anyone run into any limitations with these MPCs that aren’t
>> clearly documented?
>> 
>> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently we’re
>> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
>> do the same on this along with the adding of the 40G ports? Any Layer3
>> limitations or the normal 2MM/6MM FIB/RIB?
>> 
> Hey Scott,
> I'd rather go with a standard Trio architecture, i.e. one lookup block, one 
> buffering block (and one queuing block), so MPC3 or MPC7. 
> To me it seems like the 4 and 5 are just experiments with the Trio architecture 
> that did not stand the test of time. 
> 
> adam
> 
> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Scott Harvanek
Pavel,

Thank you for the detailed comments; this is basically what I understood to be 
the case.  I’m running MPC3 NG EQs right now, which, as you note, use the same 
chipset as the MPC5. We haven’t had any issues, so it sounds like the MPC5E 
should achieve what we need and operate as we expect.

Scott H



> On Nov 1, 2017, at 11:07 AM, Pavel Lunin <plu...@gmail.com> wrote:
> 
> 
> 
> There were two versions of MPC3:
> 
> 1. MPC3 non-NG, which has a single XM buffer manager and four LU chips (the 
> good old ~65 Mpps LUs, as in the "classic" MPC1/2/16XGE old-Trio PFEs).
> 2. MPC3-NG, which is based on exactly the same XM+XL chipset as the MPC5.
> 
> MPC4 is much like MPC3 non-NG, though it has two LUs instead of four, with a 
> newer, more "performant" microcode.
> 
> The XL chip (extended LU), which is present in MPC5/6 and 2-NG/3-NG, also has 
> multiple ALU cores (four, IIRC), but in contrast to MPC3 non-NG and MPC4 these 
> cores have shared memory, so they don't suffer from some limitations (like 
> not very precise policers) which you can face with multi-LU PFE architectures.
> 
> MPC7 has a completely new single-core 400G chip (also present in the recently 
> announced MX204 and MX10003).
> 
> This said, I find MPC4 not bad at all in most scenarios. Never had any issues 
> specific to its architecture.
> 
> P.S. In the end this choice is all about money/performance.
> 
> 
> Kind regards,
> Pavel
> 
> 
> 2017-11-01 16:46 GMT+01:00 Scott Harvanek <scott.harva...@login.com>:
> Adam,
> 
> I thought that the MPC3E and MPC5E had the same generation Trio w/ XL and XQ 
> chips?  Just the MPC5E has two XM chips.
> 
> Scott H
> 
> 
> 
> > On Nov 1, 2017, at 10:28 AM, <adamv0...@netconsultings.com> wrote:
> >
> >> Scott Harvanek
> >> Sent: Tuesday, October 31, 2017 6:57 PM
> >>
> >> Hey folks,
> >>
> >> We have some MX480s we need to add queuing capable 10G/40G ports to
> >> and it looks like MPC5EQ-40G10G is going to be our most cost effective
> >> solution.  Has anyone run into any limitations with these MPCs that aren’t
> >> clearly documented?
> >>
> >> We intend to use them for L3/VLAN traffic w/ CoS/Shaping.  Currently we’re
> >> doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs , any reason we couldn’t
> >> do the same on this along with the adding of the 40G ports? Any Layer3
> >> limitations or the normal 2MM/6MM FIB/RIB?
> >>
> > Hey Scott,
> > I'd rather go with a standard Trio architecture, i.e. one lookup block, one 
> > buffering block (and one queuing block), so MPC3 or MPC7.
> > To me it seems like the 4 and 5 are just experiments with the Trio architecture 
> > that did not stand the test of time.
> >
> > adam
> >
> >
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MPC5EQ Feedback?

2017-11-01 Thread Scott Harvanek
David,

Thanks for pointing that out, I did read that and understand the available 
options/limitations between the 10G/40G interfaces. :)

Scott H



> On Nov 1, 2017, at 11:34 AM, Hunter, David B.  wrote:
> 
> Scott,
> 
> Just FYI, you may already be aware of this, but there was one limitation with 
> the MPC5E that we ran into.  We use the MPC5E-40G10G, which is working fine 
> in our data center for 40 Gbps service.  I think the EQ version is the same with 
> regard to the limitation we discovered.  At the time we purchased the cards, 
> it wasn’t well documented how the 10 and 40 Gbps ports could be used.  
> Juniper has since updated the documentation to specify the allowed port usage.
> 
> The card is not oversubscribed; the limit seems to be a hard 240 Gbps for the 
> entire card, imposed by the way in which the PICs can be used.
> 
> https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mpc5eq-6x40ge-24x10ge.html
>  
> 
> 
> “...Supports one of the following port combinations:
> • Six 40-Gigabit Ethernet ports
> • Twenty-four 10-Gigabit Ethernet ports
> • Three 40-Gigabit Ethernet ports and twelve 10-Gigabit Ethernet ports”
> 
> Also see:
> 
> https://www.juniper.net/documentation/en_US/junos/topics/reference/general/active-pics-mpc5e-guidelines.html
>  
> 
> 
> David B. Hunter
> IU Network Design Engineer
> Indiana University
> 317-278-4873
> davbh...@iu.edu
> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] MPC5EQ Feedback?

2017-10-31 Thread Scott Harvanek
Hey folks,

We have some MX480s we need to add queuing-capable 10G/40G ports to, and it 
looks like the MPC5EQ-40G10G is going to be our most cost-effective solution.  Has 
anyone run into any limitations with these MPCs that aren’t clearly documented?

We intend to use them for L3/VLAN traffic w/ CoS/shaping.  Currently we’re 
doing that on MPC2E NG Qs w/ 10XGE-SFPP MICs; any reason we couldn’t do the 
same on this, along with adding the 40G ports? Any Layer 3 limitations beyond 
the normal 2MM/6MM FIB/RIB?
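
Roughly the shape of the per-unit shaping in question, as an illustrative sketch
(interface, unit, VLAN, and rate are placeholders); the Q/EQ variant is what
provides the per-unit traffic-control-profiles:

set interfaces xe-5/0/0 hierarchical-scheduler
set interfaces xe-5/0/0 vlan-tagging
set interfaces xe-5/0/0 unit 100 vlan-id 100
set interfaces xe-5/0/0 unit 100 family inet address 198.51.100.1/30
set class-of-service traffic-control-profiles TCP-CUST-200M shaping-rate 200m
set class-of-service interfaces xe-5/0/0 unit 100 output-traffic-control-profile TCP-CUST-200M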

Scott H


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] QFX5110 / VXLAN

2018-07-03 Thread Scott Harvanek
Is anyone on here running 5110s for VXLAN/VTEP/EVPN, and have you run into any 
issues?  I’ve gone over the caveats list Juniper has for these regarding what they 
won’t do with VXLAN, and it seems like they meet our needs… just 
curious if anyone has run into any less-documented issues with them.

I’m looking at the list here; 
https://www.juniper.net/documentation/en_US/junos/topics/concept/vxlan-constraints-qfx-series.html
 


Is there a better device for VXLAN on the Juniper side? We’re looking for 
something comparable to the Nexus 9372 on the Cisco side.
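
For reference, the sort of minimal EVPN-VXLAN L2 gateway config in question on a
QFX5110; loopback address, RD/RT, neighbor, VLAN, and VNI values are all
placeholders:

set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 192.0.2.11
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay neighbor 192.0.2.12
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 192.0.2.11:1
set switch-options vrf-target target:65000:1
set vlans v100 vlan-id 100
set vlans v100 vxlan vni 10100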

Cheers!

Scott H



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] L2ALD_MAC_MOVE_NOTIFICATION

2018-02-22 Thread Scott Harvanek
https://puck.nether.net/pipermail/juniper-nsp/2014-November/029561.html

> On Feb 22, 2018, at 10:37 PM, Nikolas Geyer  wrote:
> 
> You’ve probably got a layer 2 loop in your topology somewhere. OSPF probably 
> went down due to the RE CPU utilization going through the roof.
> 
> Sent from my iPhone
> 
>> On 22 Feb 2018, at 11:23 pm, Brijesh Patel  wrote:
>> 
>> Hello Friends,
>> 
>> We have an EX4300 switch which is connected to an EX4500.
>> 
>> Receiving an error message: *l2ald[1299]: L2ALD_MAC_MOVE_NOTIFICATION: MAC
>> Moves detected in the system*
>> 
>> *and the system went down and OSPF flapped. Any idea?*
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] L2ALD_MAC_MOVE_NOTIFICATION

2018-02-23 Thread Scott Harvanek
You just need to locate and clear whatever is causing the loop; that’s the 
solution, right?
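
While hunting it down, it can also help to make the switches defend themselves a
bit; a sketch in ELS syntax (EX4300-style; VLAN name and limit are placeholders,
and the EX4500 uses the older non-ELS equivalents):

set protocols rstp interface all
set vlans v100 switch-options mac-move-limit 5 packet-action drop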

-SH

> On Feb 23, 2018, at 6:15 AM, Brijesh Patel <brju.pa...@gmail.com> wrote:
> 
> Hi Scott,
> 
> The issue is similar, but there is no solution? Or at least not on the Juniper 
> website mentioned in that post.
> 
> So lost
> 
> 
> Thanks
> 
> Brijesh 
> On Friday, February 23, 2018, Scott Harvanek <scott.harva...@login.com> wrote:
> https://puck.nether.net/pipermail/juniper-nsp/2014-November/029561.html
> 
> > On Feb 22, 2018, at 10:37 PM, Nikolas Geyer <n...@neko.id.au> wrote:
> >
> > You’ve probably got a layer 2 loop in your topology somewhere. OSPF 
> > probably went down due to the RE CPU utilization going through the roof.
> >
> > Sent from my iPhone
> >
> >> On 22 Feb 2018, at 11:23 pm, Brijesh Patel <brju.pa...@gmail.com> wrote:
> >>
> >> Hello Friends,
> >>
> >> We have an EX4300 switch which is connected to an EX4500.
> >>
> >> Receiving an error message: *l2ald[1299]: L2ALD_MAC_MOVE_NOTIFICATION: MAC
> >> Moves detected in the system*
> >>
> >> *and the system went down and OSPF flapped. Any idea?*
> >> ___
> >> juniper-nsp mailing list juniper-nsp@puck.nether.net
> >> https://puck.nether.net/mailman/listinfo/juniper-nsp
> > ___
> > juniper-nsp mailing list juniper-nsp@puck.nether.net
> > https://puck.nether.net/mailman/listinfo/juniper-nsp
> 

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] QFX5110 / VXLAN

2018-07-04 Thread Scott Harvanek
Cost is a factor. I don’t think I can get anyone to bite on something bigger 
either, as the application is solely VXLAN in a compact form factor.

-Scott H

> On Jul 4, 2018, at 2:20 PM, Pavel Lunin  wrote:
> 
> Btw, it's a very good question whether anyone here has more or less 
> real-world experience with L3 GW and EVPN type 5 routes on the QFX5110 or 
> any other Trident 2+ based box.
> 
> Would much appreciate your input.
> 
> Regards,
> Pavel
> 
> July 3, 2018, 18:48 Roger Wiklund :
>> Hi Scott
>> 
>> Should be fine as L2 GW. L3 GW and Route Type 5 support is quite recent.
>> 
>> Beefier alternatives are the QFX10002, or the MX204 if you want to go the MX
>> route with fewer ports. Both have custom ASICs with higher scale, and a better
>> chance of avoiding caveats/limitations tied to the merchant chipset.
>> 
>> Regards
>> Roger
>> 
>> On Tue, Jul 3, 2018 at 1:48 PM, Scott Harvanek 
>> wrote:
>> 
>> > Is anyone on here running 5110s for VXLAN/VTEP/EVPN and run into any
>> > issues?  I’ve gone over the caveats list Juniper has for these in regards
>> > to what they won’t do in regards to VXLAN and it seems like they meet our
>> > needs… just curious if anyone has run into any lesser documented issues
>> > with them.
>> >
>> > I’m looking at the list here:
>> > https://www.juniper.net/documentation/en_US/junos/topics/concept/vxlan-constraints-qfx-series.html
>> >
>> > Is there a better device for VXLAN on the juniper side? We’re looking for
>> > something comparable to the Nexus 9372 on the Cisco side.
>> >
>> > Cheers!
>> >
>> > Scott H
>> >
>> >
>> >
>> > ___
>> > juniper-nsp mailing list juniper-nsp@puck.nether.net
>> > https://puck.nether.net/mailman/listinfo/juniper-nsp
>> >
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX480 ospf3 ipsec jammed?

2019-06-16 Thread Scott Harvanek
I’ll get those outputs when I’m at a terminal, but the configuration did not change 
and this was working pre-reboot :/

The only other change was a failed MPC that was replaced.

Downstream devices are sending HELLOs, but this 480 is not indicating it’s 
receiving them in the ospf3 statistics output, which is weird; connectivity is otherwise good.

-Scott H

> On Jun 16, 2019, at 2:08 AM, Anderson, Charles R  wrote:
> 
> Silly question, does the sa name match between "ospf3...ipsec-sa FOO"
> and "security ipsec security-association FOO..."?
> 
> What does "show ipsec security-associations" show?
> 
> What Junos version?  There was a memory leak or file descriptor leak
> in older Junos that killed the ipsec daemon after a long uptime, but I
> don't recall anything that would cause it to fail right after reboot.
> But you can try "restart ipsec-key-management" anyway.
> 
>> On Sat, Jun 15, 2019 at 09:02:30PM -0500, Scott Harvanek wrote:
>> Hey guys,
>> 
>> Getting something interesting after a reboot;
>> 
>> Jun 16 01:58:45  MX480.1 kernel: ipsec_find_sa_in_so_gen(1999): Couldn't 
>> dereference the sa name = XX
>> 
>> When trying to bring up the IPSec tunnel for ospf3 peering ( which never 
>> establishes ), any ideas what this means? Do I need to restart the ipsec 
>> key daemon?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX480 ospf3 ipsec jammed?

2019-06-17 Thread Scott Harvanek

show ipsec security-associations
Security association: tusldc2-distribution
    Direction SPI AUX-SPI Mode   Type Protocol
    inbound   256 0   transport  manual AH
    outbound  256 0   transport  manual   AH

Junos: 16.1R4-S2.2
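
For anyone following along, a manual transport-mode AH SA like the one above is
defined and referenced roughly like this (SA name, SPI, key, and interface are
placeholders; the key shown is just 20 ASCII characters to satisfy hmac-sha1-96).
The name given under ospf3 has to match the security-association name exactly,
which was Charles' first question:

set security ipsec security-association my-ospf3-sa mode transport
set security ipsec security-association my-ospf3-sa manual direction bidirectional protocol ah
set security ipsec security-association my-ospf3-sa manual direction bidirectional spi 256
set security ipsec security-association my-ospf3-sa manual direction bidirectional authentication algorithm hmac-sha1-96
set security ipsec security-association my-ospf3-sa manual direction bidirectional authentication key ascii-text "abcdefghij0123456789"
set protocols ospf3 area 0.0.0.0 interface xe-0/0/0.0 ipsec-sa my-ospf3-sa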

Scott H.
Login, LLC

On 6/16/19 11:20 AM, Scott Harvanek wrote:

I’ll get those outputs when I’m at a terminal, but the configuration did not change 
and this was working pre-reboot :/

The only other change was a failed MPC that was replaced.

Downstream devices are sending HELLOs, but this 480 is not indicating it’s 
receiving them in the ospf3 statistics output, which is weird; connectivity is otherwise good.

-Scott H


On Jun 16, 2019, at 2:08 AM, Anderson, Charles R  wrote:

Silly question, does the sa name match between "ospf3...ipsec-sa FOO"
and "security ipsec security-association FOO..."?

What does "show ipsec security-associations" show?

What Junos version?  There was a memory leak or file descriptor leak
in older Junos that killed the ipsec daemon after a long uptime, but I
don't recall anything that would cause it to fail right after reboot.
But you can try "restart ipsec-key-management" anyway.


On Sat, Jun 15, 2019 at 09:02:30PM -0500, Scott Harvanek wrote:
Hey guys,

Getting something interesting after a reboot;

Jun 16 01:58:45  MX480.1 kernel: ipsec_find_sa_in_so_gen(1999): Couldn't
dereference the sa name = XX

When trying to bring up the IPSec tunnel for ospf3 peering ( which never
establishes ), any ideas what this means? Do I need to restart the ipsec
key daemon?

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MX480 ospf3 ipsec jammed?

2019-06-15 Thread Scott Harvanek

Hey guys,

Getting something interesting after a reboot;

Jun 16 01:58:45  MX480.1 kernel: ipsec_find_sa_in_so_gen(1999): Couldn't 
dereference the sa name = XX


When trying to bring up the IPSec tunnel for ospf3 peering ( which never 
establishes ), any ideas what this means? Do I need to restart the ipsec 
key daemon?


--
Scott H.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp