[j-nsp] Splitting Dot1q VLAN across Logical Systems

2013-01-24 Thread Skeeve Stevens
Hey all,

I want to build this scenario.

2 x MX80, with a trunk between them.

On the trunk (as an example) there would be two VLANs.

I would like to take VLAN 100 on Router-A Logical System A to Router-B
Logical System A, while at the same time taking VLAN 200 on Router-A Logical
System B to Router-B Logical System B.

Does this make sense?

I'm hearing that I have to allocate a whole physical interface to a Logical
System, which means I can't use a VLAN from it for another Logical System.

Does this make sense with what I am looking to do?

Thanks ;-)
Skeeve Stevens, CEO - eintellego Pty Ltd
ske...@eintellego.net ; www.eintellego.net

Phone: 1300 753 383; Cell +61 (0)414 753 383 ; skype://skeeve

facebook.com/eintellego ; twitter.com/networkceoau ; linkedin.com/in/skeeve
blog: www.network-ceo.net

The Experts Who The Experts Call
Juniper - Cisco – IBM - Brocade - Cloud
-
Check out our Juniper promotion website!  eintellego.mx


Re: [j-nsp] Splitting Dot1q VLAN across Logical Systems

2013-01-24 Thread Benny Amorsen
Skeeve Stevens skeeve+juniper...@eintellego.net writes:

 I'm hearing I have to allocate a whole physical interface to a Logical
 System which means I can't use a VLAN from it for another Logical System.

 Does this make sense with what I am looking to do?

You should be able to assign logical interfaces to each logical system.
E.g. place xe-0 unit 100 in logical system A and xe-0 unit 200 in
logical system B on both routers.
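
A minimal sketch of what that could look like on each MX80, in set form (the interface name, unit numbers, and addresses here are illustrative assumptions, not taken from the thread):

  # Main instance: the shared trunk is a plain dot1q-tagged port
  set interfaces xe-0/0/0 vlan-tagging

  # Logical system A owns unit 100 (VLAN 100) of that same port
  set logical-systems LS-A interfaces xe-0/0/0 unit 100 vlan-id 100
  set logical-systems LS-A interfaces xe-0/0/0 unit 100 family inet address 192.0.2.1/30

  # Logical system B owns unit 200 (VLAN 200)
  set logical-systems LS-B interfaces xe-0/0/0 unit 200 vlan-id 200
  set logical-systems LS-B interfaces xe-0/0/0 unit 200 family inet address 192.0.2.5/30

Mirror the same units (with the other end of each subnet) on the second router, and each logical system ends up with its own point-to-point link over the one physical trunk.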


/Benny



Re: [j-nsp] Splitting Dot1q VLAN across Logical Systems

2013-01-24 Thread Skeeve Stevens
Thank you, everyone, for the responses, both public and private; they've
cleared up a few things.




Re: [j-nsp] FPC/PFE route scaling

2013-01-24 Thread Thomas Nikolajsen
I'm having a hard time understanding route scaling on the bigger M and
MX boxes.

You are not alone.
Official documentation is very sparse in this area.

In a CFEB-based box the maximum FIB is around 500k routes, due to the limited
SRAM on the CFEB holding the FIB.
On a CFEB-E this is increased.

Now looking at the M320 or MX240 the FIB is present on all FPC boards
in each PFE, correct?

Yes

What is the architectural difference when using a
distributed setup, and how does it affect maximum FIB size? Eg. On an
FPC3-E2 in an M320 what can be expected to be the limit?

On the M320 the maximum FIB size is 1M IPv4 or 768k IPv6 routes.
The expected limit depends on many factors; around 700k IPv4 routes seems
feasible here.
Those factors include the use of firewall filters, and the number of
aggregated interfaces and ECMP paths.

Use of features like LFA will also influence FIB use somewhat.
http://www.juniper.net/us/en/local/pdf/whitepapers/2000345-en.pdf has
some info on LFA; it also describes the different types of FIB entries,
of which the M320 FPCs can accommodate different amounts (they have
different jtree memory usage).

On E3 FPCs the expected limit can be somewhat higher:
http://www.juniper.net/techpubs/en_US/junos10.4/topics/task/configuration/junos-software-jtree-memory-repartitioning.html

On the MX960 (the MX240 should be the same), the maximum FIB size on a
Trio-based PFE is 2M IPv4 or IPv6 routes.
The expected limit should be much closer to the maximum there.
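
As a rough way of checking how much of that capacity a given box is actually using, the standard CLI counters help (exact fields vary by platform and release):

  show route summary                    # RIB: prefixes learned, per routing table
  show route forwarding-table summary   # FIB: destinations actually installed, per table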

 -thomas



Re: [j-nsp] Splitting Dot1q VLAN across Logical Systems

2013-01-24 Thread joel jaeggli

On 1/24/13 3:24 AM, Skeeve Stevens wrote:

Hey all,

I want to build this scenario.

2 x MX80, with a trunk between them.

On the trunk (as an example) there would be two VLANs.

I would like to take VLAN 100 on Router-A Logical System A to Router-B
Logical System A, while at the same time taking VLAN 200 on Router-A Logical
System B to Router-B Logical System B.

Does this make sense?

I'm hearing that I have to allocate a whole physical interface to a Logical
System, which means I can't use a VLAN from it for another Logical System.

Does this make sense with what I am looking to do?

You:

create a bridge group
put the two VLANs inside it
create an IRB for each VLAN
add each IRB to a different logical system or routing instance

Something like that.
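
A rough sketch of that recipe in set form, using one bridge domain per VLAN and routing instances for the L3 separation (interface names, unit numbers, and addresses are assumptions; attaching the IRBs to logical systems instead should be analogous, but verify that on your release):

  # Trunk port carries both VLANs as bridged units
  set interfaces xe-0/0/0 flexible-vlan-tagging
  set interfaces xe-0/0/0 encapsulation flexible-ethernet-services
  set interfaces xe-0/0/0 unit 100 encapsulation vlan-bridge vlan-id 100
  set interfaces xe-0/0/0 unit 200 encapsulation vlan-bridge vlan-id 200

  # One bridge domain per VLAN, each with its own IRB
  set bridge-domains BD-100 vlan-id 100
  set bridge-domains BD-100 interface xe-0/0/0.100
  set bridge-domains BD-100 routing-interface irb.100
  set bridge-domains BD-200 vlan-id 200
  set bridge-domains BD-200 interface xe-0/0/0.200
  set bridge-domains BD-200 routing-interface irb.200
  set interfaces irb unit 100 family inet address 192.0.2.1/30
  set interfaces irb unit 200 family inet address 192.0.2.5/30

  # Each IRB then lands in its own routing instance (or logical system)
  set routing-instances RI-A instance-type virtual-router
  set routing-instances RI-A interface irb.100
  set routing-instances RI-B instance-type virtual-router
  set routing-instances RI-B interface irb.200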



Re: [j-nsp] Splitting Dot1q VLAN across Logical Systems

2013-01-24 Thread Aaron Dewell
Not true. Logical interfaces are allocated to logical systems, not physical
interfaces. No problem with what you're doing.
On Jan 24, 2013 4:28 AM, Skeeve Stevens skeeve+juniper...@eintellego.net
wrote:

 I'm hearing that I have to allocate a whole physical interface to a Logical
 System, which means I can't use a VLAN from it for another Logical System.

 Does this make sense with what I am looking to do?



[j-nsp] MX Memory Allocation Problems

2013-01-24 Thread GIULIANO (WZTECH)

Hi,

We have an MX80-5 router with 108 BGP sessions.

Most of the BGP sessions carry only a few routes each.

Only 2 of the BGP sessions take full routing tables.

We are running Junos 12.2R2.

After a reboot this morning to put the box into the following mode:


set chassis network-services enhanced-ip


The chassis started allocating memory for no apparent reason.

Over the course of the day it went from 82% to 84%, and it is now hitting a
critical level of 95%.

I think we will need to restart the box again to clean up the memory
allocation.


Has anyone ever seen this kind of issue before?

The total number of routes has not changed.

The box is also running J-Flow in IPFIX mode.
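
A couple of CLI checks that may help show where the memory is going (a sketch only; output formats vary by release):

  show chassis routing-engine        # overall RE memory and CPU utilization
  show system processes extensive    # per-process memory; look at rpd and the sampling daemon (sampled)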

We have opened a case with JTAC, but so far there is no solution to the
problem.


Can anyone share any insight on this? Has anyone hit a similar issue
before?


Thanks a lot,

Giuliano



Re: [j-nsp] Redundancy with MX

2013-01-24 Thread Stephen Hon
Ouch… I picked a single-MX480 chassis design over a dual-MX80 design because
of the unavailability of the MS-DPC card in the MX80.

We're very new to Juniper here, with close to no practical experience.
Nonetheless, we're migrating away from Brocade NetIron MLX to the MX, and
we figured that dual REs and SCBs would be helpful relative to ISSU and NSR,
but I guess the general consensus is that it's preferable to have separate
routers over redundant REs.

I'm wondering though, would dividing some of the routing duties into
logical systems help to protect from a massive system-wide problem? From
what I understand the logical systems spin up their own set of processes
and have their own configuration so it would seem that there could be some
level of protection.


Re: [j-nsp] Redundancy with MX

2013-01-24 Thread Caillin Bathern
There are some per-logical-system processes, but there are also some that
are chassis-wide.  Logical systems also do not support some features,
including (I believe) most MS-DPC functions, FA-LSPs (go figure) and some
others.  You will also always have a single CoS and chassis process for
all logical systems, so there is no real help for a crash there.  Also,
maintenance/provisioning tools will almost never work properly with
logical systems for some reason or another, so I would recommend keeping
logical systems limited to the lab, for testing larger scenarios on less
equipment.

Cheers,
Caillin



Re: [j-nsp] Redundancy with MX

2013-01-24 Thread joel jaeggli

On 1/24/13 2:53 PM, Stephen Hon wrote:

Ouch… I picked a single MX480 chassis design over a dual MX80 because of
the unavailability of the MS-DPC card in the MX80.

Yeah, that's a consideration if you need an MS-DPC.

We're very new to Juniper here with close to no practical experience.
Nonetheless, we're migrating away from Brocade NetIron MLX to the MX and
we figured that dual RE and SCB would helpful relative to ISSU and NSR but
I guess the general consensus is that it's preferable to have separate
routers over redundant RE's.
Dual REs and NSR work and are useful... if you ever have to replace a
failing CB or RE and you do so without a hitch, you'll be pretty
impressed. Software upgrades, even without ISSU, are simpler and less
impactful (and easier to recover from) than with only one RE.
That said, tradeoffs are tradeoffs, and everyone has a slightly different
point at which they compromise to meet their
price/availability/functional needs.
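
For reference, the knobs usually involved in that dual-RE behaviour look roughly like this (a sketch, not a complete recipe; check the requirements for your release):

  set chassis redundancy graceful-switchover   # GRES: keep the PFEs forwarding across an RE switchover
  set routing-options nonstop-routing          # NSR: replicate routing protocol state to the backup RE
  set system commit synchronize                # keep the configuration identical on both REs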



Re: [j-nsp] Redundancy with MX

2013-01-24 Thread james jones
Are you looking to do active-standby or active-active MC-LAG?

On Mon, Jan 21, 2013 at 3:48 PM, Andre Christian 
andre.christ...@o3bnetworks.com wrote:

 Markus - I am building about 10 PoPs and opted for the dual-MX80 design.
 Also looked at making the PoPs all layer 2 with a pair of EX switches.

 Plan to use MC-LAG where applicable.

 On Jan 21, 2013, at 3:43 PM, Markus H hauschild.mar...@gmail.com
 wrote:

  Hi,
 
  I wonder what kind of redundancy the community would prefer for
  small-medium sized PoPs.
  This is what I have come up with so far:
 
  a) 2xMX80
  Pro: Two separate devices, so less prone to config errors and chassis
 failure
  Con: Using redundant uplinks is more complicated (LB would need to be
  done via routing protocol)
 
  b) 1xMX240/480 with redundant SCB and RE
  Pro: Easier to use redundant uplinks (LACP)
  Con: Config error as well as chassis failure brings the whole PoP down
 
  Any further arguments? Best practices? What did you deploy?
 
 
  Best regards,
  Markus


Re: [j-nsp] Redundancy with MX

2013-01-24 Thread Saku Ytti
On (2013-01-24 17:53 -0500), Stephen Hon wrote:

 I'm wondering though, would dividing some of the routing duties into
 logical systems help to protect from a massive system-wide problem? From
 what I understand the logical systems spin up their own set of processes
 and have their own configuration so it would seem that there could be some
 level of protection.

I would personally avoid lsys as much as possible. I would not outright
ban the feature, but I would want extremely good justification for it. I
haven't figured out such a justification yet, but I'm sure there are
reasons to run it.

But you are right: each lsys runs its own copy of RPD, and this is probably
the only way to use more than 4 GB of memory today.
I'm sure there are some failure scenarios where the fault won't propagate
across lsys borders. For example, if one lsys carries VPN BGP and another
carries Internet BGP, and some UPDATE from the DFZ crashes your Internet
RPD, the VPN RPD is possibly still alive and kicking.
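
A quick way to see that separation from the CLI (process naming varies by release, so treat the exact output as an assumption to verify):

  show system processes extensive | match rpd   # expect one rpd process per logical system, plus the default one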

Still, I'd be afraid that the added complexity would bite me more often than
it saves me.

-- 
  ++ytti