Re: [j-nsp] MPC-3D-16XGE-SFP

2016-08-31 Thread Josh Reynolds
Oh, I'm sorry, you are absolutely right. I was thinking of EX4500 ports
for some reason.

Now that I think about it, I remember having to use a 10G uplink port on a
switch to connect into a port on that line card for a 1G device as an
emergency band-aid.

On Aug 31, 2016 7:02 PM, "Dragan Jovicic"  wrote:

> Those are specifically 10G ports not 1G.
> We run these on some of our MX960 and MX480, both toward core and edge.
> They are fine high density 10G cards.
>
> Things to note. Card has 4 PFE; each MQ chip is good for ~70Gbps give or
> take +/- 10Gbps depending on packet sizes.
> This packet memory bandwidth is shared between both wan-facing and
> fabric-facing ports.
>
> Meaning, if you run wan-fabric traffic you get 35Gbps max. If you run
> wan-wan traffic you can get near line-rate (this is how fabricless mx80
> gets ~80Gbps).
>
> This is the definition of "full line rate" with these MPC2 cards.
>
> Regards
>
> Dragan
>
>
> On Thu, Sep 1, 2016 at 12:39 AM, Josh Reynolds 
> wrote:
>
>> Is it actually SFPP? If so, I've had a couple of these. With a regular SCB
>> you can only run line rate on 12 ports. With the enhanced SCB you can run
>> line rate on all 16, if memory serves. Supports GE and 10G SFP modules.
>>
>> I don't know about any warnings or concerns on your chassis as I ran these
>> on MX960's.
>>
>> On Aug 31, 2016 5:33 PM, "John Brown"  wrote:
>>
>>  Hi,
>>
>> I've received some pretty good pricing on the MPC-3D-16XGE-SFP card,
>> and was wondering what the list.wisdom is ??
>>
>> We are an ISP.  That will be the usage.
>> Some ports will have BGP, many will be static routed.
>>
>> Will this run full line rate on all 16 ports ?
>> Can I run multiple ISP type clients on this card ?
>>
>> What should I worry about ?
>>
>> Going into a MX480 chassis with MPC2 and MPC3 cards existing.
>>
>> Thanks
>>
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MPC-3D-16XGE-SFP

2016-08-31 Thread Dragan Jovicic
Those are specifically 10G ports, not 1G.
We run these on some of our MX960s and MX480s, both toward core and edge.
They are fine high-density 10G cards.

Things to note: the card has 4 PFEs, and each MQ chip is good for ~70 Gbps,
give or take 10 Gbps depending on packet sizes.
That packet-memory bandwidth is shared between the WAN-facing and the
fabric-facing side of the chip.

Meaning, if you run WAN-to-fabric traffic you get about 35 Gbps max per PFE.
If you run WAN-to-WAN traffic you can get near line rate (this is how the
fabric-less MX80 gets ~80 Gbps).

This is what "full line rate" means in practice with these MPC2 cards.
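(Worked out from the numbers above, assuming the 16 ports map 4 to each of
the 4 PFEs:

    per PFE:  4 x 10GE = 40 Gbps of front-panel capacity, but only
              ~70 / 2 = ~35 Gbps toward the fabric, since each packet
              eats packet-memory bandwidth on both the WAN and fabric side
    per card: 4 x ~35 Gbps = ~140 Gbps toward the fabric, against
              16 x 10GE = 160 Gbps of front-panel capacity)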

Regards

Dragan


On Thu, Sep 1, 2016 at 12:39 AM, Josh Reynolds  wrote:

> Is it actually SFPP? If so, I've had a couple of these. With a regular SCB
> you can only run line rate on 12 ports. With the enhanced SCB you can run
> line rate on all 16, if memory serves. Supports GE and 10G SFP modules.
>
> I don't know about any warnings or concerns on your chassis as I ran these
> on MX960's.
>
> On Aug 31, 2016 5:33 PM, "John Brown"  wrote:
>
>  Hi,
>
> I've received some pretty good pricing on the MPC-3D-16XGE-SFP card,
> and was wondering what the list.wisdom is ??
>
> We are an ISP.  That will be the usage.
> Some ports will have BGP, many will be static routed.
>
> Will this run full line rate on all 16 ports ?
> Can I run multiple ISP type clients on this card ?
>
> What should I worry about ?
>
> Going into a MX480 chassis with MPC2 and MPC3 cards existing.
>
> Thanks
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MPC-3D-16XGE-SFP

2016-08-31 Thread Josh Reynolds
Is it actually SFPP? If so, I've had a couple of these. With a regular SCB
you can only run line rate on 12 ports; with the enhanced SCB you can run
line rate on all 16, if memory serves. It supports GE and 10G SFP modules.

I don't know about any warnings or concerns for your chassis, as I only ran
these in MX960s.
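(The arithmetic behind that, for what it's worth: 12 x 10GE = 120 Gbps
versus 16 x 10GE = 160 Gbps of fabric bandwidth needed per slot, so the
regular-versus-enhanced SCB difference comes down to how much per-slot
fabric capacity the fabric boards provide.)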

On Aug 31, 2016 5:33 PM, "John Brown"  wrote:

 Hi,

I've received some pretty good pricing on the MPC-3D-16XGE-SFP card,
and was wondering what the list.wisdom is ??

We are an ISP.  That will be the usage.
Some ports will have BGP, many will be static routed.

Will this run full line rate on all 16 ports ?
Can I run multiple ISP type clients on this card ?

What should I worry about ?

Going into a MX480 chassis with MPC2 and MPC3 cards existing.

Thanks
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MPC-3D-16XGE-SFP

2016-08-31 Thread John Brown
Hi,

I've received some pretty good pricing on the MPC-3D-16XGE-SFP card,
and was wondering what the list wisdom is.

We are an ISP, and that will be the usage.
Some ports will have BGP; many will be statically routed.

Will this run full line rate on all 16 ports?
Can I run multiple ISP-type clients on this card?

What should I worry about?

It is going into an MX480 chassis with existing MPC2 and MPC3 cards.

Thanks
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] IPv6 traceroute over 6PE

2016-08-31 Thread James Jun
On Wed, Aug 31, 2016 at 03:50:26PM -0500, kwor...@gmail.com wrote:
> I have a simple lab setup using logical systems on an MX80 and I'm trying 
> to get trace route to work from CE1 <-> PE1 <-> P1 <-> PE2 <-> CE2 using ipv6 
> and I always get loss on the 2nd hop although it completes after the 
> timeouts.  Any clues would be appreciated.

Have you tried enabling icmp-tunneling under [protocols mpls] on hop 2? It
seems like your P router is popping the label and trying to return the ICMP
error using inet.0.

James
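
(For reference, a minimal sketch of the knob James is referring to, applied
on the P router; in a logical-systems lab like this one it would presumably
sit under the matching "logical-systems P1" hierarchy:

protocols {
    mpls {
        /* Tunnel ICMP errors (including the ICMPv6 time-exceeded here) for
           MPLS transit traffic down the LSP to the egress PE, which then
           routes them back to the source, instead of the P router trying
           to answer directly from its own routing table. */
        icmp-tunneling;
    }
}

If that is the issue, hop 2 should start answering in the traceroute instead
of timing out.)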
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] IPv6 traceroute over 6PE

2016-08-31 Thread kworm83
I have a simple lab setup using logical systems on an MX80, and I'm trying to
get traceroute to work from CE1 <-> PE1 <-> P1 <-> PE2 <-> CE2 using IPv6. I
always see loss at the second hop, although the trace completes after the
timeouts. Any clues would be appreciated.

Traceroute output:

kevin@lab> traceroute ::2 logical-system CE1
traceroute6 to ::2 (::2) from ::2, 64 hops max, 12 byte packets
 1  ::1 (::1)  0.886 ms  0.831 ms  0.506 ms
 2  * * *
 3  ::1 (::1)  0.804 ms  0.579 ms  0.599 ms
 4  ::2 (::2)  0.702 ms  0.608 ms  0.722 ms

kevin@lab> traceroute ::2 logical-system CE2
traceroute6 to ::2 (::2) from ::2, 64 hops max, 12 byte packets
 1  ::1 (::1)  0.679 ms  0.544 ms  2.186 ms
 2  * * *
 3  ::1 (::1)  0.771 ms  0.619 ms  0.594 ms
 4  ::2 (::2)  0.724 ms  0.605 ms  0.622 ms

The config of the logical systems is simple:

CE1 {
    interfaces {
        lt-0/0/10 {
            unit 8 {
                description "Link to PE1";
                encapsulation ethernet;
                peer-unit 7;
                family inet {
                    address 10.0.0.2/24;
                }
                family inet6 {
                    address ::2/64;
                }
            }
        }
    }
    routing-options {
        rib inet6.0 {
            static {
                route 0::/0 next-hop ::1;
            }
        }
    }
}
CE2 {
    interfaces {
        lt-0/0/10 {
            unit 10 {
                description "Link to PE2";
                encapsulation ethernet;
                peer-unit 9;
                family inet {
                    address 10.1.1.2/24;
                }
                family inet6 {
                    address ::2/64;
                }
            }
        }
    }
    routing-options {
        rib inet6.0 {
            static {
                route 0::/0 next-hop ::1;
            }
        }
    }
}
P1 {
    interfaces {
        lt-0/0/10 {
            unit 4 {
                description "Link to PE2";
                encapsulation ethernet;
                peer-unit 3;
                family inet {
                    address 10.4.4.1/24;
                }
                family iso;
                family inet6;
                family mpls;
            }
            unit 6 {
                description "Link to PE1";
                encapsulation ethernet;
                peer-unit 5;
                family inet {
                    address 10.5.5.1/24;
                }
                family iso;
                family inet6;
                family mpls;
            }
        }
        lo0 {
            unit 6 {
                family inet {
                    address 75.75.75.1/32;
                }
                family iso {
                    address 49.0001.0750.7507.5001.00;
                }
                family mpls;
            }
        }
    }
    protocols {
        rsvp {
            interface lt-0/0/10.6;
            interface lt-0/0/10.4;
            interface lo0.6;
        }
        mpls {
            interface lt-0/0/10.6;
            interface lt-0/0/10.4;
            interface lo0.6;
        }
        isis {
            level 1 disable;
            interface lt-0/0/10.4;
            interface lt-0/0/10.6;
            interface lo0.6 {
                passive;
            }
        }
    }
}
PE1 {
    interfaces {
        lt-0/0/10 {
            unit 5 {
                description "Link to P1";
                encapsulation ethernet;
                peer-unit 6;
                family inet {
                    address 10.5.5.2/24;
                }
                family iso;
                family inet6;
                family mpls;
            }
            unit 7 {
                description "Link to CE1";
                encapsulation ethernet;
                peer-unit 8;
                family inet {
                    address 10.0.0.1/24;
                }
                family inet6 {
                    address ::1/64;
                }
            }
        }
        lo0 {
            unit 5 {
                family inet {
                    address 75.75.75.2/32;
                }
                family iso {
                    address 49.0001.0750.7507.5002.00;
                }
                family mpls;
            }
        }
    }
    protocols {
        rsvp {
            interface lt-0/0/10.5;
            interface lo0.5;
        }
        mpls {
            ipv6-tunneling;
            label-switched-path PE1-to-PE2 {
                from 75.75.75.2;
                to 75.75.75.3;
            }
            interface lt-0/0/10.5;
            interface lo0.5;
        }
        isis {
            level 1 disable;
            interface lt-0/0/10.5;
            interface lo0.5 {
                passive;
            }
        }
    }
    routing-options {
        rib inet6.0 {
            static {