Re: [j-nsp] MX960 vs MX10K

2020-03-17 Thread Andrey Kostin
Your 960 will be choked if you are going to push a decent traffic volume
through it, and hairpinning traffic across the backplane to and from the
service cards will only make it worse.


Just imho. Your choice.

Kind regards,
Andrey Kostin

Aaron Gould wrote 2020-03-09 09:18:

In my case, the 960 has a lot of slots, and I use slot 0 and slot 11 for
MPC-7E-MRATE to light up the 100 gig east/west ring and 40 gig south to ACX
subrings, so I have plenty of slot space for my MS-MPC-128G NAT module... If
I place it somewhere else, then I gotta cross the network to some extent to
get to it... Also, my dual 100 gig inet connections are on a couple of those
960's where I colo the MS-MPC-128G card, so yeah, it's all right there. Not
the case for DSL NAT, that's across the network in a couple of MX104's, but
DSL doesn't have anywhere near the speeds that my FTTH and cable modem subs
have.

-Aaron


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-09 Thread Aaron Gould
Just FYI, I'm running EVPN-MPLS between a couple of DCs, plus MS-MPC-128G NAT
for my cable modem communities, all in the same MX960 chassis... been good so
far.
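For readers unfamiliar with the setup Aaron describes, a minimal EVPN-MPLS routing instance on an MX looks roughly like the sketch below. The instance name, interface, route distinguisher, and route target are hypothetical placeholders, not Aaron's actual configuration:

```
set protocols bgp group IBGP type internal
set protocols bgp group IBGP family evpn signaling
set routing-instances EVPN-100 instance-type evpn
set routing-instances EVPN-100 vlan-id 100
set routing-instances EVPN-100 interface ge-0/0/1.100
set routing-instances EVPN-100 route-distinguisher 192.0.2.1:100
set routing-instances EVPN-100 vrf-target target:65000:100
set routing-instances EVPN-100 protocols evpn
```

MP-BGP with the EVPN address family carries the MAC/IP routes between the DCs over the existing MPLS transport; the service instance itself stays per-VLAN.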

-Aaron


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-09 Thread Aaron Gould
In my case, the 960 has a lot of slots, and I use slot 0 and slot 11 for
MPC-7E-MRATE to light up the 100 gig east/west ring and 40 gig south to ACX
subrings, so I have plenty of slot space for my MS-MPC-128G NAT module... If
I place it somewhere else, then I gotta cross the network to some extent to
get to it... Also, my dual 100 gig inet connections are on a couple of those
960's where I colo the MS-MPC-128G card, so yeah, it's all right there. Not
the case for DSL NAT, that's across the network in a couple of MX104's, but
DSL doesn't have anywhere near the speeds that my FTTH and cable modem subs have.
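As a rough illustration of the kind of MS-MPC NAT service Aaron is describing, an inline service-set on Junos looks something like the following. The pool address, inside prefix, and `ms-` interface name are hypothetical, and the exact knobs vary by release, so treat this as a shape rather than a drop-in config:

```
set services nat pool CGN-POOL address 203.0.113.0/26
set services nat pool CGN-POOL port automatic
set services nat rule CGN-RULE match-direction input
set services nat rule CGN-RULE term T1 from source-address 100.64.0.0/10
set services nat rule CGN-RULE term T1 then translated source-pool CGN-POOL
set services nat rule CGN-RULE term T1 then translated translation-type napt-44
set services service-set CGN nat-rules CGN-RULE
set services service-set CGN interface-service service-interface ms-5/0/0
```

Subscriber traffic is steered at the `ms-` interface, translated by the service card, and handed back to the forwarding plane — which is exactly the double backplane crossing discussed later in this thread.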

-Aaron

-Original Message-
From: juniper-nsp [mailto:juniper-nsp-boun...@puck.nether.net] On Behalf Of
Chris Kawchuk
Sent: Wednesday, March 4, 2020 9:33 PM
To: Tom Beecher
Cc: juniper-nsp
Subject: Re: [j-nsp] MX960 vs MX10K

Just to chime in --- for scale-out, wouldn't you be better off offloading
those MS-MPC functions to another box (i.e. a VM/dedicated appliance/etc.)?

You burn slots for the MS-MPC, plus you burn the backplane crossing twice; so
it's at worst a neutral proposition to externalise it and add low-cost
non-HQoS ports to feed it.

Or is it a case of limited space/power/RUs/want-it-all-in-one-box? And yes,
the MS-MPC won't scale to Nx100G of workload.

- CK.



> On 5 Mar 2020, at 1:36 am, Tom Beecher  wrote:
> 
> It really depends on what you're going to be doing, but I still have quite
> a few MX960s out there running pretty significant workloads without issues.
> 
> I would suspect you hit the limits of the MS-MPCs way before the limits of
> the chassis.
> 
> On Wed, Mar 4, 2020 at 6:56 AM Ibariouen Khalid  wrote:
> 
>> Dear Juniper community,
>> 
>> Is there any limitation of using the MX960 as a DC-GW compared to the MX10K?
>> 
>> Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
>> is not supported on the MX10K, and I want to know if I will have some
>> limitations on the MX960.
>> 
>> Thanks
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-06 Thread Andrey Kostin
I'd be +1 for this. For a DC GW the main concerns should be reliability and
simplicity. If you are going to bring EVPN there, then having fancy
services mixed onto the same chassis may affect your uptime.
Also, I'd take an MX480 instead of a 960 because of the architectural
compromises of the latter. I'm also wondering: if the MX960 only just fits
in terms of number of ports and capacity with some slots occupied by service
cards, maybe an MX10003 + MX480 (or virtualized services) would do the job?


Kind regards,
Andrey


Chris Kawchuk wrote 2020-03-04 22:32:

Just to chime in --- for scale-out, wouldn't you be better off offloading
those MS-MPC functions to another box (i.e. a VM/dedicated
appliance/etc.)?

You burn slots for the MS-MPC, plus you burn the backplane crossing
twice; so it's at worst a neutral proposition to externalise it and
add low-cost non-HQoS ports to feed it.

Or is it a case of limited space/power/RUs/want-it-all-in-one-box?
And yes, the MS-MPC won't scale to Nx100G of workload.

- CK.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-05 Thread Mark Tinka



On 5/Mar/20 18:29, Saku Ytti wrote:

>
> If you do it on d), it's done on the NPU where the neighbour is, entirely
> on the NPU.

Not yet available for IPv6.

Which reminds me - let me see where Juniper are with this ER.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-05 Thread Alexander Arseniev via juniper-nsp
--- Begin Message ---

Hello,
OK, when you say "not stateful in any meaningful way", I believe you mean
data-plane encryption/decryption only - barebones IPsec without IKE exchange
and without anti-replay - or do you?
And the JUNOS BFD variant (c) requires an "anchor PFE" - actually not the PFE
as a forwarding chip, but "PFE" as a short way of saying "the linecard CPU
that runs PPMD", which processes BFD packets from all linecards.

Thanks
Alex


-- Original Message --
From: "Saku Ytti" 
To: "Alexander Arseniev" 
Cc: "Juniper List" 
Sent: 05/03/2020 16:29:57
Subject: Re: Re[2]: [j-nsp] MX960 vs MX10K


On Thu, 5 Mar 2020 at 18:05, Alexander Arseniev  wrote:



 I would expect an "IPSEC anchor PFE", just like it is done with BFD et
 al. at the moment.
 That anchor PFE maintains IKE exchange sequences/anti-replay etc., and any
 IKE/IPSec packet arriving on a different PFE would be redirected there.
 Same thing, really, as what currently happens on a services card.


I'm not sure what you mean by BFD here. BFD can be done in various ways:

a) RPD
b) PPMd on RE CPU
c) PPMd on LC CPU
d) Inline on NPU

If you do it on d), it's done on the NPU where the neighbour is, entirely
on the NPU.

And sure, there is signalling in IPSEC, just like there is in BGP, which
is not done in hardware. But the actual bit-pushing is done in hardware.


--
  ++ytti


--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-05 Thread Saku Ytti
On Thu, 5 Mar 2020 at 18:05, Alexander Arseniev  wrote:


> I would expect an "IPSEC anchor PFE", just like it is done with BFD et
> al. at the moment.
> That anchor PFE maintains IKE exchange sequences/anti-replay etc., and any
> IKE/IPSec packet arriving on a different PFE would be redirected there.
> Same thing, really, as what currently happens on a services card.

I'm not sure what you mean by BFD here. BFD can be done in various ways:

a) RPD
b) PPMd on RE CPU
c) PPMd on LC CPU
d) Inline on NPU

If you do it on d), it's done on the NPU where the neighbour is, entirely
on the NPU.
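As a rough illustration, the knobs that steer Junos between these modes live under `routing-options ppm`. The statement names below are from memory and their availability varies by platform and release, so treat this as a sketch rather than a definitive reference:

```
# (a)/(b) Keep periodic packet management (and thus BFD) on the RE,
# instead of delegating it to the linecard CPU (the default):
set routing-options ppm no-delegate-processing

# (d) Allow inline, NPU-based processing on hardware that supports it:
set routing-options ppm inline-processing-enable
```

With inline processing, the BFD session is anchored on the PFE facing the neighbour, so failure detection keeps running at the configured interval even when the linecard or RE CPU is busy.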

And sure, there is signalling in IPSEC, just like there is in BGP, which
is not done in hardware. But the actual bit-pushing is done in hardware.


-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-05 Thread Alexander Arseniev via juniper-nsp
--- Begin Message ---


-- Original Message --
From: "Saku Ytti" 


IPSEC isn't stateful in any meaningful way. If you can implement MACsec,
it shouldn't take many more transistors to do IPSEC.


I always thought maintaining anti-replay counters/IKE exchange
sequences etc. is a stateful job, just like TCP handshakes/SEQ numbers,
no?





Indeed current gen (post EA, i.e. ZT and YT) Trio does IPSEC in every port.

I would expect an "IPSEC anchor PFE", just like it is done with BFD et
al. at the moment.
That anchor PFE maintains IKE exchange sequences/anti-replay etc., and any
IKE/IPsec packet arriving on a different PFE would be redirected there.

Same thing, really, as what currently happens on a services card.
Thanks
Alex




--- End Message ---
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Saku Ytti
On Thu, 5 Mar 2020 at 05:52, Chris Kawchuk  wrote:

> The only question is whether it needs statefulness or not (IPSEC, CGNAT,
> etc.), but only the OP can answer that.

IPSEC isn't stateful in any meaningful way. If you can implement MACsec,
it shouldn't take many more transistors to do IPSEC.

Indeed current gen (post EA, i.e. ZT and YT) Trio does IPSEC in every port.

-- 
  ++ytti
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Chris Kawchuk
The only question is whether it needs statefulness or not (IPSEC, CGNAT,
etc.), but only the OP can answer that.

- CK.


> On 5 Mar 2020, at 2:39 pm, Mark Tinka  wrote:
> 
> 
> 
> On 5/Mar/20 05:32, Chris Kawchuk wrote:
> 
>> Just to chime in --- for scale-out, wouldn't you be better off offloading
>> those MS-MPC functions to another box (i.e. a VM/dedicated appliance/etc.)?
>> 
>> You burn slots for the MS-MPC, plus you burn the backplane crossing twice; so
>> it's at worst a neutral proposition to externalise it and add low-cost
>> non-HQoS ports to feed it.
>> 
>> Or is it a case of limited space/power/RUs/want-it-all-in-one-box? And
>> yes, the MS-MPC won't scale to Nx100G of workload.
> 
> And along that line, are the services the OP needs on the MS-MPC not
> available natively in the MX1/960/480/240 line cards?
> 
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Mark Tinka



On 5/Mar/20 05:32, Chris Kawchuk wrote:

> Just to chime in --- for scale-out, wouldn't you be better off offloading
> those MS-MPC functions to another box (i.e. a VM/dedicated appliance/etc.)?
>
> You burn slots for the MS-MPC, plus you burn the backplane crossing twice; so
> it's at worst a neutral proposition to externalise it and add low-cost
> non-HQoS ports to feed it.
>
> Or is it a case of limited space/power/RUs/want-it-all-in-one-box? And yes,
> the MS-MPC won't scale to Nx100G of workload.

And along that line, are the services the OP needs on the MS-MPC not
available natively in the MX1/960/480/240 line cards?

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Chris Kawchuk
Just to chime in --- for scale-out, wouldn't you be better off offloading
those MS-MPC functions to another box (i.e. a VM/dedicated appliance/etc.)?

You burn slots for the MS-MPC, plus you burn the backplane crossing twice; so
it's at worst a neutral proposition to externalise it and add low-cost
non-HQoS ports to feed it.

Or is it a case of limited space/power/RUs/want-it-all-in-one-box? And yes,
the MS-MPC won't scale to Nx100G of workload.
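Chris's "burn the backplane crossing twice" point can be made concrete with a little arithmetic. The function below is an illustrative back-of-the-envelope sketch, not a vendor figure: traffic that enters on one line card, hairpins through an in-chassis service card, and exits on another line card crosses the fabric twice, so N Gbps of serviced traffic consumes roughly 2N Gbps of fabric capacity.

```python
def fabric_gbps_consumed(service_traffic_gbps: float, hairpin: bool = True) -> float:
    """Approximate fabric capacity consumed by a serviced flow.

    A plain LC-to-LC flow crosses the fabric once; a flow hairpinned through
    an in-chassis service card crosses it twice (ingress LC -> service card,
    then service card -> egress LC).
    """
    crossings = 2 if hairpin else 1
    return service_traffic_gbps * crossings

# 40 Gbps of CGNAT traffic costs ~80 Gbps of fabric when the NAT card
# sits in the same chassis:
print(fabric_gbps_consumed(40))                 # 80
# ...versus ~40 Gbps if the service is externalised and fed by cheap ports:
print(fabric_gbps_consumed(40, hairpin=False))  # 40
```

Which is why externalising the service is "at worst neutral": you pay for the feed ports instead of the second fabric crossing and the burned slot.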

- CK.



> On 5 Mar 2020, at 1:36 am, Tom Beecher  wrote:
> 
> It really depends on what you're going to be doing, but I still have quite a
> few MX960s out there running pretty significant workloads without issues.
> 
> I would suspect you hit the limits of the MS-MPCs way before the limits of
> the chassis.
> 
> On Wed, Mar 4, 2020 at 6:56 AM Ibariouen Khalid  wrote:
> 
>> Dear Juniper community,
>> 
>> Is there any limitation of using the MX960 as a DC-GW compared to the MX10K?
>> 
>> Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
>> is not supported on the MX10K, and I want to know if I will have some
>> limitations on the MX960.
>> 
>> Thanks
>> ___
>> juniper-nsp mailing list juniper-nsp@puck.nether.net
>> https://puck.nether.net/mailman/listinfo/juniper-nsp
>> 
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Mark Tinka



On 4/Mar/20 20:50, Luis Balbinot wrote:
> The MPC7E-MRATE is only good if you have to add a few 100G ports to a large
> chassis (e.g. an MX960) that has lots of 10G interfaces and/or service cards.
> It's about 2/3 of the price of a new MX10003 with 12x100G.

That's my point :-).

We have several MX480's that have a ton of 10Gbps ports, but only need a
handful of 100Gbps ports. The MPC7E works out to be a little cheaper
than the MX10003 in that regard.

For cases where we need more than a handful of 100Gbps ports for edge
applications, the MX10003 is cheaper than an MX480 with MPC7E's or MPC10E's.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Luis Balbinot
The MPC7E-MRATE is only good if you have to add a few 100G ports to a large
chassis (e.g. an MX960) that has lots of 10G interfaces and/or service cards.
It's about 2/3 of the price of a new MX10003 with 12x100G.

On Wed, Mar 4, 2020 at 12:45 PM Mark Tinka  wrote:

>
>
> On 4/Mar/20 17:18, Tom Beecher wrote:
> > Likely, but if you only need like 4  :)
>
> Then try the MPC7E :-). Cheaper than the MPC10E.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Mark Tinka


On 4/Mar/20 17:18, Tom Beecher wrote:
> Likely, but if you only need like 4  :)

Then try the MPC7E :-). Cheaper than the MPC10E.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Tom Beecher
Likely, but if you only need like 4  :)

On Wed, Mar 4, 2020 at 10:01 AM Mark Tinka  wrote:

>
> On 4/Mar/20 16:53, Giuliano C. Medalha wrote:
>
> With the new MPC10 you can get 10 x 100G or 15 x 100G per slot in an MX240,
> MX480 or MX960.
>
> But you will need a Premium3 chassis with SCBE3 boards to have maximum
> capacity.
>
>
> An MX10008/10016 chassis can get you 24x 100Gbps per slot. That's going to
> be a lot cheaper than an MPC10E-15C-MRATE (and other bits you may need to
> upgrade for the performance).
>
> Mark.
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Mark Tinka


On 4/Mar/20 16:53, Giuliano C. Medalha wrote:
> With the new MPC10 you can get 10 x 100G or 15 x 100G per slot in an
> MX240, MX480 or MX960.
>
> But you will need a Premium3 chassis with SCBE3 boards to have maximum
> capacity.

An MX10008/10016 chassis can get you 24x 100Gbps per slot. That's going
to be a lot cheaper than an MPC10E-15C-MRATE (and other bits you may
need to upgrade for the performance).
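To put the per-slot figures from this sub-thread side by side, here is a quick back-of-the-envelope comparison. The numbers are simply the ones posters state or imply above (e.g. roughly four 100G ports on an MPC7E-MRATE); real usable capacity depends on fabric generation and redundancy mode:

```python
# Approximate 100G-port density per slot, as discussed in this thread.
per_slot_100g = {
    "MX960 + MPC7E-MRATE": 4,    # "a few" 100G ports per card
    "MX960 + MPC10E-10C": 10,    # per Giuliano; needs Premium3/SCBE3
    "MX960 + MPC10E-15C": 15,    # per Giuliano; needs Premium3/SCBE3
    "MX10008/10016": 24,         # per Mark
}

for platform, ports in sorted(per_slot_100g.items(), key=lambda kv: kv[1]):
    print(f"{platform:22s} {ports:3d} x 100G = {ports * 100 / 1000:.1f} Tbps/slot")
```

The gap per slot is what drives Mark's point: once you need 100G density rather than a handful of ports, the MX10K chassis wins on price per port.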

Mark.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Giuliano C. Medalha
With the new MPC10 you can get 10 x 100G or 15 x 100G per slot in an MX240,
MX480 or MX960.

But you will need a Premium3 chassis with SCBE3 boards to have maximum capacity.




From: juniper-nsp  on behalf of Tom Beecher 
Sent: Wednesday, March 4, 2020 11:47:29 AM
To: Mark Tinka 
Cc: juniper-nsp 
Subject: Re: [j-nsp] MX960 vs MX10K

You can still get 100G ports on the 960 chassis with MPC5E/6/7s, depending
on what kind of density you require.

On Wed, Mar 4, 2020 at 9:42 AM Mark Tinka  wrote:

>
>
> On 4/Mar/20 16:36, Tom Beecher wrote:
> > It really depends on what you're going to be doing, but I still have
> > quite a few MX960s out there running pretty significant workloads
> > without issues.
> >
> > I would suspect you hit the limits of the MS-MPCs way before the limits
> > of the chassis.
>
> The classic MX chassis are nowhere close to running out of ideas.
>
> But Juniper have to always be pushing the tech, so emphasis will be on
> the MX10K (although not necessarily at the expense of the MX960/480/240).
>
> I still believe if your use-case is not overly complicated, you may find
> the MX960/480 to be cheaper if you don't need 100Gbps ports.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

WZTECH is a registered trademark of WZTECH NETWORKS.
Copyright © 2018 WZTECH NETWORKS. All Rights Reserved.

CONFIDENTIALITY NOTICE:
The information transmitted in this email message and any attachments is
solely for the intended recipient and may contain confidential or privileged
information. If you are not the intended recipient, any review, transmission,
dissemination or other use of this information is prohibited. If you have
received this communication in error, please notify the sender immediately
and delete the material from any computer, including any copies.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Mark Tinka



On 4/Mar/20 16:47, Tom Beecher wrote:
> You can still get 100G ports on the 960 chassis with MPC5E/6/7s ,
> depending on what kind of density you require.

I didn't say the MX960/480 doesn't support 100Gbps ports; I said the ports
would be cheaper on an MX10003 if you need more than a handful per slot.
We have some MPC7E's with 100Gbps ports on some of our MX480's. Because
we needed so few, it was cheaper than getting an MX10003. But there are
instances where an MX10003 makes more sense, because we need a lot of
100Gbps ports per slot in those areas.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Tom Beecher
You can still get 100G ports on the 960 chassis with MPC5E/6/7s, depending
on what kind of density you require.

On Wed, Mar 4, 2020 at 9:42 AM Mark Tinka  wrote:

>
>
> On 4/Mar/20 16:36, Tom Beecher wrote:
> > It really depends on what you're going to be doing, but I still have
> > quite a few MX960s out there running pretty significant workloads
> > without issues.
> >
> > I would suspect you hit the limits of the MS-MPCs way before the limits
> > of the chassis.
>
> The classic MX chassis are nowhere close to running out of ideas.
>
> But Juniper have to always be pushing the tech, so emphasis will be on
> the MX10K (although not necessarily at the expense of the MX960/480/240).
>
> I still believe if your use-case is not overly complicated, you may find
> the MX960/480 to be cheaper if you don't need 100Gbps ports.
>
> Mark.
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Mark Tinka



On 4/Mar/20 16:36, Tom Beecher wrote:
> It really depends on what you're going to be doing, but I still have quite
> a few MX960s out there running pretty significant workloads without issues.
>
> I would suspect you hit the limits of the MS-MPCs way before the limits of
> the chassis.

The classic MX chassis are nowhere close to running out of ideas.

But Juniper have to always be pushing the tech, so emphasis will be on
the MX10K (although not necessarily at the expense of the MX960/480/240).

I still believe if your use-case is not overly complicated, you may find
the MX960/480 to be cheaper if you don't need 100Gbps ports.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Tom Beecher
It really depends on what you're going to be doing, but I still have quite a
few MX960s out there running pretty significant workloads without issues.

I would suspect you hit the limits of the MS-MPCs way before the limits of
the chassis.

On Wed, Mar 4, 2020 at 6:56 AM Ibariouen Khalid  wrote:

> Dear Juniper community,
>
> Is there any limitation of using the MX960 as a DC-GW compared to the MX10K?
>
> Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
> is not supported on the MX10K, and I want to know if I will have some
> limitations on the MX960.
>
> Thanks
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Mark Tinka



On 4/Mar/20 13:55, Ibariouen Khalid wrote:

> Dear Juniper community,
>
> Is there any limitation of using the MX960 as a DC-GW compared to the MX10K?
>
> Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
> is not supported on the MX10K, and I want to know if I will have some
> limitations on the MX960.

Juniper's future lies in the MX10K.

If your needs are not too complicated, the MX960/480 are still great
options.

For us, the MX480 is our edge workhorse. But where we need to deliver
100Gbps service ports, the MX10003 makes more sense.

Mark.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Ibariouen Khalid
MX10008

On Wed, Mar 4, 2020 at 12:59 PM Alexandre Guimaraes <
alexandre.guimar...@ascenty.com> wrote:

>
>
> What model of MX10k?
>
>
> On 04/03/2020 08:56, "juniper-nsp on behalf of Ibariouen Khalid" <
> juniper-nsp-boun...@puck.nether.net on behalf of ibario...@gmail.com>
> wrote:
>
> Dear Juniper community,
>
> Is there any limitation of using the MX960 as a DC-GW compared to the
> MX10K?
>
> Juniper always recommends the MX10K, but in my case I need the MS-MPC,
> which is not supported on the MX10K, and I want to know if I will have
> some limitations on the MX960.
>
> Thanks
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
>
> https://puck.nether.net/mailman/listinfo/juniper-nsp
>
>
>
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-04 Thread Alexandre Guimaraes


What model of MX10k?


On 04/03/2020 08:56, "juniper-nsp on behalf of Ibariouen Khalid" 
 wrote:

Dear Juniper community,

Is there any limitation of using the MX960 as a DC-GW compared to the MX10K?

Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
is not supported on the MX10K, and I want to know if I will have some
limitations on the MX960.

Thanks
___
juniper-nsp mailing list juniper-nsp@puck.nether.net

https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] MX960 vs MX10K

2020-03-04 Thread Ibariouen Khalid
Dear Juniper community,

Is there any limitation of using the MX960 as a DC-GW compared to the MX10K?

Juniper always recommends the MX10K, but in my case I need the MS-MPC, which
is not supported on the MX10K, and I want to know if I will have some
limitations on the MX960.

Thanks
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp