Re: [j-nsp] 40Gig Ether for MX480

2019-07-19 Thread Mark Tinka



On 19/Jul/19 16:48, adamv0...@netconsultings.com wrote:

> I agree with the statement above that 40G is dead going forward, but I'm not
> actually getting the "100 instead of 40 because it's cheaper" argument.

Unless your customer says they only have 40Gbps ports, don't want N x
10Gbps, won't be buying 100Gbps ports anytime soon, and need the service
NOW!

Mark.


Re: [j-nsp] 40Gig Ether for MX480

2019-07-19 Thread Jonathan Lewis
We've used lots of these:

https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mpc5e-6x40ge-24x10ge.html

but if this is your first 40G port, that's probably not cost-effective. Also
note that only half the ports can be powered up at once, so it's 24x10G, or
6x40G, or 12x10G + 3x40G.

If you have a spare MIC slot, I suspect this is a much cheaper route:
https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mic-mx-series-40-gigabit-ethernet-qsfp.html
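
For reference, a minimal interface sketch for a 40GE handoff on that MIC (a
hedged example; the FPC/MIC/port numbering, description, MTU and addressing
are illustrative only):

  set interfaces et-2/0/0 description "CUST: 40GE handoff"
  set interfaces et-2/0/0 mtu 9192
  set interfaces et-2/0/0 unit 0 family inet address 192.0.2.0/31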

Message: 3
Date: Thu, 18 Jul 2019 16:58:55 -0600
From: John Brown 
To: juniper-nsp 
Subject: [j-nsp] 40Gig Ether for MX480
Message-ID:

Content-Type: text/plain; charset="UTF-8"

Hi,
I have a client that wants a 40Gig Ethernet handoff. What would folks
recommend for an interface on an MX480 system?

The customer is also asking whether we need to handle G.709 FEC.

Thoughts and tips appreciated.

-- 
Respectfully,

John Brown, CISSP
Managing Member, CityLink Telecommunications NM, LLC



--
 Jon Lewis, MCP :)          |  I route
                            |  therefore you are
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________

 




Re: [j-nsp] 40Gig Ether for MX480

2019-07-19 Thread Luis Balbinot
Same. Juniper is running WAY too late on an ACX5048 replacement with
100G interfaces. We had great expectations for the ACX5448, until we
saw the price list come in at 3-4x the 5048's.

Regarding the original question, I'd also check the MPC5 if your
budget is restricted and you have slots to spare. You can get 12x10G
and 3x40G if you only need to serve that one customer over 40G.
Juniper's pricing for the MPC7E-MRATE is also ridiculous, at 2x the
price of an MX204.

On Fri, Jul 19, 2019 at 10:27 AM Aaron Gould  wrote:
>
> My ISP network is core/agg MPLS rings of MX960's and ACX5048's. The 960's
> connect 40 gig to the 5048's using the MPC7E-MRATE in the MX960.
>
> Seems good to me so far
>
> Also use MX960 40 gig on MPC7E-MRATE to DC/CDN deployments of QFX5120's
> (pure Ethernet tagging).
>
> -Aaron
>
>


Re: [j-nsp] 40Gig Ether for MX480

2019-07-19 Thread adamv0025
> Saku Ytti
> Sent: Friday, July 19, 2019 7:46 AM
> 
> On Fri, 19 Jul 2019 at 04:27, Jared Mauch  wrote:
> 
> > Is there a reason to not do 4x10G or 1x100G?  It’s cheap enough these
> > days.
> > If they’re in-datacenter I can maybe understand 40G but outside the DC
> > it’s unclear to me why someone would do this.
> 
> Agreed. 40GE future looks extremely bad. This gen is 25G lanes, next gen is
> 50G lanes. QSFP56 will support 8 or 4 lanes at 25G or 50G. So you can get
> perfect break-out, without wasting any capacity. Commonly today 40GE port
> density is identical to 100GE density, wasting 60% of your investment, just to
> avoid using gearboxes and retimers.
> 
I agree with the statement above that 40G is dead going forward, but I'm not
actually getting the "100 instead of 40 because it's cheaper" argument.

Disclaimer: I'm in the business where, at the customer edge, it's not so much
about the actual tx/rx rates as about port quantities, so I'd happily run 2:1
front-to-back card oversubscription.
Now, if we're talking about giving customers 100G ports instead of the 40G
ports they requested (truth be told, they most likely don't even need 40),
what options do we have on MX?
MPC7s have come down in price significantly over the past year or so, and I
can get at most 4x100G ports out of one, whereas I can get 12x40G ports.
Going for MPC10s instead, just to give each of those, say, 12 customers a
100G port even though they didn't ask for 100GE and would barely use 40 in
reality, doesn't quite add up.
Maybe once there's a new 12x400G card and MPC10 prices plummet - then sure.
I'd say that unless you're in the business where your access pipes are all
running hot and you have to bear the "premium" pricing of the latest, fastest
hardware, the "buy capacity now because you'll definitely need it in the
future" model isn't the right one. Instead, I'd suggest buying capacity when
you actually need it; chances are the state of the art will have moved on by
then and you won't need to pay a premium any more to meet your capacity needs.

adam



Re: [j-nsp] 40Gig Ether for MX480

2019-07-19 Thread Aaron Gould
My ISP network is core/agg MPLS rings of MX960's and ACX5048's. The 960's
connect 40 gig to the 5048's using the MPC7E-MRATE in the MX960.

Seems good to me so far

Also use MX960 40 gig on MPC7E-MRATE to DC/CDN deployments of QFX5120's
(pure Ethernet tagging).
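
For anyone setting this up: port speed on the rate-selectable MPC7E-MRATE is
chosen per port under the chassis hierarchy. A hedged sketch, with illustrative
FPC/PIC/port numbers; after commit the port comes up as a 40GE "et-" interface:

  set chassis fpc 3 pic 0 port 2 speed 40g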

-Aaron




Re: [j-nsp] 40Gig Ether for MX480

2019-07-19 Thread Saku Ytti
On Fri, 19 Jul 2019 at 04:27, Jared Mauch  wrote:

> Is there a reason to not do 4x10G or 1x100G?  It’s cheap enough these days.
> If they’re in-datacenter I can maybe understand 40G but outside the DC it’s 
> unclear to me why someone would do this.

Agreed. 40GE future looks extremely bad. This gen is 25G lanes, next
gen is 50G lanes. QSFP56 will support 8 or 4 lanes at 25G or 50G. So
you can get perfect break-out, without wasting any capacity. Commonly
today 40GE port density is identical to 100GE density, wasting 60% of
your investment, just to avoid using gearboxes and retimers.
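
As a concrete illustration of break-out on today's MRATE hardware (a hedged
sketch; FPC/PIC/port numbers illustrative): setting an MRATE port to 10G
channelizes it into four 10GE interfaces,

  set chassis fpc 0 pic 0 port 0 speed 10g

which then show up as xe-0/0/0:0 through xe-0/0/0:3.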

-- 
  ++ytti


Re: [j-nsp] 40Gig Ether for MX480

2019-07-18 Thread Colton Conor
John, did you google this?
https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mic-mx-series-40-gigabit-ethernet-qsfp.html


On Thu, Jul 18, 2019 at 5:59 PM John Brown  wrote:

> Hi,
> I have a client that wants a 40Gig Ethernet handoff. What would folks
> recommend for an interface on an MX480 system?
>
> The customer is also asking whether we need to handle G.709 FEC.
>
> Thoughts and tips appreciated.
>
> --
> Respectfully,
>
> John Brown, CISSP
> Managing Member, CityLink Telecommunications NM, LLC


Re: [j-nsp] 40Gig Ether for MX480

2019-07-18 Thread Nathan Ward

> On 19/07/2019, at 1:26 PM, Jared Mauch  wrote:
> 
> Is there a reason to not do 4x10G or 1x100G?  It’s cheap enough these days. 
> 
> If they’re in-datacenter I can maybe understand 40G but outside the DC it’s 
> unclear to me why someone would do this.

40G doesn't have the potential hashing problems that 4x10G does, and bundles
mean potential drama with protocols on some boxes.
Less of an issue on MX, which is… generally pretty good with these things, but
who knows what the hardware at the other end is.

4x10G as discrete services (i.e. not a bundle) means you've likely still got
balancing problems.

40G is easier to send over a single pair between DCs, too - 4x10G means muxes
or similar.
40G works the same way internally, of course, but does the muxing in the optic.
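
For illustration, a minimal MX-style sketch of the two handoffs (a hedged
example: interface names, the ae numbering and the addressing are illustrative,
and the actual load-balancing still depends on the gear at the far end).

The 4x10G bundle:

  set chassis aggregated-devices ethernet device-count 1
  set interfaces xe-1/0/0 gigether-options 802.3ad ae0
  set interfaces xe-1/0/1 gigether-options 802.3ad ae0
  set interfaces xe-1/0/2 gigether-options 802.3ad ae0
  set interfaces xe-1/0/3 gigether-options 802.3ad ae0
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 unit 0 family inet address 192.0.2.0/31

versus the single 40G port, where there is no per-flow hashing across member
links to worry about:

  set interfaces et-1/1/0 unit 0 family inet address 192.0.2.2/31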

--
Nathan Ward



Re: [j-nsp] 40Gig Ether for MX480

2019-07-18 Thread Jared Mauch
Is there a reason to not do 4x10G or 1x100G?  It’s cheap enough these days. 

If they’re in-datacenter I can maybe understand 40G but outside the DC it’s 
unclear to me why someone would do this.

- Jared

> On Jul 18, 2019, at 6:58 PM, John Brown  wrote:
> 
> Hi,
> I have a client that wants a 40Gig Ethernet handoff. What would folks
> recommend for an interface on an MX480 system?
> 
> The customer is also asking whether we need to handle G.709 FEC.
> 
> Thoughts and tips appreciated.
> 
> -- 
> Respectfully,
> 
> John Brown, CISSP
> Managing Member, CityLink Telecommunications NM, LLC



[j-nsp] 40Gig Ether for MX480

2019-07-18 Thread John Brown
Hi,
I have a client that wants a 40Gig Ethernet handoff. What would folks
recommend for an interface on an MX480 system?

The customer is also asking whether we need to handle G.709 FEC.

Thoughts and tips appreciated.

-- 
Respectfully,

John Brown, CISSP
Managing Member, CityLink Telecommunications NM, LLC