Re: [j-nsp] 802.3ad LAG between ASR 1002-X and Juniper MX204
Yes, you'd better drop all the hash + load-balance + link-index config (by the way, on MX the "hash-key" knob only applies to DPC cards, which are 10+ years old). As for the LAG itself, if you want something reliable you really should use LACP instead of a static LAG. Static LAGs are a good way to get your traffic lost...

> On 19 Jul 2019, at 22:02, Gert Doering wrote:
>
> On Fri, Jul 19, 2019 at 07:56:47PM +, Eric Van Tol wrote:
>> On 7/19/19, 3:40 PM, "Gert Doering" wrote:
>>> That sounds a bit weird... why should the device care how the other
>>> end balances its packets? Never heard anyone state this, and I can't
>>> come up with a reason why.
>>
>> *sigh*
>>
>> I'd been focusing so much on the config portion of the documentation
>> that I completely skimmed over the very first paragraph:
>>
>> "MX Series routers with Aggregated Ethernet PICs support symmetrical
>> load balancing on an 802.3ad LAG. This feature is significant when
>> two MX Series routers are connected transparently through deep
>> packet inspection (DPI) devices over an LAG bundle."
>
> Yes, *that* makes total sense :-) (I was thinking "is it something
> with stateful inspection?" but since this - inside MX or Cisco - usually
> operates at the ae/port-channel level and not on the individual member,
> it didn't make sense either.)

___
juniper-nsp mailing list
juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
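For reference, a minimal sketch of what moving from a static LAG to LACP could look like on each side. The bundle and interface names are taken from the configs posted in this thread; `active` mode and `periodic fast` are common choices, not something the original posts specify, so treat this as an illustration rather than a recommendation for these exact boxes:

```
# Junos (MX204)
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp periodic fast

! IOS-XE (ASR 1002-X) - replace the static "channel-group 1 link N"
! with LACP channel-group mode:
interface TenGigabitEthernet0/0/0
 channel-group 1 mode active
interface TenGigabitEthernet0/0/1
 channel-group 1 mode active
```

With LACP on both ends, a member that loses its peer (unidirectional fiber fault, mis-patch) is pulled from the bundle instead of silently blackholing traffic, which is the failure mode being warned about above.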
Re: [j-nsp] 802.3ad LAG between ASR 1002-X and Juniper MX204
> On 7/19/19, 3:40 PM, "Gert Doering" wrote:
> That sounds a bit weird... why should the device care how the other
> end balances its packets? Never heard anyone state this, and I can't
> come up with a reason why.

*sigh*

I'd been focusing so much on the config portion of the documentation
that I completely skimmed over the very first paragraph:

"MX Series routers with Aggregated Ethernet PICs support symmetrical
load balancing on an 802.3ad LAG. This feature is significant when two
MX Series routers are connected transparently through deep packet
inspection (DPI) devices over an LAG bundle. DPI devices keep track of
flows and require information of a given flow in both forward and
reverse directions. Without symmetrical load balancing on an 802.3ad
LAG, the DPIs could misunderstand the flow, leading to traffic
disruptions. By using this feature, a given flow of traffic (duplex)
is ensured for the same devices in both directions."

Carry on, nothing to see here...
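The requirement quoted above boils down to making the hash order-insensitive in source and destination, so a flow and its reverse direction land on the same member link. A toy sketch of the symmetric property (this is not the actual Trio hash algorithm, just an illustration of why sorting the endpoints before hashing gives the DPI box both directions on one link):

```python
import hashlib

def member_link(src_ip: str, dst_ip: str, n_links: int = 2) -> int:
    """Pick a LAG member link from a symmetric (order-insensitive) L3 hash.

    Sorting the endpoints before hashing means A->B and B->A always
    select the same member link, which is what a transparent DPI device
    sitting inside the bundle needs. Toy illustration only.
    """
    key = "|".join(sorted([src_ip, dst_ip])).encode()
    digest = hashlib.sha256(key).digest()
    return digest[0] % n_links

# Forward and reverse directions of the same flow map to the same link.
fwd = member_link("10.45.98.10", "10.45.98.1")
rev = member_link("10.45.98.1", "10.45.98.10")
assert fwd == rev
```

A non-symmetric hash (hashing the tuple in wire order) would give no such guarantee, and roughly half of all flows would traverse different members in each direction.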
Re: [j-nsp] 802.3ad LAG between ASR 1002-X and Juniper MX204
Hi,

On Fri, Jul 19, 2019 at 07:33:59PM +, Eric Van Tol wrote:
> Hi all,
> I need to bring up a 2x10G LAG between an MX204 and a customer's ASR 1002-X
> and I want to make sure the links get load balanced as closely and reliably
> as possible. Junos docs say, "The hash-computation for the forward and
> reverse flow must be identical." They go on to detail how to configure a
> link index on each physical port and that Trio chipsets require symmetrical
> load-balancing.

That sounds a bit weird... why should the device care how the other
end balances its packets? Never heard anyone state this, and I can't
come up with a reason why.

gert

--
"If was one thing all people took for granted, was conviction that if
 you feed honest figures into a computer, honest figures come out. Never
 doubted it myself till I met a computer with a sense of humor."
                            Robert A. Heinlein, The Moon is a Harsh Mistress

Gert Doering - Munich, Germany                     g...@greenie.muc.de
[j-nsp] 802.3ad LAG between ASR 1002-X and Juniper MX204
Hi all,

I need to bring up a 2x10G LAG between an MX204 and a customer's ASR 1002-X
and I want to make sure the links get load balanced as closely and reliably
as possible. Junos docs say, "The hash-computation for the forward and
reverse flow must be identical." They go on to detail how to configure a
link index on each physical port and that Trio chipsets require symmetrical
load-balancing. The LAG will be bridged through to one of the 40G uplinks
on the MX204.

Here's my config for the Juniper side:

chassis {
    aggregated-devices {
        ethernet {
            device-count 1;
        }
    }
    fpc 0 {
        pic 1 {
            hash-key {
                family {
                    multiservice {
                        payload {
                            ip {
                                layer-3;
                            }
                        }
                    }
                }
            }
        }
    }
}
interfaces {
    et-0/0/0 {
        per-unit-scheduler;
        flexible-vlan-tagging;
        encapsulation flexible-ethernet-services;
        unit 2 {
            encapsulation vlan-bridge;
            vlan-id 3190;
            family bridge;
        }
    }
    xe-0/1/0 {
        gigether-options {
            802.3ad {
                ae0;
                link-index 0;
            }
        }
    }
    xe-0/1/1 {
        gigether-options {
            802.3ad {
                ae0;
                link-index 1;
            }
        }
    }
    ae0 {
        encapsulation ethernet-bridge;
        aggregated-ether-options {
            no-flow-control;
            minimum-links 1;
            link-speed 10g;
        }
        unit 0 {
            family bridge;
        }
    }
}
forwarding-options {
    hash-key {
        family multiservice {
            payload {
                ip {
                    layer-3;
                }
            }
            symmetric-hash;
        }
    }
    enhanced-hash-key {
        family multiservice {
            no-mac-addresses;
        }
        symmetric;
    }
}

On the Cisco, I am going to suggest:

port-channel load-balance-hash-algo src-dst-ip
!
interface TenGigabitEthernet0/0/0
 no ip address
 channel-group 1 link 1
!
interface TenGigabitEthernet0/0/1
 no ip address
 channel-group 1 link 2
!
interface Port-channel1
 no negotiation auto
 ip address 10.45.98.10 255.255.255.0
 load-balance flow
!

Can anyone tell me if there is anything I'm missing here? I did not include
my CoS config, which helps with serialization delay issues. I don't have an
ASR 1002-X to test with, and the ASR 920s I do have available to me don't
have full command parity with the 1002-X. I'm also not sure of the chipset
differences between the 1002-X and the 920 model.
Any suggestions appreciated.

Thanks,
evt
Re: [j-nsp] 40Gig Ether for MX480
On 19/Jul/19 16:48, adamv0...@netconsultings.com wrote:
> Agree with the 40g dead in the future statement above but the 100 instead
> of 40 cause it's cheaper argument I'm not actually getting.

Unless your customer says they only have 40Gbps ports, don't want N x 10Gbps,
won't be buying 100Gbps ports anytime soon, and need the service NOW!

Mark.
Re: [j-nsp] 40Gig Ether for MX480
We've used lots of these:

https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mpc5e-6x40ge-24x10ge.html

but if this is your first 40G port, that's probably not cost effective.
Also note, only half the ports can be powered up, so it's 24x10G, or
6x40G, or 12x10G + 3x40G.

If you have a spare MIC slot, I suspect this is a much cheaper route:

https://www.juniper.net/documentation/en_US/release-independent/junos/topics/reference/general/mic-mx-series-40-gigabit-ethernet-qsfp.html

> Date: Thu, 18 Jul 2019 16:58:55 -0600
> From: John Brown
> To: juniper-nsp
> Subject: [j-nsp] 40Gig Ether for MX480
>
> Hi,
> I have a client that is wanting a 40Gig ether handoff. What would folks
> recommend for an interface on an MX480 system?
>
> The customer is also asking if we need to handle G.709 FEC.
>
> Thoughts and tips appreciated.
>
> --
> Respectfully,
> John Brown, CISSP
> Managing Member, CityLink Telecommunications NM, LLC

--
 Jon Lewis, MCP :)           |  I route
                             |  therefore you are
 _ http://www.lewis.org/~jlewis/pgp for PGP public key_
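The half-the-ports constraint above is simple arithmetic: every allowed combination lands on the same powered capacity. A quick sanity check, using only the combinations stated in the message:

```python
# Port combinations mentioned for the MPC5E-6x40GE+24x10GE, where only
# half the ports can be powered up at once (per the message above).
combos = {
    "24x10G": 24 * 10,
    "6x40G": 6 * 40,
    "12x10G + 3x40G": 12 * 10 + 3 * 40,
}

for name, gbps in combos.items():
    print(f"{name}: {gbps} Gbps")

# All three combinations come out to the same 240 Gbps powered capacity.
assert len(set(combos.values())) == 1
```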
Re: [j-nsp] 40Gig Ether for MX480
Same. Juniper is running WAY too late on an ACX5048 replacement with 100G
interfaces. We had great expectations for the ACX5448, until we saw the
price list being 3-4x higher than the 5048.

Regarding the original question, I'd also check the MPC5 if your budget is
restricted and you have slots to spare. You can get 12x10G and 3x40G if you
only need to serve that one customer over 40G. Juniper's pricing for the
MPC7E-MRATE is also ridiculous, at 2x the price of an MX204.

On Fri, Jul 19, 2019 at 10:27 AM Aaron Gould wrote:
>
> My ISP network is core/agg mpls rings of MX960's and ACX5048's. 960's
> connect 40 gig to 5048's using the MPC7E-MRATE in the MX960.
>
> Seems good to me so far.
>
> Also use MX960 40 gig on MPC7E-MRATE to DC/CDN deployments of QFX5120's
> (pure Ethernet tagging).
>
> -Aaron
Re: [j-nsp] 40Gig Ether for MX480
> Saku Ytti
> Sent: Friday, July 19, 2019 7:46 AM
>
> On Fri, 19 Jul 2019 at 04:27, Jared Mauch wrote:
>
> > Is there a reason to not do 4x10G or 1x100G? It's cheap enough these
> > days. If they're in-datacenter I can maybe understand 40G but outside
> > the DC it's unclear to me why someone would do this.
>
> Agreed. 40GE future looks extremely bad. This gen is 25G lanes, next gen
> is 50G lanes. QSFP56 will support 8 or 4 lanes at 25G or 50G. So you can
> get perfect break-out, without wasting any capacity. Commonly today 40GE
> port density is identical to 100GE density, wasting 60% of your
> investment, just to avoid using gearboxes and retimers.

Agree with the "40G is dead in the future" statement above, but I'm not
actually getting the "100 instead of 40 because it's cheaper" argument.

Disclaimer: I'm in a business where at the customer edge it's not so much
about the actual tx/rx rates but rather about port quantities, so I'd
happily use 2:1 front-to-back card oversubscription.

Now, if we're talking about giving customers 100G ports instead of the
requested 40G ports - even though they don't need it (truth be told, they
most likely don't even need 40) - then what options do we have on MX?
MPC7s have come down in price significantly over the past year or so, and
I can get at most 4x 100G ports out of those, whereas I can get 12x 40G
ports. To go for MPC10s instead, just to give each of those, say, 12
customers a 100G port even though they did not ask for 100GE and would
barely use 40 in reality, doesn't quite add up. Maybe once there's a new
12x400G card and MPC10 prices plummet - then sure.

I'd say that unless you're in a business where your access pipes are all
red hot and you need to bear the "premium" pricing of the latest, fastest
HW, the model of "buy capacity now because you will definitely need it in
future" is not the right one. Instead, I'd suggest you buy capacity when
you actually need it; chances are the state of the art will have moved on
and you won't need to pay "premium" any more to fulfil your then-timely
capacity needs.

adam
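The port-quantity argument above can be made concrete with ceiling division over the per-card port counts stated in the message (4x100GE or 12x40GE per MPC7E; the customer count of 12 is the example used above, not real data):

```python
# Rough port-count economics: how many line cards does it take to hand
# 12 edge customers a dedicated port at each speed? Port counts per
# MPC7E are as stated in the message above; nothing here is pricing data.
customers = 12

ports_per_card = {"40GE": 12, "100GE": 4}

cards_needed = {
    speed: -(-customers // ports)  # ceiling division
    for speed, ports in ports_per_card.items()
}

print(cards_needed)  # {'40GE': 1, '100GE': 3}
```

So for port-bound edge deployments, upgrading everyone to 100GE triples the card count before any price difference between card generations is even considered.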
Re: [j-nsp] 40Gig Ether for MX480
My ISP network is core/agg mpls rings of MX960's and ACX5048's. 960's
connect 40 gig to 5048's using the MPC7E-MRATE in the MX960.

Seems good to me so far.

Also use MX960 40 gig on MPC7E-MRATE to DC/CDN deployments of QFX5120's
(pure Ethernet tagging).

-Aaron
Re: [j-nsp] 40Gig Ether for MX480
On Fri, 19 Jul 2019 at 04:27, Jared Mauch wrote:

> Is there a reason to not do 4x10G or 1x100G? It's cheap enough these
> days. If they're in-datacenter I can maybe understand 40G but outside
> the DC it's unclear to me why someone would do this.

Agreed. 40GE future looks extremely bad. This gen is 25G lanes, next gen
is 50G lanes. QSFP56 will support 8 or 4 lanes at 25G or 50G. So you can
get perfect break-out, without wasting any capacity. Commonly today 40GE
port density is identical to 100GE density, wasting 60% of your
investment, just to avoid using gearboxes and retimers.

--
  ++ytti
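The "wasting 60%" figure above is straight lane arithmetic, using only the numbers in the message: a 40GE port drives its four lanes at 10G each, while the same four lanes driven as 100GE run at 25G each.

```python
# Lane arithmetic behind running 40GE in a port whose lanes can do 25G.
lanes = 4
ge40 = lanes * 10   # 40GE: 4 lanes at 10G
ge100 = lanes * 25  # 100GE: 4 lanes at 25G

wasted_fraction = (ge100 - ge40) / ge100
print(f"Capacity left unused running 40GE in a 100GE-capable slot: "
      f"{wasted_fraction:.0%}")
```

When 40GE and 100GE port density on a card are identical, that unused lane capacity is exactly the fraction of the card investment that buys nothing.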