Hey Jason,

> Jason Lixfeld
> Sent: Wednesday, January 23, 2019 3:02 PM
>
> This is somewhat embarrassing. I was looking at the wrong side of the test
> when I initially observed the issue and I didn't clue into that till now, so
> some of the previous claims are false.
>
> Just for completeness, here's the actual test topology:
>
> [ Tx Tester ] - et-0/0/2 - [ mx1 ] - et-0/0/0 - [ mx2 ] - et-0/0/2 - [ Rx Tester ]
>

Is the test stream unidirectional, say from left (the mx1 side) to right (the mx2 side), or is it bidirectional, please?

Now that you're looking at the right router, can you please run:

show pfe statistics traffic fpc 0 | match "cell|drops"

on mx1?

If it shows info cell drops, then that means the PFE can't cope with the PPS rate.

Since, as Saku confirmed, both et-0/0/2 and et-0/0/0 on mx1 are on the same PFE, the packet processing computational load for ingress and egress processing is not spread across two PFEs but rather executed on a single PFE, which has to handle 200Gbps (100in + 100out) worth of traffic at 64B frames. My guess is that the resulting PPS rate is above the PFE's overall (in+out) PPS budget, which is not that high on Gen3 (this applies to most NPUs out there with 100G ports). If the chip is rated for 800G (400in + 400out), then extrapolating from my notes on MPC7 testing, the 204PFE should cope with roughly 200in + 200out at 64B, so if your traffic is bidirectional you'd be right at the limit.
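Back of the envelope on the PPS rate, using standard Ethernet framing overhead (nothing here is from your test, just line-rate math):

64B frame + 20B preamble/IFG = 84B = 672 bits per frame on the wire
100Gbps / 672b ≈ 148.8Mpps per direction
100in + 100out on the one PFE ≈ 297.6Mpps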
> Incidentally, Olivier brought up hyper mode, so here are the results of that.
>
> et-0/0/2 @ mx1:
> Input packets: 100000017 0 pps
>
> et-0/0/0 @ mx1:
> Output packets: 76645888 8 pps
> …

This is an interesting result; it seems hyper mode helped with the ingress computation bit. Is flow control disabled on all interfaces involved in this test, please? (We don't want mx2 sending pause frames to et-0/0/0 on mx1 when it can't cope with the ingress PPS rate, skewing the results.)

adam
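PS: if flow control turns out to be enabled, disabling it per interface looks roughly like this (a sketch only; depending on your Junos release the knob lives under ether-options or gigether-options):

set interfaces et-0/0/0 ether-options no-flow-control
set interfaces et-0/0/2 ether-options no-flow-control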

