Hello,
So using the QFX10K as a P core, it seems 6PE traceroute handling is broken
(15.1X63-D65).
The QFX10K is configured as a typical Juniper P box, and to get it to respond
to traceroute in a 6PE environment, I have the config pasted below.
Basically, you configure family inet6 on
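For context, the ingredients usually involved on a Junos P router are
`icmp-tunneling` under `protocols mpls` (so ICMP responses generated mid-LSP
are tunneled to the egress PE) plus `family inet6` on the relevant interfaces.
A generic sketch only, not the poster's actual config; interface names are
placeholders:

```
set protocols mpls icmp-tunneling
set interfaces et-0/0/0 unit 0 family inet6
set interfaces lo0 unit 0 family inet6
```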
On 16/May/18 16:31, Luca Salvatore via juniper-nsp wrote:
> It is feasible that we'll push more than 200Gb/s
> Any idea what performance is like above that level?
Should be fine.
The chipset is the 3rd-generation Trio EA NPU, the same one used in the
MX10003 MPC; it's good for 400Gbps.
Mark.
We've been trying to track down why our 5100's are dropping traffic due
to lack of buffer space, even with very low link utilization.
It seems like they're classifying all our traffic as best-effort:
> show interfaces xe-0/0/49:0 extensive
Carrier transitions: 1, Errors: 0, Drops:
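Two operational commands are useful for confirming which classifier is bound
to the port and which queue is actually taking the drops. A sketch, reusing
the interface from the output above:

```
show class-of-service interface xe-0/0/49:0
show interfaces queue xe-0/0/49:0
```

If no classifier is explicitly bound, everything lands in the best-effort
queue, which would match the behaviour described above.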
It is feasible that we'll push more than 200Gb/s
Any idea what performance is like above that level?
On Tue, May 15, 2018 at 12:59 PM Tim Jackson wrote:
> I think you're in the ~200Gbps range for them if VXLAN is considered
> tunnel services. If not, it should be line rate.
So that port config tool. It looks like I can't do 24 10g. However, I can do 20
10g and a single 100g which makes no sense to me, but then again I know nothing
about the intricacies of modern ASIC design.
> On May 16, 2018, at 07:43, Mark Tinka wrote:
Once upon a time, Bill Blackford said:
> So that port config tool. It looks like I can't do 24 10g. However, I can do
> 20 10g and a single 100g which makes no sense to me, but then again I know
> nothing about the intricacies of modern ASIC design.
You can do 24x10G -
That port config tool sux; but you can have 24x10G if you turn on the "per
PIC" small selector.
> On 16 May 2018 at 18:15, Bill Blackford wrote:
>
> So that port config tool. It looks like I can't do 24 10g. However, I can do
> 20 10g and a single 100g which makes no
And additionally, 24x10G is the default when you unpack and plug in the box.
> On 16 May 2018 at 18:28, Olivier Benghozi wrote:
>
> That port config tool sux; but you can have 24x10G if you turn on the "per
> PIC" small selector.
>
>> On 16 May 2018 at 18:15,
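For the archives: assuming this thread is about the MX204, the CLI equivalent
of that selector is the per-port speed knob under the PIC. A sketch only; port
numbers are placeholders, and the exact stanza varies by platform, so check
the documentation for your box:

```
set chassis fpc 0 pic 0 port 0 speed 10g
set chassis fpc 0 pic 0 port 1 speed 10g
```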
If you don't need the lossless queue then you can axe most of it. This
is what I've done:
set class-of-service shared-buffer ingress percent 100
set class-of-service shared-buffer ingress buffer-partition lossless percent 5
set class-of-service shared-buffer ingress buffer-partition lossy percent
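The egress partitions usually want the matching treatment under
`shared-buffer egress`, and the resulting allocation can be checked from
operational mode:

```
show class-of-service shared-buffer
```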
On 2018-05-16 18:06, Brian Rak wrote:
> We've been trying to track down why our 5100's are dropping traffic
> due to lack of buffer space, even with very low link utilization.
There's only 12 Mbyte of buffer space on the Trident II chip. If you
get 10 Gbit/s bursts simultaneously on two ports,
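To put a rough number on that: a back-of-the-envelope calculation of how long
the buffer can absorb a 2:1 incast, assuming (generously) that the full 12 MB
were available to a single egress port, which in practice it is not:

```python
# Back-of-the-envelope: how long can the 12 MB on-chip buffer of a
# Trident II absorb a 2:1 incast?  Two 10 Gbit/s senders into a single
# 10 Gbit/s egress port leave a 10 Gbit/s excess arrival rate.
BUFFER_BYTES = 12 * 10**6        # ~12 MB shared packet buffer
EXCESS_BPS = 10 * 10**9          # 10 Gbit/s of excess arrival rate
burst_ms = BUFFER_BYTES / (EXCESS_BPS / 8) * 1000
print(f"buffer absorbs ~{burst_ms:.1f} ms of burst")  # ~9.6 ms
```

So even a sub-10 ms burst is enough to tail-drop, which is consistent with
seeing drops at very low *average* link utilization.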