Re: [j-nsp] Junos 18.X on QFX5100

2019-05-28 Thread Franz Georg Köhler
On Sun, May 26, 2019 at 03:15:48PM +0200, Thomas Bellman wrote:
> 
> So far, the only problem I have seen is that the Jet Service Daemon
> (jsd) and the na-grpc-daemon start eating 100% CPU after a few weeks
> on 18.3, but not on the other versions.  Restarting them helps; for a
> few weeks, then they suddenly eat CPU again.  It should also be possible
> to disable them if you don't use them (I haven't gotten around to doing
> that myself, though).

We see the same behaviour with 18.3R1 and regularly need to kill jsd
and na-grpc-daemon.

We run 18.2 to 19.1 and see those processes eating up CPU only on
18.3R1.
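
For anyone else hitting this, a sketch of the manual workaround (PIDs
are placeholders; whether the daemons respawn automatically may depend
on the release):

user@qfx> start shell
% ps ax | grep -E 'jsd|na-grpc' | grep -v grep
% kill <jsd-pid> <na-grpc-pid>        # may require root

If you don't use JET/gRPC at all, removing any configuration under
[edit system services extension-service] should be the cleaner fix,
though we haven't verified that on 18.3R1.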

We see some IPv6 problems in a VC environment, e.g. PR1413543 and
PR1370329 (router advertisements not working).

We also see problems with IPv6 forwarding, where connected hosts cannot
reach the outside until they either ping the IRB gateway or traffic
comes in from the outside.
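
For context, a minimal sketch of the kind of configuration under which
we see both symptoms (VLAN, unit, and prefix are placeholders):

set interfaces irb unit 100 family inet6 address 2001:db8:0:100::1/64
set vlans v100 vlan-id 100 l3-interface irb.100
set protocols router-advertisement interface irb.100 prefix 2001:db8:0:100::/64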






Re: [j-nsp] Hyper Mode on MX

2019-03-07 Thread Franz Georg Köhler
On Thu, Mar 07, 2019 at 12:31:48PM +0100, Olivier Benghozi wrote:
> By the way, HyperMode is only useful if you expect some very high
> throughput with very small packets (none of the MPCs are line-rate
> with very small packets, but HyperMode brings it closer).

Thanks.
While we don't actually need that performance, I was wondering whether
it would be a good idea to enable it preemptively on new installations.
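
For reference, turning it on is only two statements (a sketch; my
understanding is that it requires enhanced-ip network services and a
reboot to take effect, so verify against the docs for your release):

set chassis network-services enhanced-ip
set forwarding-options hyper-mode

After the commit, "show forwarding-options hyper-mode" should report
the current mode.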

* Padding of Ethernet frames with VLAN:
Isn't that very basic functionality, and wouldn't losing it break
Ethernet switching?

* Node Virtualization:
Is this Junos Node Slicing?



Best regards,

Franz Georg Köhler


[j-nsp] Hyper Mode on MX

2019-03-07 Thread Franz Georg Köhler
Hello,

I wonder whether it is generally a good idea to enable HyperMode on MX,
or if there are reasons not to do so.

We are currently running MX960 with FPC7.


Best regards,

Franz Georg Köhler


Re: [j-nsp] QFX5100 red alarm after power-off

2019-02-13 Thread Franz Georg Köhler
On Wed, Feb 13, 2019 at 11:08:16 +, Giovanni Bellac via juniper-nsp wrote:
> 
> after powering off a QFX5100-48T (request system power-off), the fans
> spin down and the ALARM LED lights up red. The switch was working and
> looking as expected without any error messages.
> 
> Is this normal behavior? Does someone have a spare unit for a short test?

Is the switch still powered off, or is this after power-on? (How would
you issue that command while the switch is down?)




Re: [j-nsp] vme.0 IPV6 management IP on QFX5100

2018-10-05 Thread Franz Georg Köhler
On Fri, Oct 05, 2018 at 03:02:35PM +0200, netrav...@gmail.com wrote:
> 
> Does this only apply to the QFX series switch you tried?
> Not an EX model?

I did not try it on EX, only on QFX.
But it is stated here that it doesn't work on EX either:
https://forums.juniper.net/t5/Ethernet-Switching/IPv6-on-vme/td-p/307594




Re: [j-nsp] vme.0 IPV6 management IP on QFX5100

2018-10-05 Thread Franz Georg Köhler
On Mon, Sep 28, 2015 at 06:15:49PM +0200, Franz Georg Köhler wrote:
> 
> I'm trying to set up an IPv6 management IP on a QFX5100 VCF.
> IPv6 is not reachable from the outside, while IPv4 works.

It turned out that IPv6 is simply not supported on the vme interface:
https://forums.juniper.net/t5/Ethernet-Switching/IPv6-on-vme/td-p/307594

Curiously, this has not changed over the past years, and no change is
foreseeable...





[j-nsp] Support contracts in Virtual Chassis

2018-08-27 Thread Franz Georg Köhler
Hello everyone,

with a Virtual Chassis, do all VC members need to be under a service
contract with Juniper, or just the routing engines, in order to get TAC
support for software issues on the VC?

I wonder whether an extra contract for EX4300 members mixed with a
QFX5100 RE is necessary, as the EX switches come with an "Enhanced
Limited Lifetime Warranty" anyway...




Best regards,

Franz Georg


Re: [j-nsp] cdn.juniper.net slow?

2018-03-29 Thread Franz Georg Köhler
On Wed, Mar 28, 2018 at 12:37:23 -0400, Jared Mauch <ja...@puck.nether.net> wrote:
> are you having performance issues with other Akamai sites or just
> with this one?

Hello,

I'm seeing issues just with this one; Akamai CDN is usually faster.
BTW, I mixed up Kbit/s and KByte/s: I actually get up to 1 MByte/s
download, but it often stalls at 300 KByte/s.
Do you usually get faster downloads from cdn.juniper.net?



Best regards,

Franz Georg Köhler


[j-nsp] cdn.juniper.net slow?

2018-03-28 Thread Franz Georg Köhler
Is cdn.juniper.net always slow? It only delivers between 500 and
1000 kilobit per second to me, while the traceroute looks fine and I am
used to much faster downloads from Akamai:

$ wget "https://cdn.juniper.net/software/junos/18.1R1.9/junos-install-mx-x86-64-18.1R1.9.tgz[...]"
--2018-03-28 17:30:14--  https://cdn.juniper.net/software/junos/18.1R1.9/junos-install-mx-x86-64-18.1R1.9.tgz[...]
Resolving cdn.juniper.net (cdn.juniper.net)... 23.37.55.189
Connecting to cdn.juniper.net (cdn.juniper.net)|23.37.55.189|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2726046587 (2.5G) [application/octet-stream]
Saving to: 'junos-install-mx-x86-64-18.1R1.9.tgz[...]'

junos-install-mx-x86-64-18.1R1.9.tgz?SM_US  100%[===================>]   2.54G   933KB/s   in 39m 37s

2018-03-28 18:09:51 (1.09 MB/s) - 'junos-install-mx-x86-64-18.1R1.9.tgz[...]' saved [2726046587/2726046587]


$ mtr -r -w 23.37.55.189
Start: Wed Mar 28 18:13:48 2018
HOST: hermes                                              Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- gw-corpserv.dabuk47DB.frankfurt.de.velia.net         0.0%    10   59.7   7.9   1.3  59.7  18.2
  2.|-- gauss.router.frankfurt.de.velia.net                  0.0%    10    0.3   0.3   0.3   0.3   0.0
  3.|-- ae4.cr-antares.fra10.core.heg.com                    0.0%    10    0.5   0.4   0.3   0.9   0.0
  4.|-- ae2.cr-polaris.fra1.core.heg.com                     0.0%    10    0.4   1.4   0.4   9.9   3.0
  5.|-- ???                                                 100.0    10    0.0   0.0   0.0   0.0   0.0
  6.|-- a23-37-55-189.deploy.static.akamaitechnologies.com   0.0%    10    0.8   0.8   0.8   0.9   0.0




Re: [j-nsp] qfx-5200 bcm fragmentation oddity

2017-09-11 Thread Franz Georg Köhler
On Tue, Aug 29, 2017 at 01:35:49PM -0400, Jared Mauch wrote:
> has anyone seen where a qfx-5200 sends fragment needed when it’s not
> needed if the DF bit is set in the packet?

If the packet is too large and the DF bit is not set, the router will
fragment it as needed. If the DF bit is set, the router must not
fragment, and therefore replies with ICMP "fragmentation needed"
because the packet cannot be forwarded without being fragmented or
reduced in size.
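
For what it's worth, this ordinary behavior is easy to reproduce from a
Junos box with a DF-marked ping sized above the path MTU (address and
size are placeholders):

user@host> ping 192.0.2.1 do-not-fragment size 1472

With a path MTU below 1500, the probes should come back with "frag
needed and DF set" instead of being fragmented in transit.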






[j-nsp] vme.0 IPV6 management IP on QFX5100

2015-09-28 Thread Franz Georg Köhler

Hello,


I'm trying to set up an IPv6 management IP on a QFX5100 VCF.
IPv6 is not reachable from the outside, while IPv4 works.
The switch can ping itself but does not see any IPv6 neighbors.

Any idea what goes wrong here?

> show configuration interfaces vme
unit 0 {
    family inet {
        address x.x.x.22/30;
    }
    family inet6 {
        address x:x:x:x::46/64 {
            primary;
            preferred;
        }
    }
}
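
For comparison, the same address on a member's dedicated management
port; a sketch only, since whether family inet6 is accepted on em0 on
this platform is an assumption I have not verified:

set interfaces em0 unit 0 family inet6 address x:x:x:x::46/64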