Re: [j-nsp] force-64bit

2016-06-01 Thread Phil Rosenthal

> On Jun 1, 2016, at 10:35 AM, Tim Hoffman  wrote:
> 
> 64-bit RPD is newer, and by nature will have more bugs - so don't run this 
> unless you need it. Check this with "show task memory" - this will show what 
> you have used of the RPD-accessible memory. As Phil notes, you'd need 
> significant RIB scale (which does exist in larger networks) to require this…

I suspect the risk of bugs from this change is not that high; in all 
likelihood, the only changes required were a different compiler and perhaps a 
few 64-bit variables in place of 32-bit ones. But even with a low risk of 
bugs, if there is no benefit, I'm not sure what the point is of taking on even 
that low risk.
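
For reference, the knob in question is a one-liner. As a sketch, with the 
statement name matching the doc Theo linked (verify against your release 
before committing):

  [edit]
  user@mx# set system processes routing force-64-bit
  user@mx# commit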

Best Regards,
-Phil


Re: [j-nsp] force-64bit

2016-06-01 Thread Phil Rosenthal
I’ll ask the obvious question — do you actually have a ‘need’ for this?

Even on systems with many peers, 5+ full tables, and a full IGP mesh, I 
haven't seen rpd much over 1 GB of RAM in use. 64-bit rpd would only be 
beneficial if you need an rpd process using more than 4 GB of RAM.
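
The quick check, with no config change needed, is rpd's own accounting of its 
task memory, via the stock CLI:

  user@mx> show task memory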

Is this a theoretical use case, or is there an actual need?

Best Regards,
-Phil Rosenthal
> On Jun 1, 2016, at 3:58 AM, Theo Voss <m...@theo-voss.de> wrote:
> 
> Hi,
> 
> has anybody enabled "system processes force-64bit" on 64-bit Junos? Have you 
> done this during daily ops or during a maintenance window? According to the 
> Juniper documentation [1], rpd need not be restarted to enable 64-bit mode: 
> "You need not restart the routing protocol process (rpd) to use the 64-bit 
> mode."...
> 
> Thanks in advance for your comments! ;-)
> 
> [1] https://www.juniper.net/documentation/en_US/junos14.2/topics/reference/configuration-statement/routing-edit-system-processes.html
> 
> 
> Cheers,
> Theo Voss


Re: [j-nsp] RE-S-X6-64G-BB

2016-05-25 Thread Phil Rosenthal

> On May 25, 2016, at 5:37 PM, Mark Tinka <mark.ti...@seacom.mu> wrote:
> 
> 
> 
> On 25/May/16 23:33, Phil Rosenthal wrote:
> 
>> There is a different network card driver, so it would require a different 
>> kernel.
> 
> Which needs time, porting and testing...
> 
> Mark.


Oh, I know; I was just saying that this is probably the biggest technical 
reason, and it would therefore potentially impact Junos for older REs too, 
since they all run the same kernel.

I think Juniper made the right call -- if you have a reason to "need" the 
bleeding edge RE, you should also be fine with running the bleeding edge Junos.

-Phil


Re: [j-nsp] RE-S-X6-64G-BB

2016-05-25 Thread Phil Rosenthal

> On May 25, 2016, at 5:03 PM, Mark Tinka  wrote:
> 
> 
> 
> On 25/May/16 21:50, raf wrote:
> 
>> 
>> 
>> This is really strange. I don't see a technical reason why 14, 13, or
>> even older releases could not use a newer RE. After all, it is just a newer
>> CPU and more RAM.
>> It should work at least with one core and 4 GB enabled.
> 
> Time involved in porting and testing.
> 
There is a different network card driver, so it would require a different 
kernel.

-Phil


Re: [j-nsp] RE-S-X6-64G-BB

2016-05-25 Thread Phil Rosenthal

> On May 25, 2016, at 2:57 PM, Saku Ytti  wrote:
> 
> I would personally be very interested in jumping to 16.1 as soon as
> practical, as BGP is supposedly in its own thread, maybe RPD on its own
> core. That might bring a lot of stability.

RPD is already essentially on its own core in 15.1, since the kernel is 
finally SMP.  I don't see how there would be any benefit to forcing affinity, 
if that's what you are implying?

-Phil


Re: [j-nsp] RE-S-X6-64G-BB

2016-05-25 Thread Phil Rosenthal

> On May 25, 2016, at 1:59 PM, Colton Conor  wrote:
> 
> So how long before Junos 15.1R4 or higher will be the official JTAC
> Recommended Junos Software Version for MX Series with NG MPCs? Right now
> it's Junos 14.1R7.


Based on how things have gone in the past, the official recommendation won't 
move for 1-2 years.

-Phil


Re: [j-nsp] RE-S-X6-64G-BB

2016-05-25 Thread Phil Rosenthal

> On May 25, 2016, at 12:31 PM, Colton Conor  wrote:
> 
> Assuming we are not going to be using these new REs to load any 3rd-party
> software on them, the RE-S-X6-64G-BB will just be a quicker processor with
> more RAM compared to an older RE, right? Are there any other benefits?
> Juniper is offering the RE-S-X6-64G-BB for the same price as
> the RE-S-1800X4-32G. Not sure why one would not go with the new RE with
> more RAM?

This new RE requires Junos 15.1R4 minimum.  If you have a reason to use 14.x or 
13.x, then this RE will not work for you.


> On May 25, 2016, at 10:34 AM, Saku Ytti  wrote:
> 
> I don't see those corner cases as particularly useful. I can't help but
> wonder: is the VM a white-label play in disguise? Are some customers not
> running IOS-XR/JunOS at all, just not starting that VM, and instead
> running their own VM, with under-NDA documentation on how to program the
> hardware? Or is the 3rd-party VM just a marketing gimmick, because they get
> the VM 'for free': they need it for their own infrastructure anyway, to
> provide better redundancy, upgradability, and loose coupling to the
> underlying control-plane HW. So, as it is going to be there anyhow, there is
> no harm in investing some marketing effort to see if the market figures out
> whether there is an application for 3rd-party VMs.

I would bet money on this being the case. I would assume that a certain company 
that has a large search engine is of the general opinion "We like the hardware, 
but we do not want to use your software in any way. We can write our own 
software."

-Phil


Re: [j-nsp] Multi Core on JUNOS?

2015-10-02 Thread Phil Rosenthal
> On Oct 2, 2015, at 5:11 PM, Colton Conor  wrote:
>
> Does anyone have an update on when Juniper will release SMP (symmetric
> multiprocessing) support, aka the ability to use multiple cores? Do you
> think the second core on the MX80 or MX104 will ever be used? Does the
> RE-2000 in the MX240/480 have one or two cores?
>

I have heard that this is planned for Junos 15.

-Phil
>> On Mon, May 11, 2015 at 7:04 AM, Mark Tinka  wrote:
>>
>>
>>
>>> On 11/May/15 13:27, Olivier Benghozi wrote:
>>> http://www.juniper.net/documentation/en_US/junos13.3/topics/reference/configuration-statement/routing-edit-system-processes.html
>>>
>>> "Statement introduced in Junos OS Release 13.3 R4"
>>
>> We decided not to enable this now because I understand the plan is for
>> 64-bit mode to become the default in later versions of Junos.
>>
>> Mark.


Re: [j-nsp] Policy-statement to match on metrics less than, greater than, or within a range

2015-08-27 Thread Phil Rosenthal

 On Aug 27, 2015, at 7:15 AM, Alexander Arseniev arsen...@btinternet.com 
 wrote:
 
 There is a floor for MED and it is 0.
 What You could do is :
 
 term 1 then { metric subtract 1000; next term }
 term 2 from metric 0; then { local-preference 100; accept } 
 
 You won't be able to keep the original MED though :-(
 HTH
 Thanks
 Alex

Thanks!

This is obviously much less elegant than Cisco's solution (and honestly, it is 
pretty stunning to me to ever say such a thing), but it is still usable.
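
Fleshed out into full policy-statement syntax, the workaround might look 
something like this (an untested sketch, relying on MED flooring at 0 as Alex 
describes):

/* Untested sketch: policy and term names are illustrative. Term "shift"
   subtracts 1000 from the MED (floored at 0); term "match-floor" then
   catches anything that hit the floor, i.e. original MED <= 1000. */
policy-statement med-le-1000 {
    term shift {
        then {
            metric subtract 1000;
            next term;
        }
    }
    term match-floor {
        from metric 0;
        then {
            local-preference 100;
            accept;
        }
    }
}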

Hopefully someone from Juniper is reading this and decides to fix this by 
adding a proper solution.

-Phil



[j-nsp] Policy-statement to match on metrics less than, greater than, or within a range

2015-08-26 Thread Phil Rosenthal
Hello all,

On Cisco, it is possible to write a route policy as such:

route-policy test
  if med le 1000 then
    set local-preference 100
  endif
end-policy

Is there any way to do the same thing with Juniper? It seems that the "from 
metric" statement only accepts a static value (comparable to "if med eq 1000").


Thanks in advance :)

-Phil

Re: [j-nsp] MX240 SCBE2 10G ports

2015-08-19 Thread Phil Rosenthal
 On Aug 19, 2015, at 8:51 AM, John Center john.cen...@outlook.com wrote:

 Hi,

 Are there any limitations in using the SCBE2's 10G ports? I've heard
 that they can't be used as regular data ports.   Is this true?  I saw
 that Rob Hass asked a similar question in December, but it looks like no
 one replied to him.


Currently, you cannot use those ports for any purpose at all. They are
not enabled in software. Juniper has also been unclear on what
purpose, if any, these ports would ever have.

-Phil


Re: [j-nsp] MX240 SCBE2 10G ports

2015-08-19 Thread Phil Rosenthal

 On Aug 19, 2015, at 11:42 AM, John Center john.cen...@outlook.com wrote:
 Thanks, Phil.  Doesn't make much sense then.  If these ports were 
 usable, it would make the MX240 much more attractive from our perspective.
 

I suggest you bring this up with your Juniper sales rep :)

Juniper is very much driven by customer feedback.

-Phil


Re: [j-nsp] Junos power-off not graceful

2015-07-29 Thread Phil Rosenthal

 On Jul 29, 2015, at 5:53 AM, Mark Tinka mark.ti...@seacom.mu wrote:
 
 
 We once experienced a complete power outage to some MX480 devices that
 caused MPC failure. Those had to be replaced.
 

I would suspect that this was caused by a power fluctuation just before the 
power outage -- a surge or brownout. 

 Other items at risk could be hard drives inside the RE.
 
 It's hard to say - in some cases, you may be lucky, you may not be.
 Hence the graceful shutdown, to increase your chances of a good outcome.

During early testing, before adding the MX960 to our network, we did a "pull 
the plug" test on a chassis, and had no hardware failures of any kind.

One thing you will absolutely encounter, however, is a longer boot time on the 
following boot, as an fsck must be completed on any filesystem that was not 
properly unmounted.  There is, of course, a risk that if the router happens to 
be making a large write at the exact moment of power loss, the fsck will not 
complete automatically, and you will need to push it through the process 
manually, so be prepared with a serial console afterwards.

If this type of power-off scares you, that should point to changes in your 
disaster procedures.  In the event of a real disaster, you will not have time 
to do a shutdown manually -- so I would suggest writing software, triggered by 
a low charge level from your UPS system combined with a generator failing to 
start, that performs a graceful shutdown of the MX. In the event of a UPS 
failure, you will obviously not have time for even an automated shutdown, and 
this type of "pull the plug" event is exactly what will happen.
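
The graceful shutdown itself is just the stock CLI, issued over SSH or NETCONF 
by whatever watches the UPS; something like the following (exact command 
availability varies by release, so verify on yours):

  user@mx> request system power-off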

Best Regards,
-Phil Rosenthal
ISPrime


Re: [j-nsp] jtree0 Memory full on MX480?

2015-07-21 Thread Phil Rosenthal
Over the years, we have run into a couple of issues that translated to either 
exhausting FPC memory or corrupting the Jtree. Currently, life is good on 
13.3R6, which we run on all MXs globally. I haven't run into this specific 
issue, and I am just assuming that the behavior is improved.

Best Regards,
-Phil
 On Jul 21, 2015, at 8:49 PM, Jeff Meyers jeff.mey...@gmx.net wrote:
 
 Hi,
 
 yes, an upgrade is absolutely possible but since there are no major issues 
 with that release, we didn't do that yet. Are you just assuming a newer 
 software improves that or did Juniper really do something on that side?
 
 
 Best,
 Jeff
 
 On 22.07.2015 at 02:45, Phil Rosenthal wrote:
 Disabling Basic-Table certainly bought you some time.
 
 Agree that it still does not look good. I suspect that you are running into 
 a software issue.  11.4 is no longer a supported version, 12.3 is the 
 minimum supported today, with 13.3R6 as the recommended version.  Is it 
 possible for you to upgrade?
 
 Best Regards,
 -Phil
 On Jul 21, 2015, at 7:23 PM, Jeff Meyers jeff.mey...@gmx.net wrote:
 
 Hi Phil,
 
 sure:
 
 
 {master}
 jeff@cr0 show configuration | display set | match rpf-check
 
 {master}
 nico@FRA4.cr0 show version
 Hostname: cr0
 Model: mx480
 JUNOS Base OS boot [11.4R9.4]
 JUNOS Base OS Software Suite [11.4R9.4]
 JUNOS Kernel Software Suite [11.4R9.4]
 JUNOS Crypto Software Suite [11.4R9.4]
 JUNOS Packet Forwarding Engine Support (M/T Common) [11.4R9.4]
 JUNOS Packet Forwarding Engine Support (MX Common) [11.4R9.4]
 JUNOS Online Documentation [11.4R9.4]
 JUNOS Voice Services Container package [11.4R9.4]
 JUNOS Border Gateway Function package [11.4R9.4]
 JUNOS Services AACL Container package [11.4R9.4]
 JUNOS Services LL-PDF Container package [11.4R9.4]
 JUNOS Services PTSP Container package [11.4R9.4]
 JUNOS Services Stateful Firewall [11.4R9.4]
 JUNOS Services NAT [11.4R9.4]
 JUNOS Services Application Level Gateways [11.4R9.4]
 JUNOS Services Captive Portal and Content Delivery Container package 
 [11.4R9.4]
 JUNOS Services RPM [11.4R9.4]
 JUNOS Services HTTP Content Management package [11.4R9.4]
 JUNOS AppId Services [11.4R9.4]
 JUNOS IDP Services [11.4R9.4]
 JUNOS Services Crypto [11.4R9.4]
 JUNOS Services SSL [11.4R9.4]
 JUNOS Services IPSec [11.4R9.4]
 JUNOS Runtime Software Suite [11.4R9.4]
 JUNOS Routing Software Suite [11.4R9.4]
 
 {master}
 nico@FRA4.cr0 show route summary
 Autonomous system number: X
 Router ID: A.B.C.D
 
 inet.0: 546231 destinations, 1747898 routes (545029 active, 11 holddown, 
 2994 hidden)
  Direct:   1143 routes,   1140 active
   Local:   1144 routes,   1144 active
OSPF: 81 routes, 18 active
 BGP: 1745429 routes, 542631 active
  Static:100 routes, 95 active
IGMP:  1 routes,  1 active
 
 Basic-Table.inet.0: 212783 destinations, 215070 routes (212778 active, 5 
 holddown, 0 hidden)
  Direct:   2283 routes,   1140 active
   Local:   2288 routes,   1144 active
OSPF: 17 routes, 17 active
 BGP: 210387 routes, 210382 active
  Static: 95 routes, 95 active
 
 inet6.0: 23331 destinations, 39242 routes (23330 active, 1 holddown, 113 
 hidden)
  Direct:451 routes,368 active
   Local:373 routes,373 active
   OSPF3:  9 routes,  9 active
 BGP:  38399 routes,  22571 active
  Static: 10 routes,  9 active
 
 Basic-Table.inet6.0: 12295 destinations, 12295 routes (12292 active, 3 
 holddown, 0 hidden)
  Direct:366 routes,366 active
   Local:373 routes,373 active
   OSPF3:  8 routes,  8 active
 BGP:  11539 routes,  11536 active
  Static:  9 routes,  9 active
 
 {master}
 
 
 I actually thought this Basic-Table was inactive. It is not, so I'm going 
 to deactivate it now. Since it was holding over 200k routes, this is for 
 sure a lot. Doing that made the syslog message disappear, but it didn't 
 actually free up as much as I was hoping for:
 
 GOT: Jtree memory segment 0 (Context: 0x44976cc8)
 GOT: ---
 GOT: Memory Statistics:
 GOT:16777216 bytes total
 GOT:14613176 bytes used
 GOT: 2145824 bytes available (865792 bytes from free pages)
 GOT:3024 bytes wasted
 GOT:   15192 bytes unusable
 GOT:   32768 pages total
 GOT:6338 pages used (2568 pages used in page alloc)
 GOT:   24739 pages partially used
 GOT:1691 pages free (max contiguous = 380)
 
 
 Still doesn't look too glorious, right?
 
 
 Best,
 Jeff
 
 
 On 22.07.2015 at 01:06, Phil Rosenthal wrote:
 Can you paste the output of these commands:
 show conf | display set | match rpf-check
 show ver
 show route sum
 
 DPC should have enough memory for ~1M FIB.  This can get divided in half 
 if you are using RPF. If you have multiple routing instances, this also can 
 contribute to the problem.

Re: [j-nsp] jtree0 Memory full on MX480?

2015-07-21 Thread Phil Rosenthal
Disabling Basic-Table certainly bought you some time.

Agree that it still does not look good. I suspect that you are running into a 
software issue.  11.4 is no longer a supported version, 12.3 is the minimum 
supported today, with 13.3R6 as the recommended version.  Is it possible for 
you to upgrade?

Best Regards,
-Phil
 On Jul 21, 2015, at 7:23 PM, Jeff Meyers jeff.mey...@gmx.net wrote:
 
 Hi Phil,
 
 sure:
 
 
 {master}
 jeff@cr0 show configuration | display set | match rpf-check
 
 {master}
 nico@FRA4.cr0 show version
 Hostname: cr0
 Model: mx480
 JUNOS Base OS boot [11.4R9.4]
 JUNOS Base OS Software Suite [11.4R9.4]
 JUNOS Kernel Software Suite [11.4R9.4]
 JUNOS Crypto Software Suite [11.4R9.4]
 JUNOS Packet Forwarding Engine Support (M/T Common) [11.4R9.4]
 JUNOS Packet Forwarding Engine Support (MX Common) [11.4R9.4]
 JUNOS Online Documentation [11.4R9.4]
 JUNOS Voice Services Container package [11.4R9.4]
 JUNOS Border Gateway Function package [11.4R9.4]
 JUNOS Services AACL Container package [11.4R9.4]
 JUNOS Services LL-PDF Container package [11.4R9.4]
 JUNOS Services PTSP Container package [11.4R9.4]
 JUNOS Services Stateful Firewall [11.4R9.4]
 JUNOS Services NAT [11.4R9.4]
 JUNOS Services Application Level Gateways [11.4R9.4]
 JUNOS Services Captive Portal and Content Delivery Container package 
 [11.4R9.4]
 JUNOS Services RPM [11.4R9.4]
 JUNOS Services HTTP Content Management package [11.4R9.4]
 JUNOS AppId Services [11.4R9.4]
 JUNOS IDP Services [11.4R9.4]
 JUNOS Services Crypto [11.4R9.4]
 JUNOS Services SSL [11.4R9.4]
 JUNOS Services IPSec [11.4R9.4]
 JUNOS Runtime Software Suite [11.4R9.4]
 JUNOS Routing Software Suite [11.4R9.4]
 
 {master}
 nico@FRA4.cr0 show route summary
 Autonomous system number: X
 Router ID: A.B.C.D
 
 inet.0: 546231 destinations, 1747898 routes (545029 active, 11 holddown, 2994 
 hidden)
  Direct:   1143 routes,   1140 active
   Local:   1144 routes,   1144 active
OSPF: 81 routes, 18 active
 BGP: 1745429 routes, 542631 active
  Static:100 routes, 95 active
IGMP:  1 routes,  1 active
 
 Basic-Table.inet.0: 212783 destinations, 215070 routes (212778 active, 5 
 holddown, 0 hidden)
  Direct:   2283 routes,   1140 active
   Local:   2288 routes,   1144 active
OSPF: 17 routes, 17 active
 BGP: 210387 routes, 210382 active
  Static: 95 routes, 95 active
 
 inet6.0: 23331 destinations, 39242 routes (23330 active, 1 holddown, 113 
 hidden)
  Direct:451 routes,368 active
   Local:373 routes,373 active
   OSPF3:  9 routes,  9 active
 BGP:  38399 routes,  22571 active
  Static: 10 routes,  9 active
 
 Basic-Table.inet6.0: 12295 destinations, 12295 routes (12292 active, 3 
 holddown, 0 hidden)
  Direct:366 routes,366 active
   Local:373 routes,373 active
   OSPF3:  8 routes,  8 active
 BGP:  11539 routes,  11536 active
  Static:  9 routes,  9 active
 
 {master}
 
 
 I actually thought this Basic-Table was inactive. It is not, so I'm going to 
 deactivate it now. Since it was holding over 200k routes, this is for sure a 
 lot. Doing that made the syslog message disappear, but it didn't actually 
 free up as much as I was hoping for:
 
 GOT: Jtree memory segment 0 (Context: 0x44976cc8)
 GOT: ---
 GOT: Memory Statistics:
 GOT:16777216 bytes total
 GOT:14613176 bytes used
 GOT: 2145824 bytes available (865792 bytes from free pages)
 GOT:3024 bytes wasted
 GOT:   15192 bytes unusable
 GOT:   32768 pages total
 GOT:6338 pages used (2568 pages used in page alloc)
 GOT:   24739 pages partially used
 GOT:1691 pages free (max contiguous = 380)
 
 
 Still doesn't look too glorious, right?
 
 
 Best,
 Jeff
 
 
 On 22.07.2015 at 01:06, Phil Rosenthal wrote:
 Can you paste the output of these commands:
 show conf | display set | match rpf-check
 show ver
 show route sum
 
 DPC should have enough memory for ~1M FIB.  This can get divided in half if 
 you are using RPF. If you have multiple routing instances, this also can 
 contribute to the problem.
 
 Best Regards,
 -Phil Rosenthal
 On Jul 21, 2015, at 6:56 PM, Jeff Meyers jeff.mey...@gmx.net wrote:
 
 Hello list,
 
 we seem to be running into limits with an MX480 with RE-2000 and 2x 
 DPCE-4XGE-R since we are seeing these new messages in the syslog:
 
 
 Jul 22 00:50:36  cr0 fpc0 RSMON: Resource Category:jtree 
 Instance:jtree0-seg0 Type:free-dwords Available:83072 is less than LWM 
 limit:104857, rsmon_syslog_limit()
 Jul 22 00:50:36  cr0 fpc0 RSMON: Resource Category:jtree 
 Instance:jtree1-seg0 Type:free-pages Available:1326 is less than LWM 
 limit:1638, rsmon_syslog_limit

Re: [j-nsp] jtree0 Memory full on MX480?

2015-07-21 Thread Phil Rosenthal
Can you paste the output of these commands:
show conf | display set | match rpf-check
show ver
show route sum

DPC should have enough memory for ~1M FIB.  This can get divided in half if you 
are using RPF. If you have multiple routing instances, this also can contribute 
to the problem.

Best Regards,
-Phil Rosenthal
 On Jul 21, 2015, at 6:56 PM, Jeff Meyers jeff.mey...@gmx.net wrote:
 
 Hello list,
 
 we seem to be running into limits with an MX480 with RE-2000 and 2x 
 DPCE-4XGE-R since we are seeing these new messages in the syslog:
 
 
 Jul 22 00:50:36  cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree0-seg0 
 Type:free-dwords Available:83072 is less than LWM limit:104857, 
 rsmon_syslog_limit()
 Jul 22 00:50:36  cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree1-seg0 
 Type:free-pages Available:1326 is less than LWM limit:1638, 
 rsmon_syslog_limit()
 Jul 22 00:50:36  cr0 fpc1 RSMON: Resource Category:jtree Instance:jtree0-seg0 
 Type:free-pages Available:1316 is less than LWM limit:1638, 
 rsmon_syslog_limit()
 Jul 22 00:50:37  cr0 fpc1 RSMON: Resource Category:jtree Instance:jtree0-seg0 
 Type:free-dwords Available:84224 is less than LWM limit:104857, 
 rsmon_syslog_limit()
 Jul 22 00:50:37  cr0 fpc0 RSMON: Resource Category:jtree Instance:jtree1-seg0 
 Type:free-dwords Available:84864 is less than LWM limit:104857, 
 rsmon_syslog_limit()
 
 
 Here is some more output from the FPC:
 
 
 jeff@cr0 request pfe execute target fpc0 command show rsmon
 SENT: Ukern command: show rsmon
 GOT:
 GOT:  category  instance     type         total    lwm_limit  hwm_limit  free
 GOT:  --------  -----------  -----------  -------  ---------  ---------  -------
 GOT:  jtree     jtree0-seg0  free-pages     32768       1638       4915     1245
 GOT:  jtree     jtree0-seg0  free-dwords  2097152     104857     314572    79680
 GOT:  jtree     jtree0-seg1  free-pages     32768       1638       4915    22675
 GOT:  jtree     jtree0-seg1  free-dwords  2097152     104857     314572  1451200
 GOT:  jtree     jtree1-seg0  free-pages     32768       1638       4915     1267
 GOT:  jtree     jtree1-seg0  free-dwords  2097152     104857     314572    81088
 GOT:  jtree     jtree1-seg1  free-pages     32768       1638       4915    23743
 GOT:  jtree     jtree1-seg1  free-dwords  2097152     104857     314572  1519552
 GOT:  jtree     jtree2-seg0  free-pages     32768       1638       4915     1266
 GOT:  jtree     jtree2-seg0  free-dwords  2097152     104857     314572    81024
 GOT:  jtree     jtree2-seg1  free-pages     32768       1638       4915    23732
 GOT:  jtree     jtree2-seg1  free-dwords  2097152     104857     314572  1518848
 GOT:  jtree     jtree3-seg0  free-pages     32768       1638       4915     1232
 GOT:  jtree     jtree3-seg0  free-dwords  2097152     104857     314572    78848
 GOT:  jtree     jtree3-seg1  free-pages     32768       1638       4915    23731
 GOT:  jtree     jtree3-seg1  free-dwords  2097152     104857     314572  1518784
 LOCAL: End of file
 
 {master}
 jeff@cr0 request pfe execute target fpc0 command show jtree 0 memory 
 extensive
 SENT: Ukern command: show jtree 0 memory extensive
 GOT:
 GOT: Jtree memory segment 0 (Context: 0x44976cc8)
 GOT: ---
 GOT: Memory Statistics:
 GOT:16777216 bytes total
 GOT:15299920 bytes used
 GOT: 1459080 bytes available (660480 bytes from free pages)
 GOT:3024 bytes wasted
 GOT:   15192 bytes unusable
 GOT:   32768 pages total
 GOT:   26528 pages used (2568 pages used in page alloc)
 GOT:4950 pages partially used
 GOT:1290 pages free (max contiguous = 373)
 GOT:
 GOT:  Partially Filled Pages (In bytes):-
 GOT:   UnitAvail Overhead
 GOT:  8   6743440
 GOT: 16   1078400
 GOT: 2413296 4792
 GOT: 32  2880
 GOT: 48 283210400
 GOT:
 GOT:  Free Page Lists(Pg Size = 512 bytes):-
 GOT:Page Bucket Avail(Bytes)
 GOT:1-1   140288
 GOT:2-2   112640
 GOT:3-376800
 GOT:4-449152
 GOT:5-5 7680
 GOT:6-615360
 GOT:7-725088
 GOT:8-8 8192
 GOT:   9-11 5632
 GOT:  12-17 6656
 GOT:  18-2622016
 GOT:   27-32768   190976
 GOT:
 GOT:  Fragmentation Index = 0.869, (largest free = 190976)
 GOT:  Counters:
 GOT:   465261655 allocs (0 failed)
 GOT:   0 releases(partial 0)
 GOT:   463785484 frees
 GOT:   0 holds
 GOT:   9 pending frees(pending bytes 88)
 GOT:   0 pending forced
 GOT:   0 times free blocked
 GOT:   0 sync writes
 GOT:  Error Counters:-
 GOT:   0 bad params
 GOT:   0 failed frees
 GOT:   0 bad cookie
 GOT:
 GOT: Jtree memory segment 1 (Context: 0x449f87e8)
 GOT: ---
 GOT: Memory

Re: [j-nsp] MX104 Limitations

2015-06-24 Thread Phil Rosenthal
Comments inline below.

 On Jun 24, 2015, at 9:08 AM, Colton Conor colton.co...@gmail.com wrote:
 
 We are considering upgrading to a Juniper MX104, but another vendor (not
 Juniper) pointed out the following limitations about the MX104 in their
 comparison. I am wondering how much of it is actually true about the MX104?
 And if true, is it really that big of a deal?:
 
None of these are showstoppers for everyone, but depending on your 
requirements, some of them might or might not be a problem.

In essentially all of these, there is the question of "Well, what are you 
comparing it against?", as most things in that size/price range will have 
compromises as well.

Obviously this list came from someone with a biased viewpoint of nothing but 
problems with Juniper -- a competitor.  Consider that there are also positives.
For example, in software, most people here would rank JunOS > Cisco IOS > 
Brocade > Arista > Force10.

From question 12, it seems that you are considering the Alcatel-Lucent 7750 as 
your alternative -- unfortunately, you won't find nearly as many people with 
ALU experience, so it will be a bit harder to get fair commentary comparing 
the two.  It might also be harder to find engineers to manage them.

 1.   No fabric redundancy due to fabric-less design. There is no switch
 fabric on the MX104, but there is on the rest of the MX series. Not sure if
 this is a bad or good thing?

The switch fabric is itself very reliable, and not the most likely point of 
failure.  In fact, in all of my years, I have not had a switch fabric fail on 
any switch/router from any vendor.
I consider a redundant switch fabric nice to have.
For us, the MX480 makes much more sense than the MX104, and the MX480 has a 
redundant SF.

 
 2.   The Chassis fixed ports are not on an FRU.  If a fixed port fails,
 or if data path fails, entire chassis requires replacement.
 
True.  That said, I have not had a failure on any Juniper MX 10G ports in 
production.
The only failures we have had are a few RE SSD failures, and an undetermined 
MPC failure that was causing occasional resets.

Our past experience with Cisco and Brocade saw much higher failure rates on 
fixed Ethernet ports.

 3.   There is no mention of software support for MACSec on the MX104,
 it appears to be a hardware capability only at this point in time with
 software support potentially coming at a later time.
 

We do not use this.

 4.   No IX chipsets for the 10G uplinks (i.e. no packet
 pre-classification, the IX chip is responsible for this function as well as
 GE to 10GE i/f adaptation)
 

The pre-classification may or may not be an issue for you.

As for GE-to-10GE adaptation: I think you would be doing something very wrong 
if your goal were to connect gig-Es to these ports.

 5.   QX Complex supports HQoS on MICs only, not on the integrated 4
 10GE ports on the PMC. I.e. no HQoS support on the 10GE uplinks
 

True. It may or may not be an issue for you. There is some QoS capability on 
the built-in ports, but it is very limited. The 16x10G and 32x10G MPC cards on 
the MX240/480/960 have somewhat more QoS capability than these. HQoS is only 
on the -Q cards, which are much more expensive on either the MX104 or the 
bigger MX chassis.


 6.   Total amount of traffic that can be handled via HQoS is restricted
 to 24Gbps. Not all traffic flows can be shaped/policed via HQoS due to a
 throughput restriction between the MQ and the QX. Note that the MQ can
 still however perform basic port based policing/shaping on any flows. HQoS
 support on the 4 installed MICs can only be enabled via a separate license.
 Total of 128k queues on the chassis
 

In most environments, there are a limited number of ports where HQoS is needed, 
so this may or may not be an issue.

 7.   1588 TC is not supported across the chassis as the current set of
 MICs do not support edge time stamping.  Edge timestamping is only
 supported on the integrated 10G ports.  MX104 does not presently list 1588
 TC as being supported.
 
We do not use TC, but more comments on 12 at the bottom.

 8.   BFD can be supported natively in the TRIO chipset.  On the MX104,
 it is not supported in hardware today.  BFD is run from the single core
 P2020 MPC.
 
 9.   TRIO based cards do not presently support PBB; thus it is
 presently not supported on the MX104. PBB is only supported on older EZChip
 based MX hardware.  Juniper still needs a business case to push this forward
 
No comments on these 2.

 10.   MX104 operating temperature: -40 to 65C, but MX5, MX10, MX40, MX80
 and MX80-48T are all 0-40C all are TRIO based. Seems odd that the MX104
 would support a different temperature range. There are only 3 temperature
 hardened MICs for this chassis on the datasheet: (1) 16 x T1/E1 with CE,
 (2) 4 x chOC3/STM1  1 x chOC12/STM4 with CE, (3) 20 x 10/100/1000 Base-T.
 
The MX104 is essentially a next-generation MX80. One of the major design goals 
was temperature hardening, enabling it for use in places like 

Re: [j-nsp] JTAC Recommended Junos Software Versions Old?

2015-05-01 Thread Phil Rosenthal
We were hit by this.

13.3R4 is safe from this issue, and 13.3R6 apparently fixes it, but we
have not yet upgraded.

I believe the issue is related to minor differences in hardware
because we do not have problems with 13.3R5 on any routers except for
one, which has essentially identical hardware and software to the
others.

The issue took approximately 48 hours to present itself, and was very
obvious when it was happening.

Power off / power on of the affected MPC will recover it temporarily.

We ran into this with MX960 and the 32x10 MPC4.

Regards,
-Phil

On May 1, 2015, at 3:08 PM, Gavin Henry ghe...@suretec.co.uk wrote:

 About the memory leak on the PFE with inline jflow: this is PR1071289; 
 affected releases are 13.3R5, 14.1R4, 14.2R1.

 How long does it take to show?

 --
 Kind Regards,
 Gavin Henry.


Re: [j-nsp] inline jflow

2013-12-11 Thread Phil Rosenthal
On Dec 8, 2013, at 1:09 PM, moki vom...@gmail.com wrote:


 When I execute the command
 show services accounting flow inline-jflow fpc-slot 0
 the counters don't grow:
  Flow information
FPC Slot: 0
Flow Packets: 9811498, Flow Bytes: 7364152991
Active Flows: 4294967295, Total Flows: 4134755
Flows Exported: 3838520, Flow Packets Exported: 388170
Flows Inactive Timed Out: 3620229, Flows Active Timed Out: 514527


It sounds like you have exceeded the limits of the flow table.

Try raising "forwarding-options sampling instance sample-ins1 input rate" to 
500 to see if you reliably get flows again.  If so, you can gradually step the 
sampling rate back down to a level that reliably handles your traffic, or add 
an MS-MPC to increase inline jflow capacity if you really do "need" 1:1 
sampling. Depending on your needs, sampled flows may be totally adequate.
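
In set form, using the instance name from your output (assumed to match your 
config):

  set forwarding-options sampling instance sample-ins1 input rate 500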

-Phil


