[j-nsp] rib-sharding and NSR update

2024-05-10 Thread Andrey Kostin via juniper-nsp

Hi juniper-nsp,

Just hit exactly the same issue as described in the message found in the 
list archives:


Gustavo Santos
Mon Jan 4 15:13:18 EST 2021

Hi,

We got another MX10003 and we are updating it before getting it into
production. Reading the 19.4R3 release notes, we noticed two new
features, update-threading and rib-sharding, and I really liked what
they "promise": faster BGP updates.

But there is a catch: we can't use these new features with nonstop
routing enabled.

The question is, are these features worth the loss of nonstop routing?

Regards
"
bgp {
    ##
    ## Warning: Can't be configured together with routing-options nonstop-routing
    ##
    rib-sharding;
    ##
    ## Warning: Update threading can't be configured together with routing-options nonstop-routing
    ##
    update-threading;
}
"

That message never seems to have received a response.
However, I found an explanation at the bottom of the page:
https://www.juniper.net/documentation/us/en/software/junos/cli-reference/topics/ref/statement/rib-sharding-edit-protocols-bgp.html

Support for NSR with sharding was introduced in Junos OS Release 22.2.
BGP sharding has supported IPv4, IPv6, L3VPN and BGP-LU since Junos OS
Release 20.4R1.


I still need to test and confirm this on this platform, but on another
router it already works.
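
For reference, on a release that supports the combination (22.2 or
later), a minimal configuration might look like the sketch below;
statement names follow the CLI reference linked above, shard/thread
counts are left at their defaults, and note that NSR also requires GRES:

set chassis redundancy graceful-switchover
set routing-options nonstop-routing
set protocols bgp rib-sharding
set protocols bgp update-threading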


--
Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper publishes Release notes as pdf

2024-03-18 Thread Andrey Kostin via juniper-nsp

Thanks, Joe.

Right, PDF-only for SR releases has been the case for a while, but not 
very long; the change happened just a few months ago. My personal 
preference would be to read HTML that can adapt to screen size, etc. 
IMO the value of PDF is being able to print a paper copy, but it's hard 
to imagine anybody printing release notes these days.


Kind regards,
Andrey

Joe Horton via juniper-nsp wrote on 2024-03-15 21:36:

Correct.

SR releases – PDF only, and I think it has been that way for a while.
R releases – HTML/web based + PDF.

And understand, I’ll pass along the feedback to the docs team.

Joe



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Juniper publishes Release notes as pdf

2024-03-15 Thread Andrey Kostin via juniper-nsp



Hi Juniper-NSP readers,

Did anybody mention that recent Junos Release Notes are now published as 
PDF instead of the usual web page?
Here is the example: 
https://supportportal.juniper.net/s/article/22-2R3-S3-SRN?language=en_US

What do you think about it?
For me, it's very inconvenient. To click PR links or copy a single 
paragraph, I now have to download the PDF and open it in Acrobat. Please 
chime in, and maybe our voices will be heard.


Kind regards,
Andrey Kostin
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] mx304 alarm seen after junos upgrade

2024-03-04 Thread Andrey Kostin via juniper-nsp

Hi Aaron,

Maybe this can be helpful:
https://supportportal.juniper.net/s/article/MX304-MinorFPC-0-firmware-outdated

Kind regards,
Andrey Kostin

Aaron1 via juniper-nsp wrote on 2024-02-29 21:55:

Resolved… with the following…

FPC showed a difference between what was running and what is
available… reminiscent of IOS-XR upgrades and subsequent fpd/fpga
upgrades.


show system firmware

FPC 0    ZL30634 DPLL    9    6022.0.0    7006.0.0    OK

request system firmware upgrade fpc slot 0

request chassis fpc restart slot 0

Aaron


On Feb 29, 2024, at 8:14 PM, Aaron Gould  wrote:

Anyone ever seen this alarm on an MX304 following a Junos upgrade?

I went from ...

22.2R3-S1.9 - initially had this
22.4R2-S2.6 - upgrade
23.2R1-S2.5 - final

Now with 23.2R1-S2.5, I have an issue with more than one 100G interface 
being able to operate.  I have a 100G on et-0/0/4 and another one on 
et-0/0/12... BUT they won't both function at the same time.  4 works, 12 
doesn't... reboot the MX304, 4 doesn't work, but 12 does. Very weird.


root@304-1> show system alarms
6 alarms currently active
Alarm time   Class  Description
2024-02-29 06:00:25 CST  Minor  200 ADVANCE Bandwidth (in gbps)s(315) require a license
2024-02-29 06:00:25 CST  Minor  OSPF protocol(282) usage requires a license
2024-02-29 06:00:25 CST  Minor  LDP Protocol(257) usage requires a license

2024-02-28 09:35:10 CST  Minor *FPC 0 firmware outdated*
2024-02-28 09:29:45 CST  Major  Host 0 fxp0 : Ethernet Link Down
2024-02-28 09:28:15 CST  Major  Management Ethernet Links Down


root@304-1> show chassis alarms
3 alarms currently active
Alarm time   Class  Description
2024-02-28 09:35:10 CST  Minor *FPC 0 firmware outdated*
2024-02-28 09:29:45 CST  Major  Host 0 fxp0 : Ethernet Link Down
2024-02-28 09:28:15 CST  Major  Management Ethernet Links Down


--
-Aaron
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



Re: [j-nsp] igmp snooping layer 2 querier breaks ospf in other devices

2024-02-01 Thread Andrey Kostin via juniper-nsp

Hi Aaron,

It's not clear from your explanation where the l2circuits with OSPF are 
connected and how they are related to this irb/vlan.
Do you really need a querier in this case? IIRC, a querier is needed 
when only hosts are present on the LAN and a switch has to send the IGMP 
queries. In your case, you have a router with an irb interface that 
should act as the IGMP querier by default. Not sure if it helps though.
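One way to test that (untested on my side) would be to drop the static
layer-2 querier from the config you posted and let the irb handle the
queries:

delete protocols igmp-snooping vlan vlan100 l2-querier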


Kind regards,
Andrey

Aaron Gould via juniper-nsp wrote on 2024-01-31 14:54:


I'm having an issue where an igmp-snooping layer 2 querier breaks OSPF
in other devices which are in l2circuits.

Has anyone ever come across this issue, and have a work-around for it?

I have the following configured, and devices in vlan 100 can join
multicast just fine.  But there are other, unrelated l2circuits that
carry traffic for devices in other vlans, and inside these l2circuits
are OSPF hellos that seem to be getting broken by this configuration:

set interfaces irb unit 100 family inet address 10.100.4.1/27
set protocols ospf area 0.0.0.1 interface irb.100 passive
set protocols igmp interface irb.100 version 3
set protocols pim interface irb.100
set protocols igmp-snooping vlan vlan100 l2-querier source-address 
10.100.4.1


Model: acx5048
Junos: 17.4R2-S11


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Junos 21+ Killing Finger Muscle Memory...

2023-07-12 Thread Andrey Kostin via juniper-nsp

Hi Mark,
100% agree, if it could help.
Very annoying. If a UX designer touched it, he or she probably never 
actually worked with Junos.


Kind regards,
Andrey

Mark Tinka via juniper-nsp wrote on 2023-07-12 04:49:

So, this is going to be a very privileged post, and I have been
spending the last several months mulling over even complaining about
it either on here, or with my SE.

But a community friend sent me the exact same annoyance he is having
with Junos 21 or later, which has given me a final reason to just go
ahead and moan about it:

tinka@router> show rout
 ^
'rout' is ambiguous.
Possible completions:
  route    Show routing table information
  routing  Show routing information
{master}
tinka@router>

I'm going to send this to my Juniper SE and AM. Not sure what they'll
make of it, as it is a rather privileged complaint - but in truth, it
does make working with Junos on a fairly historically commonly used
command rather cumbersome, and annoying.

The smile that comes to my face when I touch a box running Junos 20 or
earlier and run this specific command, is unconscionably satisfying
:-).

Mark.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] ACX7100-48L

2023-06-13 Thread Andrey Kostin via juniper-nsp

Aaron Gould via juniper-nsp wrote on 2023-06-12 11:22:


Interestingly, the PR is said to be fixed in 22.2R2-EVO; wouldn't it
follow that it should be fixed in my version, 22.2R3.13-EVO?

me@lab-7100-2> show version
...
Junos: 22.2R3.13-EVO



The fix should already be implemented in the version you use.

Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Hi Saku,

Saku Ytti wrote on 2023-06-09 12:09:

On Fri, 9 Jun 2023 at 18:46, Andrey Kostin  wrote:


I'm not in this market, have no qualification and resources for
development. The demand in such devices should be really massive to
justify a process like this.


Are you not? You use a lot of open source software, because someone
else did the hard work, and you have something practical.

The same would be the thesis here,  You order the PCI NPU from newegg,
and you have an ecosystem of practical software to pull from various
sources. Maybe you'll contribute something back, maybe not.


Well, technically maybe I could do it. But putting it in production is 
another story. I have to not only make it run but also make sure that 
there are people who can support it 24x7. I think you said it before, 
and I agree, that the cost of capital investment in routers is just a 
small fraction of a service provider's expenses. Cable infrastructure, 
facilities, payroll, etc. make up a bigger part, but the risk of a 
router failure extends to business risks like reputation and financial 
loss and may have a catastrophic impact. We all know how long and 
difficult troubleshooting and fixing a complex issue with a vendor's 
TAC can be, but I consider the price we pay hardware vendors for their 
TAC support partially as liability insurance.



Very typical network is a border router or two, which needs features
and performance, then switches to connect to compute. People who have
no resources or competence to write software could still be users in
this market.


Sounds more like a datacenter setup, and for a DC operator it could be 
attractive to do at scale. For a traditional ISP with relatively small 
PoPs spread across the country, it may not be the case.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp
Thank you very much, Jeff, for sharing your experience. I will watch 
the Release Notes for upcoming Junos releases closely. And kudos to 
Juniper for finding and fixing it; 1.5 weeks is a very fast reaction!


Kind regards,
Andrey

Litterick, Jeff (BIT) wrote on 2023-06-09 12:41:

This is why we got the MX304.  It was a test to replace our MX10008
chassis, which we bought a few of because we had to get into 100G at
high density at multiple sites at a reasonable price a few years back
now.  Though we really only need 4 line cards, with 2 being for
redundancy.   The MX10004 was not available at the time back then
(wish it had been; the MX10008 is a heavy beast indeed and we had to
use fork lifts to move them around the data centers).    But after
handling the MX304 we will most likely go to the MX10004 line for 400G
in the future and just use the MX304 at very small edge sites if
needed.   Mainly due to full FPC redundancy requirements at many of
our locations.   And yes, we had multiple full FPC failures in the
past on the MX10008 line.  We first went through an RMA cycle with
multiple line cards, which in the end was due to just 1 line card
causing full FPC failure on a different line card in the chassis
around every 3 months or so.   Only having everything redundant across
both FPCs allowed us to avoid serious downtime.



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Saku Ytti wrote on 2023-06-09 10:35:


LGA8371 socketed BRCM TH4. Ostensibly this allows a lot more switches
to appear in the market, as the switch maker doesn't need to be
friendly with BRCM. They make the switch, the customer buys the chip
and sockets it. Wouldn't surprise me if FB, AMZN and the likes would
have pressed for something like this, so they could use cheaper
sources to make the rest of the switch, sources which BRCM didn't want
to play ball with.


Can anything else be inserted in this socket? If not, then what's the 
point? For server CPUs there are many models with different clock 
speeds and core counts, so a socket provides flexibility. If only one 
chip fits the socket, then the socket is a redundant part.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Mark Tinka wrote on 2023-06-09 10:26:

On 6/9/23 16:12, Saku Ytti wrote:


I expect many people in this list have no need for more performance
than single Trio YT in any pop at all, yet they need ports. And they
are not adequately addressed by vendors. But they do need the deep
features of NPU.


This.

There is sufficient performance in Trio today (even a single Trio chip
on the board) that people are willing to take an oversubscribed box or
line card because in real life, they will run out of ports long before
they run out of aggregate forwarding capacity.

The MX204, even though it's a pizza box, is a good example of how it
could do with 8x 100Gbps ports, even though Trio on it will only
forward 400Gbps. Most use-cases will require another MX204 chassis,
just for ports, before the existing one has hit anywhere close to
capacity.


Agree, there is a gap between the 204 and the 304, but don't forget 
that they belong to different generations. The 304 is shiny new, with 
next-level performance, and is replacing the MX10003. The previous 
generation was announced for retirement, but the life of the MX204 was 
extended because Juniper realized they don't have anything at the 
moment to replace it and would probably lose revenue. Maybe this gap 
was caused by covid slowing down the new platform. And possibly we may 
see a single-NPU model based on the new-gen chip, because chips for the 
204 are finite. At least it would be logical to make one, considering 
the success of the MX204.


Really, folk are just chasing the Trio capability, otherwise they'd
have long solved their port-count problems by choosing any
Broadcom-based box on the market. Juniper know this, and they are
using it against their customers, knowingly or otherwise. Cisco was
good at this back in the day, over-subscribing line cards on their
switches and routers. Juniper have always been a little more purist,
but the market can't handle it because the rate of traffic growth is
being out-paced by what a single Trio chip can do for a couple of
ports, in the edge.


I think it's not rational to make another chipset with lower bandwidth; 
it's easier to limit an existing, more powerful chip. But then it leads 
to the MX5/MX10/MX40/MX80 hardware and licensing model. It could be a 
single Trio 6 with up to 1.6T in access ports and 1.6T in uplink ports 
with a reduced feature set. Maybe it will come, who knows, let's 
watch ;)


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Saku Ytti wrote on 2023-06-09 10:12:

On Fri, 9 Jun 2023 at 16:58, Andrey Kostin via juniper-nsp
 wrote:


Not sure why it's eye-watering. The price of fully populated MX304 is
basically the same as it's predecessor MX10003 but it provides 3.2T BW
capacity vs 2.4T. If you compare with MX204, then MX304 is about 20%
expensive for the same total BW, but MX204 doesn't have redundant RE 
and
if you use it in redundant chassis configuration you will have to 
spend
some BW on "fabric" links, effectively leveling the price if 
calculated

for the same BW. I'm just comparing numbers, not considering any real


That's not it, RE doesn't attach to fabric serdes.


Sorry, I mixed two different points. I wanted to say that the redundant 
RE adds more cost to the MX304, unrelated to forwarding BW. But if you 
want to run MX204s in a redundant configuration, some ports have to be 
sacrificed for connectivity between them. We have two MX204s running as 
a pair with 2x100G taken for links between them, and the remaining BW 
is 6x100G for actual forwarding in/out. In this case it ends up at 
about the same price/100G value.




I expect many people in this list have no need for more performance
than single Trio YT in any pop at all, yet they need ports. And they
are not adequately addressed by vendors. But they do need the deep
features of NPU.


I agree, and that's why I asked about HQoS experience: to add more 
inexpensive low-speed switch ports via a trunk but still be able to 
treat them more like separate ports from the router's perspective.



I keep hoping that someone is so disruptive that they take the
nvidia/gpu approach to npu. That is, you can buy Trio PCI from newegg
for 2 grand, and can program it as you wish. I think this market
remains unidentified and even adjusting to cannibalization would
increase market size.
I can't understand why JNPR is not trying this, they've lost for 20
years to inflation in valuation, what do they have to lose?


I'm not in this market, and have no qualifications or resources for 
development. The demand for such devices would have to be really 
massive to justify a process like this.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Hi Mark,

Not sure why it's eye-watering. The price of a fully populated MX304 is 
basically the same as that of its predecessor, the MX10003, but it 
provides 3.2T of BW capacity vs 2.4T. If you compare with the MX204, 
then the MX304 is about 20% more expensive for the same total BW, but 
the MX204 doesn't have a redundant RE, and if you use it in a 
redundant-chassis configuration you will have to spend some BW on 
"fabric" links, effectively leveling the price calculated for the same 
BW. I'm just comparing numbers, not considering any real topology, 
which is another can of worms. Most probably it's not worth trying to 
scale MX204s to more than a pair of devices; at least I wouldn't do or 
even consider it ;)
I'd rather call the prices for the MPC7 and MPC10 eye-watering, if you 
want to upgrade existing MX480 routers and still use their low-speed 
ports. Two MPC10s with SCB3s cost more than an MX304 but give 30% less 
BW capacity. For the MPC7 this ratio is even worse.
This brings up a question: does anybody have experience with HQoS on 
the MX304? I mean just per-subinterface queueing on an interface 
towards a switch, not BNG subscriber CoS, which is probably another big 
topic. At least I don't dare yet to try the MX304 in a BNG role, maybe 
later ;)
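To be concrete, what I have in mind is per-unit shaping along these
lines (just a sketch, assuming the MX304 does hierarchical scheduling
the same way as MPC-based boxes; the interface, profile name and rate
are illustrative):

set interfaces et-0/0/1 hierarchical-scheduler
set class-of-service traffic-control-profiles TCP-SW-PORT shaping-rate 1g
set class-of-service interfaces et-0/0/1 unit 100 output-traffic-control-profile TCP-SW-PORT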


Kind regards,
Andrey

Mark Tinka via juniper-nsp wrote on 2023-06-08 12:04:


Trio capacity aside, based on our experience with the MPC7E, MX204 and
MX10003, we expect it to be fairly straight forward.

What is holding us back is the cost. The license for each 16-port line
card is eye-watering. While I don't see anything comparable in ASR99xx
Cisco-land (in terms of form factor and 100Gbps port density), those
prices are certainly going to force Juniper customers to look at other
options. They would do well to get that under control.



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX304 Port Layout

2023-06-09 Thread Andrey Kostin via juniper-nsp

Hi Jeff,

Thank you very much for sharing this information. Do you know in which 
publicly available release it's going to be fixed? Knowing the PR 
number would be best, but I guess it may be internal-only.


Kind regards,
Andrey

Litterick, Jeff (BIT) via juniper-nsp wrote on 2023-06-08 18:03:

No, that is not quite right.  We have 2 chassis of MX304 in production
today and 1 spare, all with redundant REs.   You do not need all the
ports filled in a port group.   I know, since we mixed in some 40G, and
40G is ONLY supported on the bottom row of ports, so we have a mix and
had to break stuff out, leaving empty ports because of that limitation,
and it is running just fine.    But you do have to be careful which
type of optics get plugged into which ports, i.e. port 0/2 vs port 1/3
in a grouping, if you are not using 100G optics.

The big issue we ran into is that if you have redundant REs there is a
super bad bug that will lock the entire chassis up solid after 6 hours
to 8 days (1 of our 3 would lock up quickly after reboot and the other
2 would take a very long time), and we had to pull the REs out
physically to reboot them. It is fixed now, but they had to manually
poke new firmware into the ASICs on each RE while the REs were in a
half-powered state.  It was a very complex procedure with tech support
and the MX304 engineering team.  It took about 3 hours to do all 3
MX304s, one RE at a time.   We have not seen an update with this
built in yet.  (We just did this back at the end of April.)



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX BNG with both local server and dhcp relay

2023-01-23 Thread Andrey Kostin via juniper-nsp
I didn't have any v6-specific issues with DHCP relay in Junos 21.4. If 
you're going to rely on option 82, consider turning on proxy mode. 
Without it, Junos didn't update the Circuit-ID in RENEW packets sent 
unicast from clients to the DHCP server. Although this may have been 
fixed in recent releases, it's worth checking.
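If I remember the knob correctly, it's the override below; please
double-check it on your release:

set forwarding-options dhcp-relay overrides proxy-mode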


Kind regards,
Andrey

Dave Bell wrote on 2023-01-13 04:10:

Thanks Andrey,

Yes, I believe you are correct. You can't switch from using local DHCP
server in the global routing table to DHCP relay once authenticated in
a different VRF.

I can split my services onto different interfaces coming into the BNG,
though since you need to decapsulate them first, they end up on the
same demux interface anyway.

I analysed a lot of traceoptions and packet captures. My relay didn't
receive a single packet, and the logs indicated that it was not
looking for DHCP configuration in my VRF that has forwarding
configured.

I think my only option is to move everything over to DHCP forwarding
in all cases, though this seems quite flaky for v6...

Regards,
Dave

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX BNG with both local server and dhcp relay

2023-01-10 Thread Andrey Kostin via juniper-nsp

Hi Dave,

Don't have experience with your specific case, just a common sense 
speculation. When you configure local dhcp server it usually specifies a 
template interface, like demux0.0, pp0.0, psX.0. Probably in your case a 
conflict happens when junos tries to enable both server and relay on the 
same subscriber interface. Maybe if you could dynamically enable dhcp 
server or relay for a particular subscriber interface it could solve the 
issue. Regarding interface separation, I'm not sure if it's possible to 
have more than one demux or pp interface, I believe only demux0 is 
supported. With ps interfaces you however can have many of them and if 
you can aggregate subscribers to pseudowires by service, you could 
enable dhcp server or relay depending on psX interface. However, 
pseudowires might be not needed and excessive for your design.
Did you try to analyze DHCP and AAA traceoptions and capture DHCP 
packets, BTW?


Kind regards,
Andrey

Dave Bell via juniper-nsp wrote on 2023-01-05 08:50:

Hi,

I'm having issues with DHCP relay on a Juniper MX BNG, and was
wondering if anyone had an insight on what may be the cause of my
issue.

I've got subscribers terminating on the MX, authenticated by RADIUS,
and then placed into a VRF to get services. In the vast majority of
cases the IP addressing information is passed back by RADIUS, and so
I'm using the local DHCP server on the MX to deal with that side of
things.

In one instance I require the use of an external DHCP server. I've got
the RADIUS server providing an Access-Accept for this subscriber, and
also returning the correct VRF in which to terminate the subscriber.
I've also tried passing back the external DHCP server via RADIUS.

In the VRF, I've got the DHCP relay configured, and there is
reachability to the appropriate server.

The MX however seems reluctant to actually forward DHCP requests to
this server. From the logging, I can see that the appropriate
attributes are received and correctly decoded. The session gets
relocated into the correct routing instance, but then it tries to look
for a local DHCP server.

I have the feeling that my issues are due to trying to use both the
local DHCP server and DHCP relay depending on the subscriber scenario.
If I change the global configuration of DHCP from local server to DHCP
relay, my configuration works as expected, though with the detriment
of the scenario where the attributes returned via RADIUS no longer
work due to it not being able to find a DHCP relay.

Since the MX decides how to authenticate the subscriber based on where
the demux interface is configured, I think ideally I would need to
create a different demux interface for these types of subscribers that
I can then set to be DHCP forwarded, though I don't seem to be able to
convince the router to do that yet.

Has anyone come across this, and found a workable solution?
Regards,
Dave
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] ACX7100 route scale

2023-01-03 Thread Andrey Kostin via juniper-nsp

Thanks, Mihai, for sharing this very useful info!

Kind regards,
Andrey

Mihai via juniper-nsp wrote on 2022-12-31 07:20:

I found the info here:

https://www.juniper.net/documentation/us/en/software/junos/routing-policy/topics/ref/statement/system-packet-forwarding-options-hw-db-profile.html


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX VRRP on VXLAN enviroment

2022-12-15 Thread Andrey Kostin via juniper-nsp

Hi Cristian,

I tried to reproduce the issue by reverting the configuration, but it 
didn't occur. It's still unclear to me why only v4 was affected and v6 
was not. Furthermore, in a stable state (which it was in my case) the 
VRRP backup is silent, so no MAC-move events can happen, and I 
confirmed with the VRRP statistics I collected before disabling VRRP 
that the backup router wasn't sending anything.
After re-activating the interface on the backup router I also didn't 
see any VRRP packets sent from it. Eventually I ended up configuring an 
exception for the VRRP MACs and will watch how it goes:


https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/multicast-l2/topics/ref/statement/exclusive-mac-edit-protocols-l2-learning-global-mac-move.html
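For the archives, the statement from that page looks like this; 
00:00:5e:00:01:0a is the IPv4 virtual MAC for VRRP group 10, so adjust 
the last byte to your VRID:

set protocols l2-learning global-mac-move exclusive-mac 00:00:5e:00:01:0a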

Kind regards,
Andrey

Cristian Cardoso wrote on 2022-12-14 11:20:

Hi Andrey

In my case, what you said happened: I modified the ARP suppression
configuration of EVPN-VXLAN, since it was silently dropping MACs and
breaking VRRP for v4 only; with IPv6 this did not happen.

set protocols evpn duplicate-mac-detection detection-threshold 20
set protocols evpn duplicate-mac-detection detection-window 5
set protocols evpn duplicate-mac-detection auto-recovery-time 5

With the above configuration, I never had a problem with VRRPv4
breaking in my environment again.

The environment with VRRP has been working since the email exchange
back in 2021, without any drops or problems.

kind regards,

Cristian Cardoso


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Updating SRX300, now slower than before

2022-07-13 Thread Andrey Kostin via juniper-nsp
Nothing directly related to these releases, but we have a few SRX320s, 
and they have always felt slow in comparison to the SRX345s that we 
also use, mainly for power supply redundancy. Now they are all on 
19.4R3-Sx. The 320s log LACP timeouts and ae interface flaps from time 
to time; the 345s have the same configuration and have never 
experienced anything like this.


Kind regards,
Andrey

Markus via juniper-nsp wrote on 2022-07-12 17:16:

Hi list,

I'm moving a couple of SRX300s that were running 15.1X49-D90.7 to a new
purpose and just updated them to 21.3R1.9 to be a bit more up-to-date,
and now booting takes twice the time (5 mins?) and CLI input also
seems "lag-ish" sometimes. Did I just make a big mistake? If
routing/IPsec is unimpacted then it's OK, or could it be that the
SRX300 will now perform slower than before in terms of routing
packets/IPsec performance/throughput? Or is that stupid of me to
think? :)

Thank you!

Regards
Markus
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] BGP export policy, group vs neighbor level

2022-02-07 Thread Andrey Kostin via juniper-nsp


I agree, there is no clarity for all possible situations; from my 
experience a) and c) are correct and deserve special care. Changing an 
existing policy doesn't drop a session (usually ;), and I have seen 
cases where adding a new policy to an existing policy chain didn't drop 
BGP, but that might not always be the case. Whether the router is an RR 
may also affect its behavior. I think almost every network engineer has 
been hit by this Juniper "feature".


Kind regards,
Andrey

Raph Tello wrote on 2022-02-05 03:55:

Hey,

it's not really clear to me what that KB is exactly saying.

Does it say:

a) the peer will be reset when it previously didn't have an individual
import/export policy statement, only the group one, and then an
individual one is configured

b) the peer will be reset each time its individual policy is touched
while there is another policy in the group

or

c) the peer is reset the first time it receives its own policy under
the group

Unfortunate that this seems to be not really well documented.





- Tello


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] BGP export policy, group vs neighbor level

2022-02-04 Thread Andrey Kostin via juniper-nsp

Hi,
this KB article just came in:
https://kb.juniper.net/InfoCenter/index?page=content=KB12008=SUBSCRIPTION
Symptoms:
Why does modifying a policy on a BGP neighbor in a group cause that 
particular peer to be reset, when another policy is applied for the 
whole peer group?

Solution:
Changing the export policy on a member (peer) in a group will cause that 
member to be reset, as there is no graceful way to modify a group 
parameter for a particular peer. Junos can gracefully change the export 
policy, only when it is applied to the complete group.


It's not much help, but it provides confirmation.
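
To illustrate the two levels (the group and policy names are made up; 
per the KB, adding the neighbor-level export moves that one peer out of 
the update group and resets only its session):

set protocols bgp group IX-peers export STANDARD-OUT
set protocols bgp group IX-peers neighbor 192.0.2.1 export SPECIAL-OUT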

Kind regards,
Andrey

Raph Tello via juniper-nsp wrote on 2022-02-04 09:33:

I would also like to hear opinions about having ipv4 and ipv6 ebgp peer
sessions in the same group and using the same policy, instead of having
two separate groups and two policies (I saw this kind of policy at
https://bgpfilterguide.nlnog.net/guides/small_prefixes/#junos).

It would nicely pack things together. Could that be considered a kind
of new best practice?

On Thu 3. Feb 2022 at 16:12, Raph Tello  wrote:


Hi list,

I wonder what kind of bgp group configuration would allow me to change 
the
import/export policy of a single neighbor without resetting the 
session of

this neighbor nor any other session of other neighbors. Similar to
enabling/disabling features on a single session without resetting the
sessions of others.

Let‘s say I have a bgp group IX-peers and each peer in that group has 
its
own import/export policy statement but all reference the same 
policies. Now

a single IX-peer needs a different policy which is going to change
local-pref, so I would replace the policy chain of that peer with a
different one.

Would this cause a session reset because the peer would be moved out 
of

the update group?

(I wonder mainly about group>peer>policy vs. group>policy vs. each 
peer

it‘s own group)

- Tello


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] labeled-unicast to ldp redistribution ?

2021-12-20 Thread Andrey Kostin via juniper-nsp

Thanks for the details; it looks a little illogical though.
I noticed that the best BGP route is received from the same OSPF/LDP 
neighbor in the same island. And it looks like P2P IPs are used for the 
BGP session. Is there some reason why it's not run between loopback 
IPs? Just a shot in the dark: maybe with sessions between loopbacks BGP 
would rely on OSPF for next-hop resolution, and that could change the 
behavior?
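In set form, what I mean is roughly this (the group name and 192.0.2.x 
addresses are placeholders):

set protocols bgp group ibgp-lu local-address 192.0.2.1
set protocols bgp group ibgp-lu family inet labeled-unicast rib inet.3
set protocols bgp group ibgp-lu neighbor 192.0.2.2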


Kind regards,
Andrey Kostin

Alexandre Snarskii wrote on 2021-12-20 12:31:

On Mon, Dec 20, 2021 at 09:08:40AM -0500, Andrey Kostin wrote:

Hi Alexandre,

Not sure that I completely understood the issue. When connectivity
between islands recovers, what is the primary route for regular BGP
routes' protocol next-hop?


It's not the connectivity between islands, it's the connectivity
within IGP island that recovers. Assume the following simple
topology:

   A == B
   |    |
   C == D

Routers A and B form one IGP island, C and D the other, and there are
two ibgp-lu links between the islands with ldp -> ibgp-lu -> ldp
redistribution.


In the normal situation, the route from A to B goes via the direct link
(igp/ldp); when link A-B breaks, A switches to the ibgp-lu route from C.
When link A-B recovers, A does not switch back to the direct link and
still uses the A->C route (in the best case it's just suboptimal, in
the worst case it results in routing loops).


Looks like it should be OSPF with route
preference lower than BGP and in this case it should be labeled by LDP
and propagated. Only if OSPF route for a protocol next-hop is not the
best, the next-hop from BGP-LU will be used.


Unfortunately, it's expected behaviour, but not what I see in lab.
Oversimplified: just two routers, one p2p link with all three ospf/ldp/
ibgp-lu enabled,

show route xx.xxx.xxx.78/32 table inet.0

inet.0:
xx.xxx.xxx.78/32   *[OSPF/10] 5d 04:58:59, metric 119
>  to xxx.xx.xxx.21 via ae0.6

(so, ospf route is the best one in inet.0)

show ldp database session xx.xxx.xxx.7 | match "database|xx.xxx.xxx.78/32"

Input label database, xxx.xx.xxx.8:0--xx.xxx.xxx.7:0
  66742  xx.xxx.xxx.78/32
Output label database, xxx.xx.xxx.8:0--xx.xxx.xxx.7:0
   5743  xx.xxx.xxx.78/32

so the label is present and not filtered (.7 is the router-id of .21),

show route xx.xxx.xxx.78/32 receive-protocol bgp xxx.xx.xxx.21

inet.3: 467 destinations, 1125 routes (467 active, 0 holddown, 0 hidden)
Restart Complete
  Prefix              Nexthop          MED  Lclpref  AS path
* xx.xxx.xxx.78/32    xxx.xx.xxx.21    19   100      I

so, it's received and is the best route in inet.3 (best, because
there is no ldp route in inet.3 at all:

show route .. table inet.3

xx.xxx.xxx.78/32   *[BGP/10] 02:10:43, MED 19, localpref 100
  AS path: I, validation-state: unverified
>  to xxx.xx.xxx.21 via ae0.6, Push 69954

), and, finally,

show ldp route extensive xx.xxx.xxx.78/32
Destination        Next-hop intf/lsp/table    Next-hop address
 xx.xxx.xxx.78/32  ae0.6                      xxx.xx.xxx.21
   Session ID xxx.xx.xxx.8:0--xx.xxx.xxx.7:0  xxx.xx.xxx.21
   Bound to outgoing label 5743, Topology entry: 0x776dd88
   Ingress route status: Inactive
   Route type: Egress route, BGP labeled route
   Route flags: Route deaggregate

suggests that the presence of the ibgp-lu route prevented the ldp route
from being installed into inet.3 and used.

PS: the idea from KB32600 (copy the ibgp-lu route from inet.3 to inet.0
and then use "from protocol bgp rib inet.0" in the ldp egress policy)
does not work either. Well, in this case the presence of an ibgp-lu
route does not prevent the ldp route from being installed into inet.3
and used as the best route (when present, of course), but when the
ldp/igp route is missing, the route received via ibgp-lu does not get
redistributed into ldp.
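
For anyone who wants to reproduce this, the KB32600 egress policy
described in the PS would look roughly like this in set form (the
policy name is made up):

set policy-options policy-statement bgp-lu-to-ldp term lu from protocol bgp
set policy-options policy-statement bgp-lu-to-ldp term lu from rib inet.0
set policy-options policy-statement bgp-lu-to-ldp term lu then accept
set protocols ldp egress-policy bgp-lu-to-ldp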




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] labeled-unicast to ldp redistribution ?

2021-12-20 Thread Andrey Kostin via juniper-nsp

Hi Alexandre,

Not sure that I completely understood the issue. When connectivity 
between islands recovers, what is the primary route for the regular BGP 
routes' protocol next-hop? It looks like it should be OSPF, with a 
route preference lower than BGP's, and in this case it should be 
labeled by LDP and propagated. Only if the OSPF route for a protocol 
next-hop is not the best will the next-hop from BGP-LU be used.


Kind regards,
Andrey Kostin

Alexandre Snarskii via juniper-nsp wrote on 2021-12-17 12:29:

Hi!

Scenario: a router is part of an ospf/ldp island and also has an ibgp
labeled-unicast rib inet.3 link to another ospf/ldp island. In normal
situations, some routes are known through ospf/ldp; however, during
failures they may appear from ibgp-lu and be redistributed to ldp just
fine. However, when the failure ends and a route is known via ospf/ldp
again, it's NOT actually in use. Instead, 'show ldp route extensive'
shows this route as:

   Ingress route status: Inactive
   Route type: Egress route, BGP labeled route
   Route flags: Route deaggregate

and there are only ibgp route[s] in inet.3 table.

Is there any way to make ldp ignore the 'BGP labeled' flag and install
the route to inet.3? (Other than making all routes known not only
via ospf/ldp but also via ibgp-lu.)

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] DHCP server recommendation for subscribers management

2021-09-09 Thread Andrey Kostin via juniper-nsp

Hi Nathan,



You want to look in the example configs. Start from an understanding
of what you want the RADIUS messages to have in them. You can do this
with just a static Users file in your test environment with just one
subscriber, and then look at moving that in to sqlippool or similar,
with whatever logic you need to get those attributes in to the right
place. Framed-IP-Address obviously, but maybe also Framed-IP-Netmask
etc. - better to experiment with the attributes and get them right
without the sqlippool complexity.

https://wiki.freeradius.org/modules/Rlm_sqlippool This is alright (it
appears outdated on the surface, but is up to date I think)
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-available/sqlippool
This is the example config and has some more detail than the above.
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-config/sql/ippool/postgresql/queries.conf
This is useful to understand some of the internals



I started to play with sqlippool and have a couple of questions.
Does sqlippool or any other module support IPv6? I haven't found 
anything about it in the documentation.
Is the ippool module used only as an example of the database schema? It 
looks like it doesn't need to be enabled for sqlippool operation.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-11 Thread Andrey Kostin via juniper-nsp

Nathan Ward wrote on 2021-08-10 20:53:


Yeah the FreeRADIUS docs are hard to navigate - but getting better.

You want to look in the example configs. Start from an understanding
of what you want the RADIUS messages to have in them. You can do this
with just a static Users file in your test environment with just one
subscriber, and then look at moving that in to sqlippool or similar,
with whatever logic you need to get those attributes in to the right
place. Framed-IP-Address obviously, but maybe also Framed-IP-Netmask
etc. - better to experiment with the attributes and get them right
without the sqlippool complexity.

https://wiki.freeradius.org/modules/Rlm_sqlippool This is alright (it
appears outdated on the surface, but is up to date I think)
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-available/sqlippool
This is the example config and has some more detail than the above.
https://github.com/FreeRADIUS/freeradius-server/blob/v3.0.x/raddb/mods-config/sql/ippool/postgresql/queries.conf
This is useful to understand some of the internals



Thanks for the links. I'm pretty well familiar with the radius users 
file syntax, but the freeradius module calls puzzle me a little.




A good setup for IPv4 DHCP relay is:

lo0 addresses on BNG-1
192.168.0.1/32 - use as giaddr
10.0.0.1/32
10.0.1.1/32
10.0.2.1/32
10.0.3.1/32

lo0 addresses on BNG-2
192.168.0.2/32 - use as giaddr
10.0.0.1/32
10.0.1.1/32
10.0.2.1/32
10.0.3.1/32

DHCP server:
Single shared network over all these subnets:
Subnet 192.168.0.0/24 - i.e. covering giaddrs
  No pool
Subnet 10.0.0.0/24
  pool 10.0.0.2-254
Subnet 10.0.1.0/24
  pool 10.0.1.2-254
Subnet 10.0.2.0/24
  pool 10.0.2.2-254
Subnet 10.0.3.0/24
  pool 10.0.3.2-254

This causes your giaddrs to be in the shared network with the subnets
you want to assign addresses from (i.e. the ones with pools), so the
DHCP server can match them up, but, with no pool in the 192.168.0.0/24
subnet you don’t assign addresses out of that network.

Otherwise you have to have a unique /32 for each BNG in each subnet
and you burn lots of addresses that way.


How is a potential IP conflict handled in this case if the BNGs are 
connected to a switched LAN segment? In my case, with a vlan per 
customer, it can happen that a client requests a lease and gets replies 
from the same IP but different MACs. The BNGs can also see each other 
and report an IP conflict.
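
BTW, if I read the KEA docs right, your layout rendered in KEA syntax
(which this thread uses elsewhere) would be roughly this sketch:

"shared-networks": [
{
    "name": "bng",
    "subnet4": [
        // giaddr subnet, no pool, so nothing is leased from it
        { "subnet": "192.168.0.0/24" },
        { "subnet": "10.0.0.0/24", "pools": [ { "pool": "10.0.0.2 - 10.0.0.254" } ] },
        { "subnet": "10.0.1.0/24", "pools": [ { "pool": "10.0.1.2 - 10.0.1.254" } ] }
    ]
}
]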


Kind regards,

Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-10 Thread Andrey Kostin via juniper-nsp

Andrey Kostin via juniper-nsp wrote on 2021-08-10 16:44:


So far, I started to play with KEA dhcp server and stumbled on "shared
subnet" with multiple pools topic. I have two clients connected. The
first pool has only one IP available to force the client who comes
last to use the second pool. The first client successfully gets .226
IP from the first pool, but the second client fails.


Found the problem with the KEA config; I didn't read the docs 
thoroughly and missed the "shared-networks" statement. It works this 
way:


"shared-networks": [
{
"name": "ftth",
"relay": {
"ip-addresses": [ "Y.Y.Y.Y" ]
},


 "subnet4": [
 {
 "subnet": "X.X.X.224/28",
 "pools": [ { "pool": "X.X.X.226 - X.X.X.226" } ],

 "option-data": [
 {
 // For each IPv4 subnet you most likely need to 
specify at

 // least one router.
 "name": "routers",
 "data": "X.X.X.225"
 }
 ]
 },
 {
 "subnet": "X.X.X.240/28",
 "pools": [ { "pool": "X.X.X.242 - X.X.X.245" } ],

 "option-data": [
 {
 // For each IPv4 subnet you most likely need to 
specify at

 // least one router.
 "name": "routers",
 "data": "X.X.X.241"
 }
 ]
}
],

However, it puzzled me why KEA didn't send anything in response to the 
BNG, but that's a different topic.
Meanwhile, I set a unique IP on lo0 as primary, and now it appears in 
giaddr. On demux interfaces Junos uses an IP that matches the subnet of 
the leased IP. So it looks like in this case the preferred IP setting 
doesn't affect the address selection process in any way.
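
For the record, marking a loopback address as primary is a one-liner
(192.0.2.1 is a placeholder):

set interfaces lo0 unit 0 family inet address 192.0.2.1/32 primary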


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-10 Thread Andrey Kostin via juniper-nsp

Nathan Ward via juniper-nsp wrote on 2021-08-10 08:00:
On 10/08/2021, at 10:40 PM, Bjørn Mork via juniper-nsp 
 wrote:


Thank you Nathan and Bjorn for your explanations, they are very 
helpful! I'll definitely look at IP pool management in RADIUS. I'm 
struggling to find a good freeradius documentation source; could you 
give some links?

So far, I started to play with the KEA dhcp server and stumbled on the 
"shared subnet" with multiple pools topic. I have two clients 
connected. The first pool has only one IP available, to force the 
client who comes last to use the second pool. The first client 
successfully gets the .226 IP from the first pool, but the second 
client fails.


My config has this:

"subnet4": [
{
    "subnet": "X.X.X.224/28",
    "pools": [ { "pool": "X.X.X.226 - X.X.X.226" } ],
    "relay": {
        "ip-addresses": [ "X.X.X.225" ]
    },
    "option-data": [
    {
        // For each IPv4 subnet you most likely need to specify at
        // least one router.
        "name": "routers",
        "data": "X.X.X.225"
    }
    ]
},
{
    "subnet": "X.X.X.240/28",
    "pools": [ { "pool": "X.X.X.242 - X.X.X.245" } ],
    "relay": {
        "ip-addresses": [ "X.X.X.225" ]
    },
    "option-data": [
    {
        // For each IPv4 subnet you most likely need to specify at
        // least one router.
        "name": "routers",
        "data": "X.X.X.241"
    }
    ],

In the log I get this:

Aug 10 15:51:17 testradius kea-dhcp4[44325]: WARN ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 d0:76:8f:a7:43:ca], cid=[no info], tid=0x485c2228: failed to allocate an IPv4 address in the subnet X.X.X.224/28, subnet-id 1, shared network
Aug 10 15:51:17 testradius kea-dhcp4[44325]: WARN ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 d0:76:8f:a7:43:ca], cid=[no info], tid=0x485c2228: failed to allocate an IPv4 address after 1 attempt(s)
Aug 10 15:51:17 testradius kea-dhcp4[44325]: WARN ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 d0:76:8f:a7:43:ca], cid=[no info], tid=0x485c2228: Failed to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_.dslforum.org, UNKNOWN


Looks like KEA doesn't consider the second subnet as belonging to the 
same shared network despite the matching giaddr. I followed the example 
in the KEA documentation and expected that a relay address matching the 
giaddr would do the trick, but I feel maybe the subnets have to be in 
the same bracket; however, I don't know how to put them there. At one 
moment I saw addresses leased from both pools, but later it returned to 
this behavior. Maybe it was a transient state when the previous lease 
hadn't expired yet; I'm not sure.




Note that you also must have a unique address as the primary address
on the interface as the giaddr - which the centralised dhcp server
talks to. If that giaddr is shared across BNGs, your replies will go
to the wrong place a large % of the time, and not get to the
subscriber.
The giaddr does not need to be an address in any of the subnets you
want to hand out addresses in - in isc dhcpd, you can configure the
giaddr in a subnet as part of the “shared network” you want to hand
out addresses from, which if you have a lot of BNGs saves you a
handful of addresses you can give to customers.


Good point, thanks. I find the Juniper documentation on primary and 
preferred IPs very confusing; for me it's always trial and error to 
find a working combination. Even more confusing: a few years ago I had 
a TAC case opened regarding the meaning of the preferred address for 
IPv6 assignment to a pppoe subscriber, and I was told by TAC that it's 
not supported for IPv6 at all. I think that has changed in recent 
releases.
For example, there is a unique IP on lo0 that is used as router-id 
etc., and there should also be one or more IPs that match the subnets 
in the address pools. In the dynamic profile the address is specified 
this way:
unnumbered-address "$junos-loopback-interface" preferred-source-address 
"$junos-preferred-source-address"
Currently I have neither primary nor preferred specified on lo0, and 
.225 is somehow selected.
In my understanding the preferred-source-address has to match a subnet 
in the address pool, otherwise address assignment will fail. And it 
will also be used as the giaddr in this case. Which address should be 
primary and which preferred in this case?


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-09 Thread Andrey Kostin via juniper-nsp

Bjørn Mork via juniper-nsp wrote on 2021-08-06 15:27:

Thanks for your reply.


Probably stupid question, but here goes... How does a central server
make the IP usage more effective?  Are you sharing pools between
routers?


Yes, we are going to have at least two routers as BNGs, and I'm trying 
to find a way to not tie up IP addresses where they aren't needed.



In any case, you can do that with a sufficiently smart RADIUS server
too.  You don't have to let JUNOS manage the address pools even if it
is providing the DHCP frontend.


I understand that it could be an option, but for the vlan-per-customer 
model RADIUS authentication isn't really needed for DHCP clients. Auth 
is done for the parent VLAN-demux interface, so for DHCP sessions the 
BNG will send only accounting. In this case it would require developing 
a "smart enough" RADIUS backend. If there were a solution already 
available I'd definitely look at it, but I'd like to avoid building a 
homebrew solution.



IMHO, having the DHCP frontend on the edge makes life so much easier.
Building a sufficiently redundant and robust centralized DHCP service
is hard.  And the edge router still has to do most of the same work
anyway, relaying broadcasts and injecting access routes.  The
centralized DHCP server just adds an unnecessary single point of
failure.


I agree that it's a complication, but IMO it's a reasonable tradeoff 
for effective IP space usage. For relatively big IP pools it would be a 
significant saving. From the KEA DHCP server documentation I see that 
different HA scenarios are supported, so some redundancy can be 
achieved.


Another question that puzzles me is how to use multiple discontiguous 
pools with a DHCP server. With the Junos internal DHCP server I can 
link DHCP pools in the same way as for PPPoE and just assign an 
additional GW IP to lo0. With that, Junos takes care of finding an 
available IP in the pools and uses the proper GW address. In the case 
of an external DHCP server, the router has to insert the relay option, 
but how can the server choose which subnet to use if there is more than 
one available? This problem should also exist for big cable segments, 
although for a cable interface the IP addresses are configured directly 
on the interface, whereas for a Junos BNG the customer-facing interface 
is unnumbered.
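
To show what I mean by linking pools on the Junos side (the pool names
and 198.51.100.x addresses are illustrative):

set access address-assignment pool pool-a family inet network 198.51.100.224/28
set access address-assignment pool pool-a family inet range r1 low 198.51.100.226 high 198.51.100.226
set access address-assignment pool pool-a link pool-b
set access address-assignment pool pool-b family inet network 198.51.100.240/28
set access address-assignment pool pool-b family inet range r1 low 198.51.100.242 high 198.51.100.245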


Kind regards,

Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-06 Thread Andrey Kostin via juniper-nsp

Bjørn Mork via juniper-nsp wrote on 2021-08-06 12:38:

Andrey Kostin via juniper-nsp  writes:


What DHCP server do you use/would recommend to deploy for subscriber
management?


The one in JUNOS. Using RADIUS as backend.



Thanks, currently using it, but looking for a central server for more 
effective IP usage.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] DHCP server recommendation for subscribers management

2021-08-06 Thread Andrey Kostin via juniper-nsp

Jerry Jones wrote on 2021-08-06 09:37:

Strongly suggest having active lease query or bulk active lease query

I believe kea has this support

Jerry Jones


Thanks for the reply, Jerry.
In my understanding, active leasequery can be run between routers, so 
it might not be needed on the DHCP server; am I correct?
An interesting question is what happens if we have two routers with 
synchronized DHCP bindings: will DHCP demux interfaces be created on 
the secondary router based on that? My guess is no, but I need to test 
it. If traffic then switches from the primary to the secondary router, 
will the secondary be able to pass IP traffic right away, or will it 
have to wait for the next DHCP packet from the client to create the 
demux interface?


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] DHCP server recommendation for subscribers management

2021-08-06 Thread Andrey Kostin via juniper-nsp

Hi Juniper-NSP community,

What DHCP server do you use or would recommend deploying for subscriber 
management? Preferably packaged for CentOS. Required features are IPv4, 
IPv6 IA_NA, and IPv6 IA_PD. Active leasequery support is desirable but 
optional.


--
Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX VRRP on VXLAN enviroment

2021-07-23 Thread Andrey Kostin via juniper-nsp

Cristian Cardoso via juniper-nsp wrote on 2021-07-19 14:15:

Hi
Thanks for the tip, I'll set it up here.



Are you trying to set up the MX80 as an end host, without including it 
in the EVPN? If so, you can extend the EVPN to the MX80 and run a 
virtual gateway from it. No need for VRRP in this case.


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX204 Maximum Packet Rates

2021-06-18 Thread Andrey Kostin
It looks like "High performance mode" means configuring port speed in 
pic mode that may not be feasible in all cases depending on port 
configuration.

No data for HP mode provided...
And finally, from the example, where did they find fpc 5 on MX10003? ;)

Kind regards,
Andrey

aar...@gvtc.com wrote on 2021-05-21 13:54:

Interesting, that KB link mentions...

"From Junos 19.1R1, we support "High-performance mode" to enable WAN
Output block resource allocation. In this mode, better throughput is
achieved at line-rate traffic for small sized packets."

Maybe this will help others and the OP achieve higher rates.

-Aaron


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] evpn irb default gateway

2021-06-18 Thread Andrey Kostin


Hi Baldur,

There is PR1551063 listed for this case in the Release Notes; please 
check.


Kind regards,

Andrey

Baldur Norddahl wrote on 2021-05-12 19:34:
When I add this to the configuration, the acx5448 irb will route 
traffic:


set routing-instances internet routing-options static route 0.0.0.0/1
next-hop 128.0.0.0 resolve no-readvertise

However this does not work:

set routing-instances internet routing-options static route 0.0.0.0/0
next-hop 128.0.0.0 resolve no-readvertise





Thanks,

Baldur




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] How to pick JUNOS Version

2020-08-19 Thread Andrey Kostin
Agree with Rx-S, and with a reasonably conservative approach x should 
be >= 3. In S1 and S2 you will probably get PR fixes affecting multiple 
previous releases, but for new R-specific PRs it takes time for them to 
be discovered and fixed, which usually takes not less than 6 months. 
Also, you may take into consideration that the last releases in a train 
usually have a longer support period.


Kind regards,
Andrey

Roger Wiklund wrote on 2020-08-19 11:12:
I'm not sure how long Arista can keep the single-binary approach as
they expand their portfolio and feature set. For example, it makes
very little sense to have full BNG code on EX access switches; the
image would be huge.

As for the JTAC recommended release, it's a very generic recommendation
that doesn't take specific use cases into consideration (except for
EVPN-VXLAN CRB/ERB).
Typically Juniper considers R3 releases to be mainstream-adoptable
(reality is more like R3-S), but you will sleep better if you do proper
testing to avoid regression bugs etc.

is more like R3-S) but you will sleep better if you do proper
testing and to avoid regression bugs etc.

You can always ask your friendly SE for some guidance.

/Roger


On Wed, Aug 19, 2020 at 4:46 PM Colton Conor  
wrote:


How do you plan which JUNOS version to deploy on your network? Do you 
stick to KB21476 - JTAC Recommended Junos Software Versions, or go a 
different route? Some of the JTAC recommended code seems to be very 
dated, but that is probably by design for stability.

https://kb.juniper.net/InfoCenter/index?page=content=KB21476=METADATA

Just wondering if JUNOS will ever go to a unified code model like Arista
does? The amount of PRs and bug issues in JUNOS seems overwhelming. Is
this standard across vendors? I am impressed that Juniper takes the time
to keep track of all these issues, but I am unimpressed that there are 
this many bugs.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Experience and opinion about ACX5448

2020-03-19 Thread Andrey Kostin

Hi juniper-nsp,

Looking for your opinions about the ACX5448, its limitations, and how it 
differs from the QFX5120. I know that it's based on another chipset, but 
more details would be appreciated.
I'm particularly interested in the -D model; it looks attractive to not 
need extra DWDM gear just to connect a single device. How competitive is 
it from a price perspective?


Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-17 Thread Andrey Kostin
Your 960 will be choked if you are going to push a decent traffic volume 
through it. And circulation through the backplane, to and from the 
service cards, will only make it worse.


Just imho. Your choice.

Kind regards,
Andrey Kostin

Aaron Gould wrote 2020-03-09 09:18:

In my case, 960 has a lot of slots, and I use slot 0 and slot 11 for
MPC-7E-MRATE to light up 100 gig east/west ring and 40 gig south to ACX
subrings, so I have plenty of slot space for my MS-MPC-128G nat 
module... If
I place it somewhere else, then I gotta cross the network to some 
extent to
get to it... also, my dual 100 gig inet connections are on a couple of 
those
960's where I colo the mpc-128g card, yeah, it's all right there.  Not 
the
case for dsl nat, that's across the network in a couple mx104's, but 
dsl

doesn't have near the speeds that my ftth and cm subs have.

-Aaron


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] MX960 vs MX10K

2020-03-06 Thread Andrey Kostin
I'd be +1 for this. For a DC GW the main concern should be reliability 
and simplicity. If you are going to bring EVPN there, then having fancy 
services mixed on the same chassis may affect your uptime.
Also, I'd take an MX480 instead of a 960 because of the architectural 
compromises of the latter. I'm also wondering: if the MX960 fits in 
terms of number of ports and capacity with some slots occupied by 
service cards, maybe an MX10003 + MX480 (or virtualized services) would 
do the job?


Kind regards,
Andrey


Chris Kawchuk wrote 2020-03-04 22:32:

Just to chime in --- for scale-out, wouldn't you be better offloading
those MS-MPC functions to another box? (i.e. VM/Dedicated
Appliance/etc..?).

You burn slots for the MSMPC plus you burn the backplane crossing
twice; so it's at worst a neutral proposition to externalise it and
add low-cost non-HQoS ports to feed it.

or is it the case of limited space/power/RUs/want-it-all-in-one-box?
and yes, MS-MPC won't scale to Nx100G of workload.

- CK.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Next-table, route leaking, etc.

2020-02-25 Thread Andrey Kostin
Faced the same issue and found out that a generated route works in my 
case. It may not be flexible enough if multiple active next hops exist 
at the same time in the routing instance, but it's OK for a simple 
primary/backup scenario.
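
For illustration, something along these lines (instance name, policy name, and contributor prefix are all made up); the generated default inherits a real forwarding next hop from its primary contributing route instead of ending up as a discard:

set routing-instances INTERNET-VRF routing-options generate route 0.0.0.0/0 policy DEFAULT-CONTRIB
set policy-options policy-statement DEFAULT-CONTRIB term UPSTREAM from route-filter 192.0.2.0/24 exact
set policy-options policy-statement DEFAULT-CONTRIB term UPSTREAM then accept
set policy-options policy-statement DEFAULT-CONTRIB then reject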


Kind regards,
Andrey Kostin


 Original message 
From: Nathan Ward <juniper-...@daork.net>
Date: 2/9/20 6:08 PM (GMT-09:00)
To: Juniper NSP <juniper-nsp@puck.nether.net>

Subject: [j-nsp] Next-table, route leaking, etc.

Hi all,

Something that’s always bugged me about JunOS, is when you import a 
route from another VRF on JunOS, the attributes follow it - i.e. if 
it is a discard route, you get a discard route imported.
(Maybe this happens on other platforms, I honestly can’t remember, 
it’s been a while..)


This is an issue where you have a VRF with say a full table in it, 
and want to generate a default discard for other VRFs to import if 
they want internet access. Works great if the VRF importing it is on 
a different PE, but, if it’s local it simply gets a discard route, 
so packets get dropped rather than doing a second lookup.


You can solve this, sort of, with a next-table route, but things can 
get a little messy, so hoping for something more elegant.


I’m trying to figure out if there’s a better way to do this, i.e. to 
make it as though packets following leaked routes behave as though 
they are from a different router.


Anyone got any magic tricks I’ve somehow missed?




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] arp from correct IP address

2020-01-27 Thread Andrey Kostin
Interesting. A while ago I observed that "preferred" doesn't work for 
IPv6. I opened a TAC case and was eventually told that "it doesn't work 
for IPv6". It turns out it's also broken for IPv4, but we do PPPoE, so 
DHCP runs only for IPv6 and we never hit the IPv4 issue. The workaround 
in my case was to use the broadband loopback address as primary; 
thankfully it's not as critical as the IPv4 primary loopback.
As we are looking into a possible IPoE implementation for some services, 
thanks for the heads up.
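
For the record, a minimal sketch of where the knob under discussion hangs in a dynamic profile (the profile name is made up; per this thread the variable currently resolves to NONE instead of a subnet-local address):

set dynamic-profiles IPOE-PROFILE interfaces demux0 unit "$junos-interface-unit" family inet unnumbered-address lo0.1 preferred-source-address "$junos-preferred-source-address"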


Kind regards,
Andrey Kostin

Baldur Norddahl wrote 2020-01-27 00:24:
Yes, subscriber management has a lot of small but important things that 
are not quite "done". Juniper should put on a task force to get all the 
bugs sorted out. Could be a great system if they allow it to be.

For me the trouble with this is that without functioning ARP the 
customer
becomes "MAC locked". If he wants to upgrade his equipment, he has to 
call
us so we can clear his session. We have two routers and sometimes a 
user

somehow manages to register with different MAC addresses on the two.
Needless to say that creates a lot of trouble that will not sort itself
out. With functioning ARP I believe the wrong MAC address would be
corrected soon enough without intervention.

I wish I could just have a user defined radius variable and use that
instead of $junos-preferred-source-address. My script that generates 
that
radius configuration could easily calculate the correct source address 
and

program that in with the other radius variables for each user.

I am not creating a JTAC case on this before I have a fix for my other 
JTAC
cases (IPv6 is broken, dynamic VLAN with IP demux on top is broken, 
DHCP

combined with non-DHCP is likely also broken). So far I got IPv4 fixed
(access-internal routes ignored, work around use access routes), so 
they do

work on the problems I report.

Regards,

Baldur


On Mon, 27 Jan 2020 at 04:53, Chris Kawchuk wrote:



Ran into the same bug.

$junos-preferred-source-address for an unnumbered interface for BNG 
functions does NOT return the "closest/most suitable address" based on 
the IP+subnet that was given to the subscriber... contrary to the BNG 
template documentation. It just defaults to the actual loopback of the 
router. (The dynamic template that gets created against a demux0. 
subscriber says $preferred of "NONE".)


This means that things like subscriber "ARP liveliness detection" 
don't/can't work (since the subscriber won't respond to an ARP request 
where the source isn't in the local subnet).

I've had a JTAC case open on this for 8 months. Sent full configs, built 
a full lab for them (so they could trigger it remotely), sent full 
PCAPs.


MX204 + JunOS 18.3R + BNG (DHCP/IPoE naturally)

Also on MX80 w/same code - so it's the BNG code, not the platform 
doing it.


- Ck.




On 25 Jan 2020, at 10:27 pm, Baldur Norddahl  
wrote:


Hello

I have a problem where some customer routers refuse to reply to arp 
from

our juniper mx204. The arp will look like this:

11:57:46.934484 Out arp who-has 185.24.169.60 tell 185.24.168.248

The problem is that this should have been "tell 185.24.169.1" because 
the

client is in the 185.24.169.0/24 subnet. The interface is
"unnumbered-address lo0.1" with lo0.1 having both 185.24.168.248 and
185.24.169.1 among many others. A Linux box would select the nearest
address but apparently junos does not know how to do this.

Tried adding in "preferred-source-address 
$junos-preferred-source-address"
but this just results in "preferred-source-address NONE" and does 
nothing.
Also there is zero documentation on how junos will fill in that 
variable.


Is there a solution to this? Is there a radius variable I can set with 
the

preferred source address?

Regards,

Baldur


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN on QFX5200

2019-09-26 Thread Andrey Kostin

Hi Vincent,

Thank you for the good advice. I had seen this page before, but reviewed 
it again now. According to it, only the second option could qualify, and 
I'm going to test it. Anyway, for the final solution the QFX10K will be 
in consideration.


Kind regards,
Andrey

Vincent Bernat wrote 2019-09-26 02:49:

Hello,

The QFX5110 is unable to route between a VXLAN and a layer 3 interface.
There is a hack documented here:




Such a setup is quite fragile. Only the QFX10k is able to act as a L3
gateway for VXLAN and be connected to non-VXLAN stuff. QFX5110 is only
able to act as a L3 gateway when routing between VXLANs.
--
Watch out for off-by-one errors.
- The Elements of Programming Style (Kernighan & Plauger)


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN on QFX5200

2019-09-25 Thread Andrey Kostin

Thank you for the reply.
I meant a slightly different thing. Currently my setup is at the lab 
stage, with QFX5110 as spines and QFX5000 as leaves. I need to connect 
VLANs running in the EVPN-VXLAN fabric to an aggregation router, ideally 
two of them for redundancy. To have a redundant gateway for hosts 
sitting in VNIs I need to run an EVPN L3 gateway somewhere. It can be 
done either on the aggregation routers or on the QFX5110s. Putting the 
L3GW on the routers means they have to run EVPN as well and effectively 
become leaves of the VXLAN fabric. That may be a feasible solution in 
the future, but for now we don't want to put EVPN-VXLAN in the prod 
network. So the other option is to run L3 gateways on the spines and 
somehow route them to the agg routers. Possible connectivity options 
between edge routers and spines could be (a rough sketch of the first 
one follows below):
- individual P2P routed links spine-RTR, with a BGP session between 
them. Balancing and redundancy in this case are provided by BGP+ECMP and 
also limited by their capabilities.
- LACP to both spines from each RTR, then an L3 interface on each spine 
and BGP from each spine to each RTR. Load balancing is provided by BGP 
multipath+ECMP+LACP. In this case the LACP bundle is switched from the 
spines' POV, and a direct connection between the spines is necessary. 
Routers in this topology play the CE role for the VXLAN fabric but are 
connected to spines instead of leaves.
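
For illustration only, the first option on a spine might look roughly like this (interface, peer AS, and addressing are all made up):

set interfaces xe-0/0/47 unit 0 family inet address 10.255.0.0/31
set protocols bgp group TO-AGG-RTR type external
set protocols bgp group TO-AGG-RTR peer-as 65000
set protocols bgp group TO-AGG-RTR multipath
set protocols bgp group TO-AGG-RTR neighbor 10.255.0.1

With two spines and two routers that gives four such sessions, and BGP multipath plus ECMP takes care of the balancing.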


Any recommendations or links to BCP are appreciated.

Kind regards,
Andrey

Vincent Bernat wrote 2019-09-21 01:34:

❦ 20 September 2019 11:47 -04, Andrey Kostin :


I am not familiar with MPLS. You need to use QFX10k for the spines as
the QFX5k are not able to route VXLAN outside (or not able to route at
all).


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN on QFX5200

2019-09-20 Thread Andrey Kostin

Hi Vincent,

Thank you for elaborating on this; I had the same question when I read 
your reply.
It may not be an issue for a small deployment but should definitely be 
considered in terms of BCP.


Could you advise on the various external connectivity options for an 
EVPN-VXLAN fabric? Let's say there are two spines that centrally route 
the VXLAN VNIs, and some leaves. The spines are CEs from the core MPLS 
network's perspective. I understand that EVPN can be extended to the PE 
routers and the L3 gateways run on them, but probably not right now. 
What is a proper way to connect the spines to a PE router or a pair of 
PE routers? I'm looking into running EBGP from each spine to [each] PE 
router over a routed P2P interface. Are there possible flaws in this 
topology? Is a direct connection needed between the spines in this case?


Kind regards,
Andrey


Vincent Bernat wrote 2019-09-20 02:25:

❦ 20 September 2019 11:55 +12, Liam Farr :


I'm running VXLAN with ingress-node-replication in prod, can you
explain what you mean by havoc?


When using EVPN, prefer using "set protocols evpn multicast-mode
ingress-replication". Using "set vlans XXX vxlan
ingress-node-replication" will send replicated packets to all VTEP,
including the ones not advertising the Type 3 route. See
:


Retains the QFX10000 switch's default setting of disabled for ingress
node replication for EVPN-VXLAN. With this feature disabled, if a
QFX10000 switch that functions as a VTEP receives a BUM packet
intended, for example, for a physical server in a VLAN with the VNI of
1001, the VTEP replicates and sends the packet only to VTEPs on which
the VNI of 1001 is configured.
replicates and sends this packet to all VTEPs in its database,
including those that do not have VNI 1001 configured. To prevent a
VTEP from needlessly flooding BUM traffic throughout an EVPN-VXLAN
overlay network, we strongly recommend that if not already disabled,
you disable ingress node replication on each of the leaf devices by
specifying the delete vlans vlan-name vxlan ingress-node-replication
command.


In turn, this may exhaust the resources of the Broadcom
chipset (Trident2 or Trident2+) if you have a lot of VLANs and/or a lot
of VTEPs.

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN on QFX5200

2019-09-19 Thread Andrey Kostin

Hi Joe,

There are some documents on Juniper's website describing the principles 
and including configurations, like this:

https://www.juniper.net/us/en/training/jnbooks/day-one/data-center-technologies/data-center-deployment-evpn-vxlan/

Some parameters can vary, so it depends on what your requirements are.

You can also try using these scripts to generate configs for your 
specific setup:

https://github.com/JNPRAutomate/ansible-junos-evpn-vxlan/

Kind regards,
Andrey

Joe Freeman wrote 2019-09-16 15:52:
Does anyone have a working example for EVPN configuration on the QFX 
5200's

that they'd be willing to share?

I've got four 5200's split between two DC's with 2x100G links between 
each

pair in a mesh. I'd like to run EVPN on them such that my network
infrastructure between the sites is transparent to the servers.
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] RSVP-TE broken between pre and post 16.1 code?

2019-08-16 Thread Andrey Kostin

Could it be:
Number  PR1443811
Title   RSVP refresh-timer interoperability between 15.1 and 16.1+
Release Note

Path message with long refresh interval (equal to or more than 20 
minutes) from a node that does not support Refresh-interval Independent 
RSVP (RI-RSVP) is dropped by the receiver with RI-RSVP.


SeverityMinor
Status  Open
Last Modified   2019-07-26 08:51:18 EDT
Resolved In
Release junos
18.3R3  x
18.4R3  x
17.4R3  x
19.2R2  x
19.3R1  x
19.1R2  x
Product 	J Series, M Series, T Series, MX-series, EX Series, SRX Series, 
QFX Series, NFX Series, PTX Series

Functional Area software
Feature Group   Multiprotocol Label Switching (MPLS)
Workaround

1. Use default rsvp refresh-time config. No config is needed.
30 seconds in 15.1 and 20 minutes in 16.1+

2. If you must configure rsvp refresh-time, configure it to be less than 
20 minutes.

set protocols rsvp refresh-time 1199

Problem

Starting with Junos OS Release 16.1, RSVP Traffic Engineering (TE) 
protocol extensions to support Refresh-interval Independent RSVP 
(RI-RSVP) defined RFC 8370 for fast reroute (FRR) facility protection 
were introduced to allow greater scalability of label-switched paths 
(LSPs) faster convergence times and decrease RSVP signaling message 
overhead from periodic refreshes. RI-RSVP mode is enabled by default and 
includes protocol extensions to support RI-RSVP for FRR facility bypass 
originally specified in RFC 4090. The default refresh time for RSVP 
messages has increased from 30 seconds to 20 minutes.
In mixed environments, where a subset of LSPs traverse nodes that do not 
include this feature, Junos RSVP-TE running in enhanced FRR mode will 
automatically turn off the new protocol extensions in its signaling 
exchanges with nodes that do not support the new extensions. However, 
path messages with long refresh interval (equal to or more than 20 
minutes) from such nodes will be dropped by the receiver with RI-RSVP. 
It is assumed that non-RI-RSVP nodes should have lower refresh time 
because it is used for failure detection in non-RI-RSVP environments.


With this fix, configuring 'no-enhanced-frr-bypass' on 16.1+ nodes will 
solve the silent path message drop and will allow 20 minutes and higher 
refresh times to be used on non-RI-RSVP nodes.


Triggers

- 'protocols rsvp refresh-time 1200' or higher is used on a non-RI-RSVP 
node (Junos <16.1).

- There is a RI-RSVP (16.1 or later) node after non-RI-RSVP node.

Kind regards,
Andrey Kostin

adamv0...@netconsultings.com wrote 2019-08-16 06:01:

From: Nathan Ward 
Sent: Friday, August 16, 2019 8:39 AM

> On 1/07/2019, at 9:59 PM, adamv0...@netconsultings.com wrote:
>
>> From: Michael Hare 
>> Sent: Friday, June 28, 2019 7:02 PM
>>
>> Adam-
>>
>> Have you accounted for this behavioral change?
>>
>>
https://kb.juniper.net/InfoCenter/index?page=content=KB32883=
>> print=LIST==currentpaging
>>
> Thank you, yes please we're aware of that, but even with this the
> issue is still present if the refresh timer is not <1200 or CSPF is enabled.

I’m confused by this one - what’s the refresh timer and CSPF got to do 
with

it?


Not much; it's a bug. It appears from the logs that the path message
has "something different" in the ERO when CSPF is enabled, triggering
the bug ...

LSPs on 16.1 will do self-ping after they come up before they put 
traffic on
them. The lo0 filter has to permit that, or you’ve got to disable 
self-ping.



LSPs will do self ping when switching onto a new/optimized path, not
when the LSP is first brought up -which in this case doesn’t happen.

Or am I parsing this weird, and you’re saying this is still an issue 
even with the

self ping disabled (or permitted in filters), under those conditions?


Yes that is correct, this problem appears even before the self-ping is
engaged (the LSP is not even signalled -the RESV msg is never sent as
a response to PATH msg in this case).

adam

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Rock-solid JUNOS for QFX5100

2019-08-12 Thread Andrey Kostin

Hi Ross,

We are on 14.1X53 for our prod QFX5100s. We don't do BGP, VRRP, or PIM 
on them, but other features are similar to yours (we tried PIM once, but 
it behaved weirdly and we decided just not to do it). The only problems 
we saw with them were a few third-party QSFP issues, which we resolved 
by manipulating auto-negotiation, IIRC.
I'm currently looking at 18.2R3 as a potential candidate for the next 
step and testing it on QFX5110 atm; according to the release notes it 
has a bunch of fixes for bugs that were discovered in 17.x releases. 
Also, 17.4R3 is going to be released in August; I'm waiting for it for 
the subscriber-management routers, but it will have recent fixes for 
QFXs as well. In your case, though, it'll be interesting to learn JTAC's 
findings. If it's a new bug, it may take some time until it is resolved.
I'm also very suspicious when S-releases are shown as "recommended". I 
may be mistaken, but my understanding is that S-releases don't undergo 
the full testing routine and are verified only for the implemented bug 
fixes.

Please share the results of your JTAC investigation.

Kind regards,
Andrey Kostin


Ross Halliday wrote 2019-08-12 09:19:

Dear List,

I'm curious if anybody can recommend a JUNOS release for QFX5100 that
is seriously stable. Right now we're on the previously-recommended
version 17.3R3-S1.5. Everything's been fine in testing, and suddenly
out of the blue there will be weird issues when I make a change. I
suspect maybe they are related to VSTP or LAG, or both.

1. Add a VLAN to a trunk port, all the access ports on that VLAN
completely stopped moving packets. Disable/delete disable all of the
broken interfaces restored function. This happened during the day. I
opened a JTAC ticket and they'd never heard of an issue like this, of
course we couldn't reproduce it. I no longer recall with confidence,
but I think the trunk port may have been a one-member LAG (replacement
of a downstream switch).

2. New trunk port (a two-port LACP LAG) not sending VSTP BPDUs for
some VLANs. I'm not sure if it was coincidence or always broken as I
had recently began feeding new VSTP BPDUs (thus the root bridge
changed) before I even looked at this. Other trunk ports did not
exhibit the same issue. Completely deleted the LAG and rolled back to
fix. This was on a fresh turnup and luckily wasn't in a topology that
could form a loop.

Features I'm using include:

- BGP
- OSPF
- PIM
- VSTP
- LACP
- VRRP
- IGMPv2 and v3
- Routing-instance
- CoS for multicast
- CoS for unicast
- CoS classification by ingress filter
- IPv4-only
- ~7k routes in FIB (total of all tables)
- ~1k multicast groups


There are no automation features, no MPLS, no MC-LAG, no EVPN, VXLAN,
etc. These switches are L3 boxes that hand off IP to an MX core.
Management is in the default instance/table, everything else is in a
routing instance.

These boxes have us scared to touch them outside of a window as
seemingly basic changes risk blowing the whole thing up. Is this a
case where an ancient version might be a better choice or is this
release a lemon? I recall that JTAC used to recommend two releases,
one being for if you didn't require "new features". I find myself
stuck between the adages of "If it ain't broke, don't fix it" and
"Software doesn't age like wine". Given how poorly multicast seems to
be understood by JTAC I'm very hesitant to upgrade to significantly
newer releases.

If anybody can give advice or suggestions I would appreciate it 
immensely!


Thanks
Ross

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Non-dhcp users with subscriber management

2019-07-15 Thread Andrey Kostin

Hi Baldur,

Maybe this feature could be useful for you, despite being documented in 
a completely wrong place?

https://www.juniper.net/documentation/en_US/junos/topics/topic-map/dual-stack-pppoe-access-ndra.html#id-ip-demultiplexing-interfaces-on-packet-triggered-subscribers-services-overview

Kind regards,
Andrey

Baldur Norddahl wrote 2019-07-04 13:10:

Hello

I am new to Juniper MX. I successfully managed to configure customer
vlan with dynamic profiles for dhcp users. I attached the important
parts of the configuration at the end of this message.

In the real network we are using q-in-q double tagged vlans, but to
make thing simple I am working with single tagged vlans for my lab. We
have customers vlan, which is each customer has a unique vlan
combination.

My configuration will first cause a radius server to be queried for
the validity of the vlan. Then the DHCP server is queried and finally
the subscriber is active. This is working now.

The problem is that I want customers to be able to configure without
using DHCP. Each customer has a static IP configuration. When using
DHCP the customer will always get the same IP address. We then tell
the user that he can optionally use DHCP. Or he can use a static
configuration if he likes that better.

This is an existing ISP network working as described. We are working
to replace the old BRAS with Juniper MX204. So it would be nice if we
can keep it working like it is today.

I am a bit stuck on where to go from here. Most of the examples I find
are all assuming DHCP. I am thinking that it should be possible to
supply the customer IP address via Radius instead of DHCP.

If needed, I could find out which users are using static configuration
without DHCP and then have Radius return something different for those
users.

Anyone have some advice for me?

Regards,

Baldur

The working DHCP configuration:

system {
    services {
    subscriber-management {
    maintain-subscriber {
    interface-delete;
    }
    enable;
    }
    }
    dynamic-profile-options {
    versioning;
    }
}
chassis {
    network-services enhanced-ip;
}
access-profile rad;
interfaces {
    et-0/0/0 {
    flexible-vlan-tagging;
    auto-configure {
    vlan-ranges {
    dynamic-profile DYNINTF-1VLANS-DHCP-INET {
    accept any;
    ranges {
    any;
    }
    }
    authentication {
    password 12345678;
    username-include {
    user-prefix vlan;
    vlan-tags;
    }
    }
    access-profile rad;
    }
    }
    lo0 {
    unit 0 {
    family inet {
    address 1.2.3.4/32;
    }
    }
    }
}
forwarding-options {
    dhcp-relay {
    server-group {
    dhcp-group-1 {
    1.2.3.5;
    }
    }
    active-server-group dhcp-group-1;
    group dhcp-group-1 {
    relay-option-82;
    interface et-0/0/0.0;
    }
    }
}
access {
    radius-server {
    1.2.3.6 {
    secret "xxx"; ## SECRET-DATA
    source-address 1.2.3.4;
    }
    }
    profile rad {
    accounting-order radius;
    authentication-order radius;
    radius {
    authentication-server 1.2.3.6;
    accounting-server 1.2.3.6;
    options {
    revert-interval 0;
    }
    }
    accounting {
    order radius;
    immediate-update;
    update-interval 15;
    }
    }
}
dynamic-profiles {
    DYNINTF-1VLANS-DHCP-INET {
    interfaces {
    "$junos-interface-ifd-name" {
    unit "$junos-interface-unit" {
    proxy-arp restricted;
    vlan-id "$junos-vlan-id";
    family inet {
    unnumbered-address lo0.0;
    }
    }
    }
    }
    }
}



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] subscriber management not inserting any routes

2019-05-27 Thread Andrey Kostin

Hi Baldur,

Does this command show anything for you?

mx5-lab-2> show system subscriber-management route

Route:  10.0.255.2/32
 Route Type:   Local
 Next-Hop index:   0
Route:  100.64.1.15/32
 Route Type:   Access-internal
 Interface:demux0.3221225501
 Next-Hop index:   707
Route:  2001:db8:::2/128
 Route Type:   Local
 Next-Hop index:   0
Route:  x:x:x:x::1a/128
 Route Type:   Access-internal
 Interface:demux0.3221225502
 Next-Hop index:   707
Route:  fe80::8ae0:f3ff:fe7c:4cc0/128
 Route Type:   Local
 Next-Hop index:   0

My config is different: I'm testing the packet-triggered subscribers 
feature. In the dynamic profile I have the demux source defined under 
the family. Not sure if it applies to your case.


interfaces {
demux0 {
unit "$junos-interface-unit" {
demux-options {
underlying-interface "$junos-underlying-interface";
}
family inet {
demux-source {
$junos-subscriber-ip-address;
}
filter {
input "$junos-input-filter";
output "$junos-output-filter";
}
unnumbered-address "$junos-loopback-interface";
}
family inet6 {
filter {
input "$junos-input-ipv6-filter";
output "$junos-output-ipv6-filter";
}
demux-source {
"$junos-subscriber-ipv6-address";
}
unnumbered-address "$junos-loopback-interface";
}
}
}
}

Kind regards,
Andrey Kostin


Baldur Norddahl wrote 2019-05-18 11:05:

Hello

I am having trouble with subscriber management not inserting any
routes. Information is picked up from radius, such as this:

baldur@interxion-edge1> show subscribers
Interface IP Address/VLAN ID  User
Name  LS:RI
demux0.3221225472 195.192.249.104 vlan.1970-37  
default:internet
demux0.3221225473 195.192.249.69 vlan.1970-77  
default:internet

...

baldur@interxion-edge1> show interfaces demux0.3221225472
  Logical interface demux0.3221225472 (Index 536870919) (SNMP ifIndex 
20007)

    Flags: Up VLAN-Tag [ 0x8100.1970 0x8100.37 ]  Encapsulation: ENET2
    Demux:
  Underlying interface: xe-0/1/1 (Index 168)
    Bandwidth: 0
    Input packets : 3342925
    Output packets: 0
    Protocol inet, MTU: 1500
    Max nh cache: 0, New hold nh limit: 0, Curr nh cnt: 0, Curr new
hold cnt: 0, NH drop cnt: 0
  Flags: Unnumbered
  Donor interface: lo0.1 (Index 329)
  Addresses, Flags: Is-Primary
    Local: 185.24.168.248

baldur@interxion-edge1> show route 195.192.249.104

internet.inet.0: 769284 destinations, 771001 routes (769284 active, 0
holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

195.192.249.64/26  *[BGP/170] 4w5d 12:58:36, MED 0, localpref 100,
from 185.24.171.254
  AS path: ?, validation-state: unverified
    >  to 10.10.124.2 via xe-0/1/0.0, Push 164140,
Push 16467(top)

---

The subscriber interface is receiving packets but never sends anything
out. Also no route is added although the router seems to be aware of
the intended subscriber IP address. The route shown above is a /26 to
another router. I am expecting the subscriber management system to
override that with a /32 for this specific subscriber.

My setup is like this:

interfaces {
    xe-0/1/1 {
    flexible-vlan-tagging;
    auto-configure {
    stacked-vlan-ranges {
    dynamic-profile Auto-VLAN-Demux {
    accept inet;
    ranges {
    1970-1970,any;
    }
    access-profile prof1;
    }
    authentication {
    password "$ABC123";
    username-include {
    user-prefix vlan;
    vlan-tags;
    }
    }
    access-profile prof1;
    }
    }
    }
}

dynamic-profiles {
    Auto-VLAN-Demux {
    routing-instances {
    "$junos-routing-instance" {
    interface "$junos-interface-name";
    }
    }
    interfaces {
    demux0 {
    unit "$junos-interface-unit" {
    demux-source inet;
    demux {
    inet {
    address source;
    auto-configure {
 

Re: [j-nsp] EVPN/VXLAN experience

2019-03-28 Thread Andrey Kostin

Hi Sebastian,

Could you please clarify a little: does this limit on the number of 
bridge domains apply when you have the same 500 VLANs on all 30 AEs, or 
when each AE has its own unique 500 VNIs?

How is external connectivity implemented and for how many VNIs?

Kind regards,
Andrey

Sebastian Wiesinger wrote 2019-03-25 05:58:

* Rob Foehl  [2019-03-22 18:40]:
Huh, that's potentially bad...  Can you elaborate on the config a bit 
more?

Are you hitting a limit around ~16k bridge domains total?


Well we're just putting VLANs on LACP trunks like this:

ae0 {
mtu 9216;
esi {
00:00:00:00:00:00:00:01:01:01;
all-active;
}
aggregated-ether-options {
lacp {
active;
system-id 00:00:00:01:01:01;
hold-time up 2;
}
}
unit 0 {
family ethernet-switching {
interface-mode trunk;
vlan {
members STORAGE1;
}
}
}
}

VLANs are configured "as usual":

vlans {
STORAGE1 {
vlan-id 402;
vxlan {
vni 402;
}
}
}


If you have 30 AEs you will start hitting this when you put around 500
VLANs on the VLAN members list of all AEs (30 AEs x 500 VLANs is already
15,000 IFBDs, which together with the bridge domains approaches the
16,382 ceiling).

What I find irritating are the warnings around the evpn configuration:

evpn {
## Warning: Encapsulation can only be configured for an EVPN 
instance

## Warning
encapsulation vxlan;
## Warning: multicast-mode can only be configured in a virtual
switch instance
## Warning: Multicast mode can only be configured if
route-distinguisher is configured
multicast-mode ingress-replication;
## Warning: Extended VNI list can only be configured in a
virtual switch instance
extended-vni-list all;
}

This config works without problems and was the configuration we got
from Juniper in the beginning as well. Did not find an explanation for
the warnings when we initially provisioned this.

Regards

Sebastian


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] JunOS 18, ELS vs non-ELS QinQ native vlan handling.

2019-03-22 Thread Andrey Kostin

Hi Alexandre,

Did it pass frames without a C-tag in Junos versions < 18?

Kind regards,
Andrey

Alexandre Snarskii wrote 2019-03-22 13:03:

Hi!

Looks like JunOS 18.something introduced an incompatibility of native
vlan handling in QinQ scenario between ELS (qfx, ex2300) and non-ELS
switches: when ELS switch forwards untagged frame to QinQ, it now adds
two vlan tags (one specified as native for interface and S-vlan) 
instead

of just S-vlan as it is done by both non-ELS and 'older versions'.

As a result, if the other end of tunnel is non-ELS (or third-party)
switch, it strips only S-vlan and originally untagged frame is passed
with vlan tag :(

Are there any way to disable this additional tag insertion ?

PS: when frames sent in reverse direction, non-ELS switch adds only
S-vlan and this frame correctly decapsulated and sent untagged.

ELS-side configuration (ex2300, 18.3R1-S1.4. also tested with
qfx5100/5110):

[edit interfaces ge-0/0/0]
flexible-vlan-tagging;
native-vlan-id 1;
mtu 9216;
encapsulation extended-vlan-bridge;
unit 0 {
vlan-id-list 1-4094;
input-vlan-map push;
output-vlan-map pop;
}

(when native-vlan-id is not configured, untagged frames are not
accepted at all).

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN/VXLAN experience

2019-03-22 Thread Andrey Kostin
One more question just came to mind: what routing protocol do you use 
for the underlay, eBGP/iBGP/IGP? Design guides show examples with eBGP, 
but it looks like for a deployment that's not very big, IS-IS could do 
everything needed. What are the pros and cons of BGP vs IGP?


Kind regards,
Andrey

Andrey Kostin wrote 2019-03-22 09:46:

Thank you Sebastian for sharing your very valuable experience.

Kind regards,
Andrey

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EVPN/VXLAN experience

2019-03-22 Thread Andrey Kostin

Thank you Sebastian for sharing your very valuable experience.

Kind regards,
Andrey

Sebastian Wiesinger wrote 2019-03-22 04:39:

* Andrey Kostin  [2019-03-15 20:50]:
I'm interested to hear about experience of running EVPN/VXLAN, 
particularly
with QFX10k as L3 gateway and QFX5k as spine/leaves. As per docs, it 
should
be immune to any single switch downtime, so might be a candidate to 
really

redundant design.


All right here it goes:

I can't speak for QFX10k as spine but we have QFX5100 Leaf/Spine
setups with EVPN/VXLAN running right now. Switch downtime is no
problem at all, we unplugged a running switch, shut down ports,
unplugged cables between leaf & spine or leaf & client all while there
was storage traffic (NFS) active in the setup. Worst thing that
happened was that IOPS went down from 400k/s to 100k/s for 1-3 seconds.

What did bother us was that you are limited (at least on QFX5100) in
the amount of "VLANs" (VNIs). We were testing with 30 client
full-trunk ports per leaf and with that amount you can only provision
around 500 VLANs before you get errors and basically it seems you run
out of memory for bridge domains on the switch. This seems to be a
limitation by the chips used in the QFX5100, at least that's what I
got when I asked about it.

You can check if you know where:

root@SW-A:RE:0% ifsmon -Id | grep IFBD
 IFBD   :12884  0

root@SW-A:RE:0% ifsmon -Id | grep Bridge
 Bridge Domain  : 3502   0

These numbers combined need to be <= 16382.

And if you get over the limit these nice errors occur:

dcf_ng_get_vxlan_ifbd_hw_token: Max vxlan ifbd hw token reached 16382
ifbd_create_node: VXLAN IFBD hw token couldn't be allocated for 



Workaround is to decrease VLANs or trunk config.

Also you absolutely NEED LACP from servers to the fabric. 17.4 has
enhancements which will put the client ports in LACP standby when the
leaf gets separated from all spines.


As a downside I see the more complex configuration at least. Adding
vlan means adding routing instance etc. There are also other
questions, about convergence, scalability, how stable it is and code
maturity.


We have it automated with Ansible. Management access happens over OOB
(Mgmt) ports and everything is pushed by Ansible playbooks. Ansible
generates configuration from templates and pushes it to the switches
via netconf. I never would want to do this by hand. This demands a
certain level of structuring by every team (network, people doing the
cabling, server team) but it works out well for structured setups.

Our switch config looks like this:

--
user@sw1-spine-pod1> show configuration
## Last commit: 2019-03-11 03:13:49 CET by user
## Image name: jinstall-host-qfx-5-flex-17.4R2-S2.3-signed.tgz

version 17.4R1-S3.3;
groups {
/* Created by Ansible */
evpn-defaults { /* OMITTED */ };
/* Created by Ansible */
evpn-spine-defaults { /* OMITTED */ };
/* Created by Ansible */
evpn-spine-1 { /* OMITTED */ };
/* Created by Ansible - Empty group for maintenance operations */
client-interfaces;
}
apply-groups [ evpn-defaults evpn-spine-defaults evpn-spine-1 ];
--

So everything Ansible does is contained in apply-groups and is hidden. 
You can

immediately spot if something is configured by hand.

For code we're currently running on the 17.4 train which works mostly
fine, we had a few problems with third party 40G optics but these
should be fixed in the newest 17.4 service release.

Also we had a problem where new Spine/Leaf links did not come up but
these vanished after rebooting/upgrading the spines.

In daily operations it proves to be quite stable.


Best Regards

Sebastian


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] 400G is coming?

2019-03-15 Thread Andrey Kostin

Hi juniper-nsp,

Accidentally found that the MX series datasheet now mentions the MPC-10E 
with 400G ports:

https://www.juniper.net/assets/us/en/local/pdf/datasheets/1000597-en.pdf

"The MPC-10E line card is a key contributor to the service
provider transformation in the cloud era when deployed with
MX960, MX480, and MX240 platforms in a Juniper Secure
Automated Distributed Cloud environment. By providing the
underlying network infrastructure with scale, agility, routing
innovation, and pervasive security while incorporating universal
(10/40/100/400GbE) ports, the MPC-10E protects existing
investments with disaggregated software innovation and
infinite programmability. Built-in automation enables rapid
deployment without disrupting the existing MX960/MX480/
MX240 footprint. The MPC-10E line card is powered by the
new Juniper Si5 silicon, which enables the benefits highlighted
in Table 2."

MPC10E-10C
Modular port concentrator with 8xQSFP28 multirate
ports (10/40/100GbE) plus 2xQSFP56-DD multirate
ports (10/40/100/400GbE)
MPC10E-15C
Modular port concentrator with 12xQSFP28 multirate
ports (10/40/100GbE) plus 3xQSFP56-DD multirate
ports (10/40/100/400GbE)

Search on juniper.net returns very few results for MPC-10E.

Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] EX4600 or QFX5110

2019-03-15 Thread Andrey Kostin

Hi guys,

My 0.02: we use QFX5100 in VC and it's pretty solid. But. As mentioned, 
it's a single logical switch, and by design it can't run members with 
different Junos versions, which means downtime when you need to upgrade 
it. There is ISSU, but it has its own caveats, so be prepared to afford 
some downtime for a reboot. For example, there was an issue with QoS 
that required both a Junos and a host OS upgrade, so a full reboot was 
inevitable in that case. Maybe I'm missing something; I'd like to hear 
about your best practices regarding VC high availability.

For simple L3 routing the QFX5100 works well, but when I tried to run 
PIM on IRB interfaces it behaved in a strange way, so I had to roll back 
and move PIM to the routers because I didn't have time to investigate.
We run VCs with two members only. We tried EX4300 with up to 8 members, 
but it was very sluggish. Thankfully for us, 96 ports is enough for a 
ToR switch in most cases.
Regarding VCF, from reading the docs my understanding is that it's the 
same control plane as VC but with a spine-leaf topology instead of a 
ring. Because we use only 2-member VCs, there is no added value in it 
for us. It seems to me that VCF can't eliminate the concern about reboot 
downtime, and the more switches you have, the more impact you can get.


I'm interested to hear about experience running EVPN/VXLAN, particularly 
with QFX10k as the L3 gateway and QFX5k as spine/leaves. As per the 
docs, it should be immune to any single switch's downtime, so it might 
be a candidate for a really redundant design. As a downside I see the 
more complex configuration, at least: adding a VLAN means adding a 
routing instance, etc. There are also other questions about convergence, 
scalability, stability, and code maturity.
I'd appreciate it if somebody could share feedback about operating 
EVPN/VXLAN.


Kind regards,
Andrey


Graham Brown wrote 2019-03-12 15:40:

Hi Alex,

Just to add a little extra to what Charles has already said; the EX4600 
has been around for quite some time, whereas the QFX5110 is a much newer
product, so the suggestion for the QFX over the EX could have been down 
to this.

Have a look at the datasheets for any additional benefits that may suit 
one over the other: table sizes / port counts / protocol support etc. If 
in doubt between the two, quote out the solution for each variant and 
see how they best fit in terms of features and CAPEX/OPEX for your 
needs.

Just to echo Charles, remember that a VC / VCF is one logical switch 
from a control plane perspective, so if you have two ToRs per rack, 
ensure that the two are not part of the same VC or VCF. Then you can 
afford to lose a ToR / series of ToRs for maintenance without breaking a 
sweat.

HTH,
Graham

Graham Brown
Twitter - @mountainrescuer 
LinkedIn 


On Wed, 13 Mar 2019 at 08:00, Anderson, Charles R  wrote:

Spanning Tree is rather frowned upon for new designs (for good reasons).
Usually, if you have the ability to do straight L2 bridging, you can 
always do L3 on top of that.  A routed Spine/Leaf design with an 
EVPN-VXLAN overlay for L2 extension might be a good candidate and is 
typically the answer given these days.

I'm not a fan of proprietary fabric designs like VCF or MC-LAG.  VC is
okay, but I wouldn't use it across your entire set of racks, because you 
are creating a single management/control plane as a single point of 
failure with shared fate for the entire 6 racks.  If you must avoid L3 
for some reason, I would create an L2 distribution layer VC out of a 
couple of QFX5110s and dual-home independent Top Of Rack switches to 
that VC so each rack switch is separate.  I've used 2-member VCs with 
QFX5100 without issue. Just be sure to enable "no-split-detection" if 
and only if you have exactly 2 members.  Then interconnect the 
distribution VCs at each site with regular LAGs.
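
For reference, a two-member VC along those lines is only a few lines of config (a sketch; the serial numbers are placeholders):

set virtual-chassis preprovisioned
set virtual-chassis no-split-detection
set virtual-chassis member 0 role routing-engine serial-number TA0000000001
set virtual-chassis member 1 role routing-engine serial-number TA0000000002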

On Tue, Mar 12, 2019 at 06:36:49PM +, Alex Martino via juniper-nsp
wrote:
> Hi,
>
> I am seeking advices.
>
> I am working on a L2/L3 DC setup. I have six racks spread across two
locations. I need about 20 ports of 10 Gbps (*2 for redundancy) ports 
per
rack and a low bandwidth between the two locations c.a. 1 Gbps. 
Nothing

special here.
>
> At first sight, the EX4600 seems like a perfect fit with Virtual Chassis
feature in each rack to avoid spanning tree across all racks. 
Essentially,

I would imagine one VC cluster of 6 switches per location and running
spanning-tree for the two remote locations, where L3 is not possible.
>
> I have been told to check the QFX5110 without much context, other than
not do VC but only VCF with QFXs. As such and after doing my searches, 
my
findings would suggest that the EX4600 is a good candidate for VC but 
does
not support VCF, where the QFX5110 would be a good candidate for VCF 
but
not for VC (although the feature seems to be supported). And I have 
been

told to either use VC or VCF rather than MC-LAG.
>
> Any suggestions?
>
> 

Re: [j-nsp] DHCP Client subscriber management

2018-12-13 Thread Andrey Kostin


Hi Matthew,

In Junos there is no dedicated feature set for subscriber management; 
all features are included in the regular Junos package, but you need to 
activate some licenses to use them. If you already have a working AAA 
system for DHCP subscribers, Junos should be capable of working with it. 
Regarding hardware, I'd strongly recommend using physical routers for 
subscribers, at least 2 for redundancy, separate from the core routers; 
there is a new feature that allows virtualizing physical resources and 
dedicating ports to a particular virtual host, but it's very new. For 
the MX204 you probably need to check whether all the needed features are 
supported on that platform.


Kind regards,
Andrey

Matthew Crocker wrote 29.11.2018 12:41:

Hello,

  I currently have 4 MX480s (RE-S-2X00x6) running my core network and
I’m looking to manage about 20k FTTH residential subscribers.  The
customers are running DHCP clients (Calix ONT, Netgear residential
router).   What are my options for managing each DHCP request in 
JunOS

would the subscriber management feature set help?   On my old Redback
gear I could do Client Less IP.  I want to be able to support v4 (I
have enough v4 for 20k subs) and v6.   Customers will be groomed into
multiple 10G interfaces over VLANs.

The MX480s REs have spare CPU cores and RAM,  not sure if I could run
another VM on the RE to support customer management.  I’m also 
looking
at getting a couple new MX204s to support the subscribers and leave 
my

MX480s to core MPLS/BGP router work

Thanks

-Matt
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] SRX WAN DHCP

2018-12-10 Thread Andrey Kostin
Just adding that the new jdhcpd daemon was implemented long ago and 
coexisted with the old one at least in Junos 12 and 13, causing a lot of 
confusion. The old dhcpd daemon was (recently) removed, so the OP needs 
to adjust to the new config.
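
A minimal sketch of the new-style client config on current code (assuming ge-0/0/0.0 is the untrust WAN interface; adjust names to taste):

set interfaces ge-0/0/0 unit 0 family inet dhcp-client
set security zones security-zone untrust interfaces ge-0/0/0.0 host-inbound-traffic system-services dhcp

Then verify with "show dhcp client binding" and "show dhcp client statistics", as Eldon mentions below.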


Kind regards,
Andrey

Eldon Koyle wrote 10.12.2018 10:55:

I think the commands to check on the dhcp client are show dhcp client
binding and show dhcp client statistics.

There are a lot of bugs in -D70.

--
Eldon

On Mon, Dec 10, 2018 at 8:42 AM Mohammad Khalil  
wrote:


I cannot do the upgrade right now as I have to do the setup so 
quickly

What features should I enable ?

On Mon, 10 Dec 2018 at 17:40, Eldon Koyle <
ekoyle+puck.nether@gmail.com> wrote:

The firmware that ships on the SRX is missing a lot of features.  I 
would
recommend upgrading to the latest version in that code train, which 
is

15.1X49-D150.

--
Eldon

On Mon, Dec 10, 2018 at 1:00 AM Mohammad Khalil 


wrote:


Hello all
I have an old SRX which I configured it is WAN IP address using 
the below

command:
set interfaces ge-0/0/0 unit 0 family inet address dhcp

Now , I have replaced the box with a newer one (srx300 
15.1X49-D70.3)

but I
cannot find the command itself
I have tried the below:
set interfaces ge-0/0/0 unit 0 family inet dhcp-client
With no luck !
When I try to check :
show system services ? (no DHCP option is available)
I have configured:
set security zones security-zone untrust interfaces ge-0/0/0.0
host-inbound-services dhcp

Any ideas guys?

BR,
Mohammad
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper MPC2E-3D-NG-R-B vs MPC2E-3D-R-B

2018-11-02 Thread Andrey Kostin

Hi Pavel,

HQoS is needed for subscriber aggregation, exactly the case discussed in 
the other thread, "Juniper buffer float".
NG cards without HQoS can be configured for "flexible mode", which is 
HQoS limited to 32k queues. It's suitable, for example, if you have some 
number of VLANs and want to do per-VLAN shaping and queueing. For 
example, 32k queues at 4 queues per interface will allow serving 8000 
subinterfaces. On the MPC2E-NG it means you can do per-VLAN QoS on 8 
10G ports with 1000 VLANs on each port; not bad compared with the old 
non-NG non-Q cards, which can do queueing only on physical interfaces. 
It will work for a small-scale LNS, up to 8k subscribers, but watch out 
not to exceed that ;)
A full-scale NG HQoS card allows 512k queues, which (in theory) allows 
terminating 64k subscribers with 8 queues per interface.
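
For illustration, enabling the flexible mode and hanging a per-VLAN shaper off it looks roughly like this (a sketch; slot, interface, profile name, and rate are all made up, and IIRC the FPC restarts when the mode is committed):

set chassis fpc 1 flexible-queuing-mode
set interfaces xe-1/0/0 hierarchical-scheduler
set class-of-service traffic-control-profiles TCP-100M shaping-rate 100m
set class-of-service interfaces xe-1/0/0 unit 100 output-traffic-control-profile TCP-100M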


Kind regards,
Andrey

Pavel Lunin wrote 20.10.2018 05:12:

Hi,

If memory serves, MPC2-NG is much like MPC3-NG: one MPC5-like PFE 
inside. A
good question is what is the difference between the MPC2-NG and 
MPC3-NG...

I'll let the astute readers figure it out on their own.

Old MPC2 non-NG is the "first generation Trio"-based card: two PFE, 
one per
MIC. There is no much sens in buying it today, it's a 10-years old 
thing.


E vs. non-E - Saku is (as always) right, the non-E was the very very 
first

generation of MPC cards, which had a kind of broken oscillator (only
matters for SyncE applications). So we normally omit the E in every 
day

language today, as all MPC cards are E since many many years.

And yeah, "HQoS" (aka -Q/EQ version of cards) has nothing to do with 
the
NG/non-NG story. Most cards have a -Q version to support "HQoS". 
Shortly

speaking, it's for those folks who don't know what to do with their
employer's money.

--
Pavel





___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] LSP's with IPV6 on Juniper

2018-08-29 Thread Andrey Kostin

Hi Craig,

Recently I asked exactly the same question on this list: how legitimate 
is it to not use "family inet6 labeled-unicast explicit-null" and 
instead just change the next hop to an IPv4 address for the IPv6 BGP 
session? After some discussion I was pointed to RFC 4798, which states:


The 6PE routers MUST exchange the IPv6 prefixes over MP-BGP
  sessions as per [RFC2545] running over IPv4.  The MP-BGP Address
  Family Identifier (AFI) used MUST be IPv6 (value 2).  In doing 
so,

  the 6PE routers convey their IPv4 address as the BGP Next Hop for
  the advertised IPv6 prefixes.  The IPv4 address of the egress 6PE
  router MUST be encoded as an IPv4-mapped IPv6 address in the BGP
  Next Hop field.  This encoding is consistent with the definition
  of an IPv4-mapped IPv6 address in [RFC4291] as an "address type
  used to represent the address of IPv4 nodes as IPv6 addresses".

This is not exactly how it works in our case, because the next sentence 
states that a label MUST be provided for such prefixes:

 In addition, the 6PE MUST bind a label to the IPv6 prefix as per
  [RFC3107].  The Subsequence Address Family Identifier (SAFI) used
  in MP-BGP MUST be the "label" SAFI (value 4) as defined in
  [RFC3107].

For an IPv6 BGP session the AFI/SAFI is 2/1 instead of 2/4 as per the 
RFC; however, it works.
Just for the record, the possible AFI/SAFI combinations can be found here: 
https://www.juniper.net/documentation/en_US/junos/topics/usage-guidelines/routing-enabling-multiprotocol-bgp.html


Following example makes me thinking that if IPv6 unicast session is 
configured between mapped IPv4 addresses it may work without any 
next-hop tooling and traffic will use MPLS tunnels if they exist:

https://www.juniper.net/documentation/en_US/junos/topics/example/bgp-ipv6.html

You are probably also aware that you have to run IPv6 in the core, 
because the explicit-null label is not assigned in this case and you 
need family inet6 on the ingress interface of the egress PE. As long as 
this condition is met it works; no caveats or issues found so far.
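
For completeness, the by-the-book 6PE knobs discussed here boil down to 
two lines (the group name is a placeholder):

set protocols bgp group internal family inet6 labeled-unicast explicit-null
set protocols mpls ipv6-tunneling

With explicit-null the egress PE advertises label 2 for the v6 prefixes, 
so the penultimate hop never has to handle a bare IPv6 packet; 
ipv6-tunneling leaks inet.3 into inet6.3 as v4-mapped entries for 
next-hop resolution.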


craig washington wrote on 29.08.2018 10:55:


So my fix was leaving everything as is and just changing the next-hop
from self to the IPv4 address of the advertising PE under the v6 group,
which is basically what would be happening anyway if I deleted the
groups and added everything to the v4 group.


My overall goal was to try to get IPv6 prefixes to use the same LSPs
as their IPv4 counterparts with as little trouble as possible (not
adding new protocols or changing existing protocols if possible).

Simplest way I found was just changing the next hop. Everything
worked as expected when that was done.


I just didn't know if there was anything else anyone else was doing
or if anyone came across a similar situation.
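
For readers of the archive, a minimal sketch of the next-hop change 
Craig describes; the policy name, group name and address are 
placeholders:

set policy-options policy-statement V6-NEXTHOP term pe from family inet6
set policy-options policy-statement V6-NEXTHOP term pe then next-hop 192.0.2.1   ## IPv4 loopback of the advertising PE
set protocols bgp group internal-v6 export V6-NEXTHOP

Junos then announces the v6 prefixes with the v4-mapped form of that 
address, as discussed elsewhere in this thread.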





--
Kind regards,
Andrey Kostin

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Juniper vs Arista

2018-08-15 Thread Andrey Kostin

Pavel Lunin wrote on 14.08.2018 05:06:




Not sure, but at first glance it doesn't look like they've gained
more than they've lost with the JunosE to JUNOS BNG migration.


I didn't miss JunosE a single day after we finished the migration to MX.
The MX platform is not ideal and has its own quirks, but I doubt that an 
ideal BNG exists.


--
Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 6PE without family inet6 labeled-unicast

2018-07-31 Thread Andrey Kostin

Hi Aaron,

Possibly it could, but it definitely needs to be checked and tested for 
the possibility of unequal load-balancing. Since next-hop tooling is 
required anyway to process those prefixes differently from the others 
announced from the PEs, and traffic is actually sent via RSVP tunnels, 
the outcome would probably just be replacing one kind of complexity with 
another. It will be interesting to test, though.
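
A rough sketch of the anycast arrangement mentioned above, as I 
understand it; addresses and LSP names are placeholders, and I'm 
assuming the quoted "no-install" refers to the no-install-to-address 
knob:

set protocols mpls label-switched-path to-pe-anycast to 198.51.100.1      ## anycast loopback shared by both PE
set protocols mpls label-switched-path to-pe1-real to 192.0.2.11          ## real PE loopback
set protocols mpls label-switched-path to-pe1-real no-install-to-address  ## keep it out of inet.3 so only the anycast LSP attracts traffic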


BTW, thanks to all who replied! Looks like one of my previous messages 
didn't reach the list.


Kind regards
Andrey

Aaron Gould wrote on 30.07.2018 13:37:
" so for traffic load-balancing we change next-hop to anycast 
loopback

address shared by those two PE and use dedicated LSPs to that IP with
"no-install" for real PE loopback addresses"

Did you have to use this anycast method?... just wondering if bgp
multipathing would've worked in this case also...and if so, why was 
one

method chose over the other

-Aaron



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Junos 17.3R3 reloads RE that releases mastership

2018-07-30 Thread Andrey Kostin

Replying to my own question.

I received direct reply from one of juniper-nsp subscribers with link 
to article explaining this behavior:


This is FAD to ensure new backup synchronizes correctly.

https://kb.juniper.net/InfoCenter/index?page=content=KB32221


Kind regards,
Andrey



Andrey Kostin wrote on 30.07.2018 10:39:

Hello juniper-nsp,

Installed the new 17.3R3.9 for testing on a router acting as an L2TP LNS
and encountered strange behavior right off the bat. When mastership is
switched between REs, the former master RE reloads right after it
releases its mastership. The switchover itself goes smoothly; all
protocols survive, subscriber sessions stay online, etc. So overall it's
not harmful, but suspicious. Opened a JTAC case, but wondering if
anybody has already seen this in any release, or maybe it's normal in
the 17.x train? Before that, JTAC told me that 17.3R3 was going to be a
"golden release", so I didn't expect to face such an obvious problem.

Kind regards,
Andrey




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Junos 17.3R3 reloads RE that releases mastership

2018-07-30 Thread Andrey Kostin

Hello juniper-nsp,

Installed the new 17.3R3.9 for testing on a router acting as an L2TP LNS 
and encountered strange behavior right off the bat. When mastership is 
switched between REs, the former master RE reloads right after it 
releases its mastership. The switchover itself goes smoothly; all 
protocols survive, subscriber sessions stay online, etc. So overall it's 
not harmful, but suspicious. Opened a JTAC case, but wondering if 
anybody has already seen this in any release, or maybe it's normal in 
the 17.x train? Before that, JTAC told me that 17.3R3 was going to be a 
"golden release", so I didn't expect to face such an obvious problem.


Kind regards,
Andrey




___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 6PE without family inet6 labeled-unicast

2018-07-23 Thread Andrey Kostin
  

Hi Pavel,

Thanks for the details. Looks like it's all documented except the
next-hop conversion...

I guess that in "show route advertising-protocol" the address is shown
before conversion, because otherwise it would be invalid and could not
be announced...

Kind regards,

Andrey

Pavel Lunin wrote on 22.07.2018 17:55:

> Errata
>> So your BGP route will not be inactive because of the unreachable next-hop.
> So your BGP route *will be* inactive because of the unreachable next-hop.
>
> On Sun, Jul 22, 2018 at 11:52 PM, Pavel Lunin wrote:
>
>> On Sun, Jul 22, 2018 at 9:45 PM, Andrey Kostin wrote:
>>
>>> Hi Pavel,
>>>
>>> Thanks for replying. I understand how it works as soon as a proper
>>> next-hop is present in a route. My attention was attracted by the
>>> implicit next-hop conversion from a pure IPv4 address to an
>>> IPv4-mapped IPv6 next-hop, from "Nexthop: YYY.YYY.155.141" in the
>>> advertised route to "Protocol next hop: ::ffff:YYY.YYY.155.141" in
>>> the received route.
>>
>> This is normal. In order to announce an AFI/SAFI 2/1 update, you must
>> have an IPv6 next-hop. This is why it gets automatically converted.
>> If you enable BGP-LU, nothing will change in these terms; your
>> next-hop address will still be an IPv4-mapped IPv6 address. It will
>> just be labeled.
>> Same thing happens when you perform next-hop-self (or it's eBGP) for
>> an IPv6 route announced via an MP-BGP session over IPv4.
>> And ipv6-tunneling under the mpls stanza is what makes your LDP/RSVP
>> routes be leaked from inet.3 to inet6.3 with automatic v4-to-v6
>> mapping. It's syntactic sugar; you can do the same with policies,
>> explicitly leaking inet.3 to inet6.3.
>>
>>> I'm also wondering what could happen if there are no LSPs available,
>>> which is a rather unreal situation because everything will be broken
>>> anyway in this case.
>>
>> If no LSP/FEC is available for the v4-mapped IPv6 next-hop, you won't
>> have an LDP/RSVP route in inet.3, thus it won't be leaked to inet6.3.
>> So your BGP route will not be inactive because of the unreachable
>> next-hop. And no, it's not so unusual. You can easily have your IGP
>> up and running, but someone forgot to add MPLS on one of the core
>> interfaces. So your BGP session and routes are up, IGP works, but
>> there is no labeled next-hop in inet.3.
>> --
>> Pavel
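
A side note for the archive: the "disable PHP in the core" option Pavel 
mentions maps, at least with LDP, to a one-liner on the egress routers 
(a sketch, not from the original setup):

set protocols ldp explicit-null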



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 6PE without family inet6 labeled-unicast

2018-07-22 Thread Andrey Kostin
  

Hi Pavel,

Thanks for replying. I understand how it works as soon as a proper
next-hop is present in a route. My attention was attracted by the
implicit next-hop conversion from a pure IPv4 address to an IPv4-mapped
IPv6 next-hop, from "Nexthop: YYY.YYY.155.141" in the advertised route
to "Protocol next hop: ::ffff:YYY.YYY.155.141" in the received route.

Otherwise it all works as expected, considering that family inet6 is
enabled in the core.

I'm also wondering what could happen if there are no LSPs available,
which is a rather unreal situation because everything will be broken
anyway in this case.

Kind regards,

Andrey

Pavel Lunin wrote on 21.07.2018 06:44:

> In this setup it's not 6PE but just classic IP over MPLS, where
> vanilla inet/inet6 iBGP resolves its protocol next-hop with a labeled
> LDP/RSVP forwarding next hop.
> It works much the same way for v6 as for v4, except that the v6
> header is exposed to the last P router when it performs PHP. It still
> relies on MPLS to make the forwarding decision (if we don't take into
> account the hashing story), however it "sees" the v6 header when it
> puts it onto the wire, and needs to treat it accordingly. E.g. it
> must set the v6 ethertype or decide what to do if the egress
> interface MTU can't accommodate the packet.
> So you need family inet6 enabled on the egress interface of the
> penultimate LSR to make IPv6 over MPLS work.
> 6PE was invented to work around this. Technically it's the same IPv6
> over MPLS but with an explicit (as opposed to implicit) null label at
> the tail end, which hides the v6 header from the penultimate LSR. Or
> you can just disable PHP in the core.
>
> Cheers,
> Pavel
>
> Fri, Jul 20, 2018 at 21:59, Andrey Kostin:
>
>> Hello juniper-nsp,
>>
>> I've accidentally encountered an interesting behavior and am
>> wondering if anyone has already seen it before or maybe it's
>> documented. So pointing to the docs is appreciated.
>>
>> The story:
>> We began to activate ipv6 for customers connected from a cable
>> network after the cable provider eventually added ipv6 support. We
>> receive prefixes from the cable network via eBGP and then
>> redistribute them inside our AS with iBGP. There are two PE connected
>> to the cable network and receiving the same prefixes, so for traffic
>> load-balancing we change the next-hop to an anycast loopback address
>> shared by those two PE and use dedicated LSPs to that IP with
>> "no-install" for the real PE loopback addresses.
>> IPv6 wasn't deemed to use MPLS, and existing plain iBGP sessions
>> between IPv6 addresses with family inet6 unicast were supposed to be
>> reused. However, the same export policy with the term that changes
>> the next-hop for a specific community is used for both family inet
>> and inet6, so it started to assign an IPv4 next-hop to IPv6 prefixes
>> implicitly.
>>
>> Here is the example of one prefix.
>>
>> ## here PE receives prefix from eBGP neighbor:
>>
>> @re1.agg01.LLL2> show route ::e1bc::/46
>>
>> inet6.0: 52939 destinations, 105912 routes (52920 active, 1 holddown, 24 hidden)
>> + = Active Route, - = Last Active, * = Both
>>
>> ::e1bc::/46    *[BGP/170] 5d 13:16:26, MED 100, localpref 100
>>                  AS path:  I, validation-state: unverified
>>                  > to :::f200:0:2:2:2 via ae2.202
>>
>> ## Now PE advertises it to iBGP neighbor with next-hop changed to plain IP:
>> @re1.agg01.LLL2> show route ::e1bc::/46 advertising-protocol bgp ::1::1:140
>>
>> inet6.0: 52907 destinations, 105843 routes (52883 active, 6 holddown, 24 hidden)
>>   Prefix                  Nexthop          MED     Lclpref    AS path
>> * ::e1bc::/46             YYY.YYY.155.141  100     100         I
>>
>> ## Same output as above with details
>> {master}
>> @re1.agg01.LLL2> show route ::e1bc::/46 advertising-protocol bgp ::1::1:140 detail ## Session is between v6 addresses
>>
>> inet6.0: 52902 destinations, 105836 routes (52881 active, 3 holddown, 24 hidden)
>> * ::e1bc::/46 (3 entries, 1 announced)
>>  BGP group internal-v6 type Internal
>>      Nexthop: YYY.YYY.155.141 ## v6 prefix advertised with plain v4 next-hop
>>      Flags: Nexthop Change
>>      MED: 100
>>      Localpref: 100
>>      AS path: [] I
>>      Communities: :10102 no-export
>>
>> ## iBGP neighbor receives prefix with tooled next hop and uses
>> established LSPs to forward traffic:
>> u...@re0.bdr01.lll> show route ::e1bc::/46
>>
>> inet6.0: 52955 destinations, 323835 routes (52877 active, 10 holddown, 79 hidden)
>

Re: [j-nsp] 6PE without family inet6 labeled-unicast

2018-07-22 Thread Andrey Kostin

Hi Pedro,

Thanks for your comment. I agree with you that the penultimate LSR 
forwards traffic based on the received label, without an IPv6 lookup. In 
my scenario, default PHP is used and all routers have family inet6 
configured, so it just works.


Pedro Marques Antunes via juniper-nsp wrote on 21.07.2018 05:36:

The IPv6 explicit null label works as an overlay for the IPv6 traffic
carried over the core.

In a PHP scenario, the ultimate LSR will still forward based on the
received MPLS transport label. Therefore I do not think it will require
an IPv6 routing table. However, it still needs to forward IPv6 packets.
Not a problem with recent routers, but this might have been a problem in
the days when you would have devices without any IPv6 capabilities. On
Junos boxes, though, `family inet6` is still required on the egress
interface.

In a UHP scenario, the penultimate LSR is expected to forward a packet
with the IPv4 explicit null label (0). But this cannot be used with an
IPv6 packet. The overlay is mandatory in such a scenario.

On Friday, 20 July 2018 at 21:40:26 +0100, Dan Peachey wrote:



Hi,

Presumably the penultimate LSR has the IPv6 iBGP routes? i.e. it knows
how to IPv6 route to the destination. The last LSR->LER hop should just
be IPv6 routed in that case.

I've noticed this behaviour before whilst playing with 6PE on lab 
devices.

It would of course break if you were running IPv4 core only.

Cheers,

Dan
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] 6PE without family inet6 labeled-unicast

2018-07-22 Thread Andrey Kostin

Hi Dan,

Thanks for answering. All routers have family inet6 configured on all 
participating interfaces, because other v6 traffic is forwarded without 
MPLS, so we are safe for that.



Kind regards,
Andrey

Dan Peachey wrote on 20.07.2018 16:40:




 Hi,

Presumably the penultimate LSR has the IPv6 iBGP routes? i.e. it knows
how to IPv6 route to the destination. The last LSR->LER hop should just
be IPv6 routed in that case.

I've noticed this behaviour before whilst playing with 6PE on lab 
devices.

It would of course break if you were running IPv4 core only.

Cheers,

Dan
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] 6PE without family inet6 labeled-unicast

2018-07-20 Thread Andrey Kostin
   ulst  1050092 4
  YYY.YYY.155.14 ucst 1775 1 ae1.0
  YYY.YYY.155.9 Push 486887 1859 1 ae12.0
  YYY.YYY.155.95 ucst 2380 1 ae4.0
  YYY.YYY.155.9 Push 486892 2555 1 ae12.0


The result is that we have IPv6 traffic forwarded via MPLS without 6PE 
configured properly. ipv6-tunneling is configured under "protocols mpls" 
but no "family inet6 labeled-unicast explicit-null" under the v4 iBGP 
session.
It works as long as we have v6 enabled on all MPLS links, so packets are 
not dropped because of the implicit-null label.
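
For anyone reproducing this, the quick sanity check is whether inet6.3 
got populated with the v4-mapped entries that the BGP next hops resolve 
over; the mapped loopback below is a made-up example:

show route table inet6.3
show route table inet6.3 ::ffff:192.0.2.11/128 detail   ## v4-mapped copy of a PE loopback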

Looks sketchy but it works. Has anybody seen/used it before?

--
Kind regards,

Andrey Kostin
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp




Re: [j-nsp] QFX5100 ACLs

2017-12-11 Thread Andrey Kostin

Hi Alain,

Good to know that it works now. It was way back in February 2016 with 
13.2X51-D35.3, and below is the excerpt from the TAC case. We haven't 
been told, however, that a PR was raised to address the issue or that 
there are plans to resolve it.



Problem Description :

We use a common set of filters on all our juniper devices to protect the
control plane, and it turns out there is a strange problem with filters
on QFX switches.

When that input filter list is applied then at least ports tcp/22 and
tcp/179 are world-wide open.

Issue: Filter was not getting programmed in TCAM:

Action taken:

As per our latest communication, we have identified two reasons behind
the filters not getting programmed. First, the filter entries exceeded
the maximum TCAM entries. Second, we observed that the QFX platforms do
not support input-list. Although the config gets committed without any
error, only the first filter gets programmed in TCAM. We also provided a
sample configuration to demonstrate the ssh filter.

JTAC engineer's examples provided:


I have tried the following configs in the lab under 13.2X51-D35 and 
14.1X53-D30 and have observed the following:


   Config independent of the group:

set interfaces lo0 unit 0 family inet filter input-list [ accept-ftp 
accept-ssh ]


  Config within group:

set groups common:lo-filter interfaces lo0 unit 0 family inet filter 
input-list accept-ftp
set groups common:lo-filter interfaces lo0 unit 0 family inet filter 
input-list accept-ssh
In both cases, the configuration goes through without any error, but 
only the first filter (accept-ftp) actually gets programmed in the PFE, 
as can be observed below:



TFXPC0(vty)# show filter
Program Filters:
---
   Index Dir CntText Bss  Name
  --  --  --  --  


Term Filters:

   IndexSemantic   Name
  -- --
   1  Classicaccept-ftp
   2  Classicaccept-ssh
   3  Classiclo0.0-i
   17000  Classic__default_arp_policer__
16777216  Classicfnp-filter-level-all





TFXPC0(vty)# show filter hw 3 show_term_info
==
Filter index   : 3
==


- Filter name  : lo0.0-i
 + Programmed: YES
  + BD ID : 184
  + Total TCAM entries available: 1528
  + Total TCAM entries needed   : 8
  + Term Expansion:
- Term1: will expand to 1 term : Name "accept-ftp-0"
- Term2: will expand to 1 term : Name "accept-ftp-1"
  + Term TCAM entry requirements:
- Term1: needs 4 TCAM entries: Name "accept-ftp-0"
- Term2: needs 4 TCAM entries: Name "accept-ftp-1"
  + Total TCAM entries available: 1528
  + Total TCAM entries needed   : 8


Even the counters only show the counters for the first filter 
(accept-ftp) and not those for the following filters (accept-ssh)
in the input-list. The following is missing: count-accept-ssh-lo0.0-i.




Alain Hebert wrote on 11.12.2017 08:23:

    Hi,

    Odd.

    Model: qfx5100-48s-6q
    Junos: 17.2R1.13

    I've verified with both the "pfe shell" and a Nessus scan
TCP+UDP+Ports 1 thru 65535 and this input-list

     [ ICMP-FI OSPF-PEERS-FI LDP-PEERS-FI BGP-PEERS-FI
BFD-PEERS-FI VRRP-FI DHCP-FI -MGMT-FI DROP-FI ]

    Worked as advertised (for once).

-
Alain Hebert  aheb...@pubnix.net
PubNIX Inc.
50 boul. St-Charles
P.O. Box 26770 Beaconsfield, Quebec H9W 6G7
Tel: 514-990-5911  http://www.pubnix.net  Fax: 514-990-9443

On 12/10/17 12:39, Andrey Kostin wrote:

Hi Brendan,

If you use a filter input-list on the lo0 interface as per the "securing 
the RE" guide, it's not supported. Only the first filter in the list is 
programmed and everything else is ignored. We ran into the same issue 
and had to pull it out of JTAC to confirm.


Brendan Mannella wrote on 04.12.2017 15:51:

+ Programmed: YES
  + Total TCAM entries available: 1788
  + Total TCAM entries installed  : 516

Brendan Mannella

TeraSwitch Inc.
Main - 1.412.945.7045
Direct - 1.412.945.7049
eFax - 1.412.945.7049
Colocation . Cloud . Connectivity





On Mon, Dec 4, 2017 at 11:57 AM, Saku Ytti <s...@ytti.fi> wrote:


Hey Brendan,

This is news to me, but plausible. Can you do this for me

start shell pfe network fpc0
show filter

show filter hw  show_term_info

Compare how many TCAM entries are needed, and how many are 
avail

Re: [j-nsp] QFX5100 ACLs

2017-12-10 Thread Andrey Kostin

Hi Brendan,

If you use a filter input-list on the lo0 interface as per the "securing 
the RE" guide, it's not supported. Only the first filter in the list is 
programmed and everything else is ignored. We ran into the same issue 
and had to pull it out of JTAC to confirm.
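
For the archive: the practical workaround is to merge the terms into a 
single filter instead of an input-list, roughly like this (the filter, 
term and prefix-list names and the prefix are placeholders):

set policy-options prefix-list MGMT 203.0.113.0/24
set firewall family inet filter PROTECT-RE term ssh from source-prefix-list MGMT
set firewall family inet filter PROTECT-RE term ssh from protocol tcp
set firewall family inet filter PROTECT-RE term ssh from destination-port 22
set firewall family inet filter PROTECT-RE term ssh then accept
set firewall family inet filter PROTECT-RE term bgp from protocol tcp
set firewall family inet filter PROTECT-RE term bgp from port 179
set firewall family inet filter PROTECT-RE term bgp then accept
set firewall family inet filter PROTECT-RE term deny-rest then discard
set interfaces lo0 unit 0 family inet filter input PROTECT-RE

A single filter gets programmed in TCAM, so nothing is silently skipped 
- at the cost of maintaining one big filter per platform.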


Brendan Mannella wrote on 04.12.2017 15:51:

+ Programmed: YES
  + Total TCAM entries available: 1788
  + Total TCAM entries installed  : 516

Brendan Mannella

TeraSwitch Inc.
Main - 1.412.945.7045
Direct - 1.412.945.7049
eFax - 1.412.945.7049
Colocation . Cloud . Connectivity





On Mon, Dec 4, 2017 at 11:57 AM, Saku Ytti <s...@ytti.fi> wrote:


Hey Brendan,

This is news to me, but plausible. Can you do this for me

start shell pfe network fpc0
show filter

show filter hw  show_term_info

Compare how many TCAM entries are needed, and how many are 
available.


Also if you can take a risk of reloading the FPC run:
show filter hw  show_terms_brcm

This may crash your PFE, if you actually did not have all of the
entries programmed in HW.


commit will succeed if you build a filter which will not fit in HW;
there should be a syslog entry, but no complaint during commit. You will
end up having no filter or some mangled version of it. So it's just an
alternative theory on why you may be accepting something you thought
you aren't.


On 4 December 2017 at 18:02, Brendan Mannella
<bmanne...@teraswitch.com> wrote:
> Hello,
>
> So i have been testing QFX5100 product for use as a core L3
> switch/router with BGP/OSPF. I have my standard RE filter blocking
> various things including BGP from any unknown peer. I started to
> receive errors in my logs showing BGP packets getting through from
> hosts that weren't allowed. After digging around i found that Juniper
> apparently has built in ACL to allow BGP, which bypasses my ACLs,
> probably for VCF or something.. Is there any way to disable this
> behavior or does anyone have any other suggestions?
>
> root@XXX% cprod -A fpc0 -c "show filter hw dynamic 47 show_terms"
>
> Filter name  : dyn-bgp-pkts
> Filter enum  : 47
> Filter location  : IFP
> List of tcam entries : [(total entries: 2)
> Entry: 37
> - Unit 0
> - Entry Priority 0x7FFC
> - Matches:
> PBMP 0x0001fffc
> PBMP xe
> L4 SRC Port 0x00B3 mask 0x
> IP Protocol 0x0006 mask 0x00FF
> L3DestHostHit 1 1
> - Actions:
> ChangeCpuQ
> ColorIndependent param1: 1, param2: 0
> CosQCpuNew cosq: 30
> Implicit Counter
> Entry: 38
> - Unit 0
> - Entry Priority 0x7FFC
> - Matches:
> PBMP 0x0001fffc
> PBMP xe
> L4 DST Port 0x00B3 mask 0x
> IP Protocol 0x0006 mask 0x00FF
> L3DestHostHit 1 1
> - Actions:
> ChangeCpuQ
> ColorIndependent param1: 1, param2: 0
> CosQCpuNew cosq: 30
> Implicit Counter
>]
> ___
> juniper-nsp mailing list juniper-nsp@puck.nether.net
> https://puck.nether.net/mailman/listinfo/juniper-nsp



--
  ++ytti


_______
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


--
Kind regards,
Andrey Kostin
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Using a QFX5100 without QFabric?

2017-10-27 Thread Andrey Kostin

Chris Wopat wrote on 25.10.2017 13:00:

On 10/24/2017 05:30 PM, Vincent Bernat wrote:

  ❦ 24 October 2017 14:29 -0400, Andrey Kostin <ank...@podolsk.ru> :




Straight up saying "don't put public IPs on them" doesn't seem like
the best advice to me. You can certainly do this, we do and it's 
fine.

When you craft your RE protection filter you just have to squeeze a
bit more space here or there compared to, say, an MX filter. You should
have this enabled whether you're using public IPs or not.

Regarding TCAM programming, it's loud and clear when this happens via
a console message and a sev0 syslog message.


Yes, that's true, and we spent a decent amount of time packing lo0 
filters into a tiny TCAM after we discovered that a filter input-list 
silently allows everything except the first filter and doesn't generate 
any complaint.
So, no objection to public IPs, but careful filter planning is required.


--
Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Using a QFX5100 without QFabric?

2017-10-27 Thread Andrey Kostin

Vincent Bernat wrote on 24.10.2017 18:30:

❦ 24 October 2017 14:29 -0400, Andrey Kostin <ank...@podolsk.ru> :


QFX5100 are good as L2 devices for aggregation; we use them in
virtual-chassis. But be careful with planning any L3 services on
them. First, don't put public IPs on them, because the TCAM for filters
is tiny and programmed in a way that is tricky to understand. As a
result, everything that doesn't fit in the TCAM is silently allowed. We
observed that lo0 filters were "bypassed" this way and the switch was
exposed to a continuous brute-force attack.


That's scary! I remember having a commit error when I set too many
filters (in fact, too many source/destination combinations, solved by
removing either the source or the destination from the filter), so there
are some checks in place. Which version were you using when you got the
problem? Is there an easy way to check if we are hit by that?


At that moment (Feb 2016) it was 13.2X51-D35.3.
As I can see from the link posted in the thread, MPLS on IRB is not 
supported yet, probably a hardware limitation.


Here is the conclusion from JTAC case:
Problem Description :

We use a common set of filters on all our juniper devices to protect the
control plane, and it turns out there is a strange problem with filters
on QFX switches.

When that input filter list is applied then at least ports tcp/22 and
tcp/179 are world-wide open.

Issue: Filter was not getting programmed in TCAM:

Action taken:

As per our latest communication, we have identified two reasons behind
the filters not getting programmed. First, the filter entries exceeded
the maximum TCAM entries. Second, we observed that the QFX platforms do
not support input-list. Although the config gets committed without any
error, only the first filter gets programmed in TCAM. We also provided a
sample configuration to demonstrate the ssh filter.

JTAC engineer's examples provided:


I have tried the following configs in the lab under 13.2X51-D35 and 
14.1X53-D30 and have observed the following:


   Config independent of the group:

set interfaces lo0 unit 0 family inet filter input-list [ accept-ftp 
accept-ssh ]


  Config within group:

set groups common:lo-filter interfaces lo0 unit 0 family inet filter 
input-list accept-ftp
set groups common:lo-filter interfaces lo0 unit 0 family inet filter 
input-list accept-ssh
In both cases, the configuration goes through without any error, but 
only the first filter (accept-ftp) actually gets programmed in the PFE, 
as can be observed below:



TFXPC0(vty)# show filter
Program Filters:
---
   Index Dir CntText Bss  Name
  --  --  --  --  


Term Filters:

   IndexSemantic   Name
  -- --
   1  Classicaccept-ftp
   2  Classicaccept-ssh
   3  Classiclo0.0-i
   17000  Classic__default_arp_policer__
16777216  Classicfnp-filter-level-all





TFXPC0(vty)# show filter hw 3 show_term_info
==
Filter index   : 3
==


- Filter name  : lo0.0-i
 + Programmed: YES
  + BD ID : 184
  + Total TCAM entries available: 1528
  + Total TCAM entries needed   : 8
  + Term Expansion:
- Term1: will expand to 1 term : Name "accept-ftp-0"
- Term2: will expand to 1 term : Name "accept-ftp-1"
  + Term TCAM entry requirements:
- Term1: needs 4 TCAM entries: Name "accept-ftp-0"
- Term2: needs 4 TCAM entries: Name "accept-ftp-1"
  + Total TCAM entries available: 1528
  + Total TCAM entries needed   : 8


Even the counters only show the counters for the first filter 
(accept-ftp) and not those for the following filters (accept-ssh)
in the input-list. The following is missing: count-accept-ssh-lo0.0-i.






The second thing I can recall is that MPLS works only on physical
interfaces, not irb. And finally, I had very mixed results when I tried
PIM multicast routing between irb interfaces and had to give up and
pass L2 to a router; I didn't try it on physical ports, though.


I also had some bad experience with IRB on QFX5100. For example,
unnumbered interfaces don't work on IRB. Also, I have already related
here my troubles with IRB, routing daemons and MC-LAG. For some reason,
it seems many features don't play well with IRB (at least on the
14.1X53 train). I am now using them as L2 switches and as BGP RRs (but
no routing) and so far, no problems.




--
Kind regards,

Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] Using a QFX5100 without QFabric?

2017-10-24 Thread Andrey Kostin
QFX5100 are good as L2 devices for aggregation; we use them in 
virtual-chassis. But be careful with planning any L3 services on them. 
First, don't put public IPs on them, because the TCAM for filters is 
tiny and programmed in a way that is tricky to understand. As a result, 
everything that doesn't fit in the TCAM is silently allowed. We observed 
that lo0 filters were "bypassed" this way and the switch was exposed to 
a continuous brute-force attack. The second thing I can recall is that 
MPLS works only on physical interfaces, not irb. And finally, I had very 
mixed results when I tried PIM multicast routing between irb interfaces 
and had to give up and pass L2 to a router; I didn't try it on physical 
ports, though.


Kind regards,
Andrey Kostin


Matt Freitag wrote on 24.10.2017 09:26:
Karl, we're also looking at QFX5100-48S switches for our aggregation. I
actually have one in place doing aggregation and routing and the only
"big" change I found is the DHCP forwarder config is not remotely
similar to the forwarding-options helpers bootp config we've been using
to forward DHCP on our MX480 core. But that only counts if you do
routing and DHCP forwarding at the QFX.

But, if you want to do routing and DHCP forwarding on this, any
forwarding in the default routing instance goes under
forwarding-options dhcp-relay and any DHCP forwarding in a non-default
routing instance goes under routing-instances INSTANCE-NAME
forwarding-options dhcp-relay.

There are a ton of DHCP relay options but we found we just need a
server group that contains all our DHCP servers and an interface group
that ties an interface to a server group.
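
A rough sketch of that minimal layout; the group names, server
addresses and interfaces are placeholders:

set forwarding-options dhcp-relay server-group DHCP-SERVERS 192.0.2.53
set forwarding-options dhcp-relay active-server-group DHCP-SERVERS
set forwarding-options dhcp-relay group ACCESS interface irb.100
## same shape inside a non-default routing instance:
set routing-instances CUST1 forwarding-options dhcp-relay server-group C1-SERVERS 198.51.100.53
set routing-instances CUST1 forwarding-options dhcp-relay active-server-group C1-SERVERS
set routing-instances CUST1 forwarding-options dhcp-relay group ACCESS interface irb.200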

Again I only bring the DHCP relay stuff up because we've been using
forwarding-options helpers bootp on our MX's to do DHCP forwarding and
the QFX explicitly disallows that in favor of the dhcp-relay.

Other than that initial confusion we've not had a problem and I'm very
interested in any issues you hear of. This QFX I'm talking about runs
Junos 14.1X53-D40.8.

I'm also very interested in any other issues people have had doing 
this.


Matt Freitag
Network Engineer
Information Technology
Michigan Technological University
(906) 487-3696
https://www.mtu.edu/
https://www.mtu.edu/it

On Tue, Oct 24, 2017 at 8:41 AM, Karl Gerhard <karl_g...@gmx.at> 
wrote:



Hello

we're thinking about buying a few QFX5100 as they are incredibly cheap
on the refurbished market - sometimes even cheaper than a much older
EX4550.

Are there any caveats when using the QFX5100-48S as a normal aggregation
switch without QFabric? We have a pretty basic setup of Access (EX),
Aggregation (EX or QFX) and Core (MX). We're only switching at our
aggregation layer but we would like to have options for the future.

Regards
Karl

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

[j-nsp] RE-S-X6 experience

2017-08-22 Thread Andrey Kostin
Reviving an old thread: has anybody already tried/tested the new RE-S-X6 
and can share experience with it, any gotchas?


The current recommended Junos version for MX, 15.1F6-S6/15.1R6, already 
covers the first supported release (15.1F4, 16.1). Is there any reason 
we should stay away from it?


--
Kind regards,
Andrey Kostin
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] PIM-ASM on QFX5100

2017-06-14 Thread Andrey Kostin

Hi all,

Does anybody have experience with using PIM routing on QFX5100 
switches? I use PIM on irb interfaces in both upstream and downstream 
directions. Receivers are directly connected to the downstream irb 
interface, and the upstream irb interface connects to an MX router. In 
my setup SSM groups work, and PIM joins are forwarded upstream 
hop-by-hop towards the source addresses, but I can't make it work for 
ASM groups: it looks like registers never make it to the remote RP, or 
are never processed if the RP is configured locally on the QFX. MX 
routers do encapsulation/decapsulation of PIM register packets forwarded 
to the RP; is the QFX capable of doing this, and does it require special 
configuration or maybe a special license?
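
For context, the QFX-side config in question is nothing exotic - roughly 
this, with addresses and interfaces as placeholders:

set protocols pim rp static address 192.0.2.100        ## RP on the remote MX
set protocols pim interface irb.100 mode sparse        ## downstream, receivers
set protocols pim interface irb.200 mode sparse        ## upstream, towards the MX

The open question is the register path: the first-hop router has to 
unicast-encapsulate source traffic towards that RP, and that's the part 
that appears not to happen here.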


Kind regards,
Andrey

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] options for adding communities to an EVPN routing-instance?

2016-05-13 Thread Andrey Kostin
I was able to add ordinary communities to l2vpn NLRIs via a vrf-export 
policy attached to the routing-instance, to allow them to later pass the 
route reflector's policies. The small caveat is that vrf-export 
overrides the default policy generated by vrf-target, and both 
communities (target:x:x and ASN:x) must be _added_ in the policy, but 
it's documented pretty clearly. Maybe it will work this way for evpn as 
well.
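
A sketch of what that looks like; the route-target is the one from the 
thread, while the extra "ordinary" community value, names and instance 
are placeholders:

set policy-options community RT-200 members target:64900:200
set policy-options community MY-TAG members 65000:12345
set policy-options policy-statement VRF200-EXPORT term 1 then community add RT-200
set policy-options policy-statement VRF200-EXPORT term 1 then community add MY-TAG
set policy-options policy-statement VRF200-EXPORT term 1 then accept
set routing-instances EVPN200 vrf-export VRF200-EXPORT

Note that both communities are added: once vrf-export is present, the 
implicit vrf-target export policy is gone.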


WBR,
Andrey


Adam Vitkovsky wrote on 11.05.2016 10:04:

Michael Hare
Sent: Saturday, April 23, 2016 12:12 AM

Does anyone know if it is possible and how to add communities to routes
to an EVPN routing-instance in the instance configuration itself?  For
example, in bgp.evpn.0, I have

2:a.b.c.d:200::1900::00:1f:45:a0:1b:bb/304 (2 entries, 0 announced) ...

Communities: target:64900:200

I'd like to be able to add, for example, $MYISP:12345 to the mac
announcements.  I haven't tried but am guessing I could do this in the
IBGP export policy using 'from instance' but this is suboptimal because
then my PE will need different export policies whereas they are
currently now all congruent.


Very interesting question indeed,
and I believe it's valid requirement as well.

I'm just trying to find out, to no avail, if one can control what MAC
addresses make it from MAC address table to MP-BGP and with what
attributes.
If such a policy attachment point existed, one could tag MAC
addresses with standard communities there (but I think no such thing
exists in Junos or XR).

So when you tried to tag the MAC routes using iBGP peer export policy
-has that worked please?


adam


Adam Vitkovsky
IP Engineer



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] MPC-3D-16XGE-SFPP at 1G speed

2016-04-29 Thread Andrey Kostin

Or you can consider a new feature called Fusion ;)
http://www.juniper.net/techpubs/en_US/junos14.2/information-products/pathway-pages/junos-fusion/junos-fusion.html

Michael Loftis wrote on 26.04.2016 17:07:
Yeah those are specifically NOT 1/10, just 10G.  In general with the big
MXes the MICs won't do 1/10.  For 1G you need like MIC-3D-20GE-SFP on an
MPC or like the DPCE-*-40GE-SFP or similar-ish.  It might be cheaper to
just use a cheap EX3300 or EX4300 w/ 2x10 (for redundancy) to the MX if
you've a fair amount of 1G that you're connecting... or really... if
you're connecting 1G at all to the MX rather than burning a slot (MIC
slot or router slot) on 1G interfaces.

I actually can't think of ANY line card nor MIC for MX that does 1/10...


On Tue, Apr 26, 2016 at 12:23 PM, Dave Peters - Terabit Systems <
d...@terabitsystems.com> wrote:


Hi all--

Stupid question, here. Can the MPC-3D-16XGE-SFPP run with 1G optics
(e.g. EX-SFP-1GE-SX), and if so, is there a specific port setting I need
to commit? I'm running an MX480 with 13.3R8.7, and Uncle Google hasn't
been too useful, yet.

I tried:

set interfaces xe-0/0/0 auto-negotiate

inserted the EX-SFP-1GE-SX connected to an outside 1G port, no lights,
no joy.

Any help is appreciated.

Thanks much.

--Dave Peters
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp



--
Kind regards,
Andrey
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp

Re: [j-nsp] ACX is just not there (was Re: EX4550 L2Circuit/VPN to MX80/lt Interface)

2014-12-05 Thread Andrey Kostin

Mark,

Can you share what kind of MPLS signalling (RSVP, LDP) and backup 
technologies (FRR, LFA, etc.) you use in rings of ME3600?


Kind regards,
Andrey Kostin

Mark Tinka wrote on 13.11.2014 19:08:

On Thursday, November 13, 2014 05:09:49 PM Phil Bedard
wrote:


Maybe vMX is the answer to a 1U MX at this point,
depending on the throughput you really need.


This is only useful where you need a cheap router for some
routing and port density is of no concern. So route
reflectors, simple routing in the data centre, enterprise
office routers, e.t.c.

The reason we deploy ME3600X's is MPLS in fibre access
rings. vMX won't be of any use there.

Mark.


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp