Re: [j-nsp] evpn with vrf

2019-06-10 Thread Jason Lixfeld
So JunOS supports draft-rabadan-sajassi-bess-evpn-ipvpn-interworking-02 then?

> On Jun 10, 2019, at 4:21 PM, Aaron Gould  wrote:
> 
> Seems that EVPN-learned destinations get auto-exported as /32s into the vrf
> that the IRB is attached to.
> 
> Is this possible with the inet.0 global route table?
> 
> In other words, in a vrf table I see evpn-learned routes listed like this...
> 
> 172.223.10.10/32   *[EVPN/7] 00:00:03
>> via irb.0
> 
> ... how would I get this same behavior if the irb.0 interface was in inet.0
> routing domain and not vrf ?
> 
> -Aaron

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] evpn with vrf (change to evpn inside inet.0 and igp advertise evpn /32's)

2019-06-10 Thread Aaron Gould
I think I got it.  This works to get evpn host routes into ospf.  Is there a
better way?

set policy-options policy-statement my-ospf-export-policy term 1 from protocol evpn

set policy-options policy-statement my-ospf-export-policy term 1 then accept

set protocols ospf export my-ospf-export-policy
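
A tighter variant (untested sketch; the route-filter prefix and the type-1 external are assumptions, not something verified here) would limit the export to the host routes from the IRB subnet so EVPN routes from other bridge domains stay out of the IGP:

set policy-options policy-statement my-ospf-export-policy term 1 from protocol evpn
set policy-options policy-statement my-ospf-export-policy term 1 from route-filter 172.223.10.0/24 prefix-length-range /32-/32
set policy-options policy-statement my-ospf-export-policy term 1 then external type 1
set policy-options policy-statement my-ospf-export-policy term 1 then accept
set protocols ospf export my-ospf-export-policy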

-Aaron


After putting the above evpn ospf export on an evpn pe, I see this on a
non-evpn ospf router across the network...

root@blvr-witness> show route table inet.0 172.223.10.0/24

inet.0: 39 destinations, 39 routes (39 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.223.10.0/24    *[OSPF/10] 00:54:18, metric 4
                    > to 10.103.130.245 via ge-0/0/9.0
172.223.10.10/32   *[OSPF/150] 00:02:54, metric 0, tag 0
                    > to 10.103.130.245 via ge-0/0/9.0
172.223.10.11/32   *[OSPF/150] 00:01:57, metric 0, tag 0
                    > to 10.103.130.245 via ge-0/0/9.0
172.223.10.20/32   *[OSPF/150] 00:02:54, metric 0, tag 0
                    > to 10.103.130.245 via ge-0/0/9.0
172.223.10.21/32   *[OSPF/150] 00:01:57, metric 0, tag 0
                    > to 10.103.130.245 via ge-0/0/9.0


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] evpn with vrf (change to evpn inside inet.0 and igp advertise evpn /32's)

2019-06-10 Thread Aaron Gould
Oh dang, hang on... I just removed irb.0 from the vrf and let it sit in the
inet.0 global table... and I DO see the evpn routes in inet.0 now...

So I think my question is actually this: when I have evpn with the irb inside
a vrf, MP-iBGP advertises all those evpn /32's to the other remote PEs in
that vrf.  Great.  But with the evpn irb inside inet.0, how do I get something
like ospf to do the same?  How do I get ospf to advertise all those evpn
/32 host routes?

I would think that for efficient routing to the evpn hosts in a data center
that is stretched across many DCs, I need the IGP to advertise those evpn
/32's throughout the domain.


root@stlr-960-e> show route table inet.0 172.223.10.0/24

inet.0: 42 destinations, 43 routes (42 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.223.10.0/24    *[Direct/0] 00:01:34
                    > via irb.0
                    [Direct/0] 00:01:34
                    > via irb.0
172.223.10.1/32    *[Local/0] 00:01:34
                      Local via irb.0
172.223.10.5/32    *[Local/0] 00:01:34
                      Local via irb.0
172.223.10.10/32   *[EVPN/7] 00:01:21
                    > via irb.0
172.223.10.11/32   *[EVPN/7] 00:00:59
                    > to 10.103.129.14 via ae0.0, Push 301728, Push 299840(top)
172.223.10.20/32   *[EVPN/7] 00:01:09
                    > via irb.0
172.223.10.21/32   *[EVPN/7] 00:00:17
                    > to 10.103.129.14 via ae0.0, Push 301728, Push 299840(top)


-Aaron

___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] evpn with vrf

2019-06-10 Thread Aaron Gould
Seems that EVPN-learned destinations get auto-exported as /32s into the vrf
that the IRB is attached to.

Is this possible with the inet.0 global route table?

In other words, in a vrf table I see evpn-learned routes listed like this...

172.223.10.10/32   *[EVPN/7] 00:00:03
> via irb.0

... how would I get this same behavior if the irb.0 interface was in inet.0
routing domain and not vrf ?
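
For reference, a rough sketch of the kind of config in play here (the RD/RT
values, vlan-id, the choice of a vlan-based evpn instance with
routing-interface, and the split of .1/.5 between virtual-gateway and IRB
address are guesses, not the actual config):

set routing-instances 10 instance-type evpn
set routing-instances 10 vlan-id 10
set routing-instances 10 interface ae141.0
set routing-instances 10 routing-interface irb.0
set routing-instances 10 route-distinguisher 10.103.128.10:10
set routing-instances 10 vrf-target target:65000:10
set routing-instances 10 protocols evpn
set interfaces irb unit 0 family inet address 172.223.10.5/24 virtual-gateway-address 172.223.10.1
set routing-instances one instance-type vrf
set routing-instances one interface irb.0
set routing-instances one route-distinguisher 10.103.128.10:100
set routing-instances one vrf-target target:65000:100

With irb.0 listed under the vrf "one", the EVPN /32s land in one.inet.0 as
shown below; the question is how to get the equivalent once irb.0 is not
placed in any routing instance and sits in inet.0.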

-Aaron





Details.


root@stlr-960-e> show evpn database
Instance: 10
VLAN  DomainId  MAC address        Active source   Timestamp        IP address
10              00:00:00:00:00:01  irb.0           Jun 10 15:13:59  172.223.10.1
                                                                    172.223.10.5
10              00:50:79:66:68:21  ae141.0         Jun 10 15:12:06
10              00:50:79:66:68:23  ae141.0         Jun 10 15:10:53
10              02:05:86:71:f1:02  10.103.128.9    Jun 10 14:10:25

root@stlr-960-e> show route table one.inet.0

one.inet.0: 3 destinations, 4 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.223.10.0/24    *[Direct/0] 00:00:38
                    > via irb.0
                    [Direct/0] 00:00:38
                    > via irb.0
172.223.10.1/32    *[Local/0] 00:00:38
                      Local via irb.0
172.223.10.5/32    *[Local/0] 00:00:38
                      Local via irb.0

root@stlr-960-e> ping 172.223.10.10 routing-instance one
PING 172.223.10.10 (172.223.10.10): 56 data bytes
64 bytes from 172.223.10.10: icmp_seq=0 ttl=64 time=391.814 ms
64 bytes from 172.223.10.10: icmp_seq=1 ttl=64 time=118.886 ms
^C
--- 172.223.10.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 118.886/255.350/391.814/136.464 ms

root@stlr-960-e> show route table one.inet.0

one.inet.0: 4 destinations, 5 routes (4 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.223.10.0/24    *[Direct/0] 00:00:58
                    > via irb.0
                    [Direct/0] 00:00:58
                    > via irb.0
172.223.10.1/32    *[Local/0] 00:00:58
                      Local via irb.0
172.223.10.5/32    *[Local/0] 00:00:58
                      Local via irb.0
172.223.10.10/32   *[EVPN/7] 00:00:03
                    > via irb.0

root@stlr-960-e>

root@stlr-960-e> ping 172.223.10.20 routing-instance one
PING 172.223.10.20 (172.223.10.20): 56 data bytes
64 bytes from 172.223.10.20: icmp_seq=0 ttl=64 time=437.254 ms
64 bytes from 172.223.10.20: icmp_seq=1 ttl=64 time=161.525 ms
^C
--- 172.223.10.20 ping statistics ---
3 packets transmitted, 2 packets received, 33% packet loss
round-trip min/avg/max/stddev = 161.525/299.389/437.254/137.865 ms

root@stlr-960-e> show route table one.inet.0

one.inet.0: 5 destinations, 6 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.223.10.0/24    *[Direct/0] 00:01:11
                    > via irb.0
                    [Direct/0] 00:01:11
                    > via irb.0
172.223.10.1/32    *[Local/0] 00:01:11
                      Local via irb.0
172.223.10.5/32    *[Local/0] 00:01:11
                      Local via irb.0
172.223.10.10/32   *[EVPN/7] 00:00:16
                    > via irb.0
172.223.10.20/32   *[EVPN/7] 00:00:03
                    > via irb.0

root@stlr-960-e>


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Simulate minimum-links for ordinary interfaces?

2019-06-10 Thread Per Westerlund

Thanks, good suggestion.

Haven’t used that before. Given that input, this is what I will try:

- Add a dummy linknet to each tunnel interface, since RPM and
  IP-monitoring work with addresses, not directly with interfaces
- Use two RPM probes on the primary links so the failure tests are
  independent of each other
- Use one IP-monitoring policy matching both RPM probes, so routing
  changes as soon as either of the two links fails (rough sketch below)
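
Something along these lines is what I have in mind (completely untested
sketch; the 192.0.2.x linknet addresses, names and thresholds are
placeholders, and whether preferred-route can be injected into outbound-vr
directly still needs checking):

set services rpm probe PROBE-ST0-1 test PING probe-type icmp-ping
set services rpm probe PROBE-ST0-1 test PING target address 192.0.2.1
set services rpm probe PROBE-ST0-1 test PING probe-count 3
set services rpm probe PROBE-ST0-1 test PING probe-interval 2
set services rpm probe PROBE-ST0-1 test PING test-interval 10
set services rpm probe PROBE-ST0-1 test PING thresholds successive-loss 3
set services rpm probe PROBE-ST0-1 test PING destination-interface st0.1
set services rpm probe PROBE-ST0-1 test PING routing-instance outbound-vr
# ... same again as PROBE-ST0-2 towards 192.0.2.5 via st0.2 ...
set services ip-monitoring policy FAIL-TO-SECONDARY match rpm-probe PROBE-ST0-1
set services ip-monitoring policy FAIL-TO-SECONDARY match rpm-probe PROBE-ST0-2
set services ip-monitoring policy FAIL-TO-SECONDARY then preferred-route route 1.2.3.4/32 next-hop 192.0.2.9

The idea being: if either probe trips, the policy goes to FAIL and the
preferred route towards the secondary-site linknet gateway (192.0.2.9 here)
takes over; once the probes pass again it is withdrawn and traffic falls
back to st0.1/st0.2.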


/Per

PS: Results will be reported once I’m done


On 10 Jun 2019, at 16:34, Hansen, Christoffer wrote:


On 10/06/2019 09:44, p...@westerlund.se wrote:
> I know that almost anything can be solved with event-scripts triggered
> by link-up/down for st0.X, but that kind of configuration is somewhat
> hidden, and also probably difficult to get completely correct.

Either use the event-script triggering you wanted to avoid in the first
place, or alternatively change to dynamic routing between the sites?

Static routes with Real-time Performance Monitoring (RPM) is my suggestion.

https://www.juniper.net/documentation/en_US/release-independent/nce/topics/task/configuration/internet-protocol-route-monitoring.html

Christoffer


___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


Re: [j-nsp] Simulate minimum-links for ordinary interfaces?

2019-06-10 Thread Hansen, Christoffer

On 10/06/2019 09:44, p...@westerlund.se wrote:
> I know that almost anything can be solved with event-scripts triggered
> by link-up/down for st0.X, but that kind of configuration is somewhat
> hidden, and also probably difficult to get completely correct.

Either use the event-script triggering you wanted to avoid in the first
place, or alternatively change to dynamic routing between the sites?

Static routes with Real-time Performance Monitoring (RPM) is my suggestion.

https://www.juniper.net/documentation/en_US/release-independent/nce/topics/task/configuration/internet-protocol-route-monitoring.html

Christoffer



___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp


[j-nsp] Simulate minimum-links for ordinary interfaces?

2019-06-10 Thread p1

Hi!

I have not been able to figure out how to "disable" the remaining 
interfaces among a set of interfaces when one goes down. Is it even 
possible? I'm looking for something like "minimum-links" for LAGs.
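
(For comparison, the LAG behaviour I'd like to mimic is the one-liner below,
where ae0 is just an example bundle: with it, the whole ae is taken down as
soon as fewer than two members are up. I want that all-or-nothing behaviour
for a set of plain st0.X interfaces.)

set interfaces ae0 aggregated-ether-options minimum-links 2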


The background is that we are using an external service that is 
filtering our outbound traffic. The connection is set up using IPsec 
tunnels. One tunnel is not enough, we have to load-balance over more 
than one to have enough total bandwidth (load-balancing is set up and 
works well).


There is one primary filtering site, and a secondary site. All traffic 
is routed to the same IPv4-address that exists in both sites.


Here is a configuration example:

ladmin@srx-1> show configuration routing-instances outbound-vr
instance-type virtual-router;
interface st0.1; # Primary site
interface st0.2; # Primary site
interface st0.3; # Secondary site
interface st0.4; # Secondary site
routing-options {
    static {
        route 1.2.3.4/32 {
            qualified-next-hop st0.1 {
                metric 1;
            }
            qualified-next-hop st0.2 {
                metric 1;
            }
            qualified-next-hop st0.3 {
                metric 2;
            }
            qualified-next-hop st0.4 {
                metric 2;
            }
        }
    }
}

If st0.1 goes down, st0.2 cannot handle all of the load, so we want to 
move all of the traffic to st0.3 and st0.4 instead. Ideally, once st0.1 
recovers, the traffic should move back to st0.1 and st0.2.


Is this possible to do in a good way?

I know that almost anything can be solved with event-scripts triggered 
by link-up/down for st0.X, but that kind of configuration is somewhat 
hidden, and also probably difficult to get completely correct.
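
To illustrate the "hidden" part, roughly what it would take (untested
sketch; the event attribute matching and the deactivate target are guesses,
and a mirror-image policy for link-up would also be needed):

set event-options policy ST0-1-DOWN events snmp_trap_link_down
set event-options policy ST0-1-DOWN attributes-match snmp_trap_link_down.interface-name matches st0.1
set event-options policy ST0-1-DOWN then change-configuration commands "deactivate routing-instances outbound-vr routing-options static route 1.2.3.4/32 qualified-next-hop st0.2"

i.e. when st0.1 drops, also pull st0.2 out of the forwarding decision so
everything falls to the metric-2 next-hops, and re-activate it when st0.1
comes back. That would do the job, but it is exactly the sort of thing that
is easy to get subtly wrong.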



Any hints appreciated.

/Per Westerlund
___
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp