Hi,

When will Junos for the EX9k that supports EVPN+VXLAN be released, and which 
software version will it be?

Regards,
TomekC




On Thu, Jul 16, 2015 at 9:53 AM -0700, "Nischal Sheth" <[email protected]> 
wrote:

I should have added that there will be an official build of 14.2R4 with the EVPN
VXLAN code in a few weeks.



-Nischal



On Jul 16, 2015, at 9:45 AM, Nischal Sheth <[email protected]> wrote:

I don't think 15.1R1 has the EVPN VXLAN code.



-Nischal



On Jul 16, 2015, at 9:41 AM, Dan Houtz <[email protected]> wrote:

Hi Nischal,




Thanks for this. I will try and test this today.




On another note, I'm curious whether the 15.1R1 Junos release supports the latest 
Contrail 2.2. If so, it might make sense to move from our daily build of 14.2 
to something that's officially supported.




-Dan




On Wed, Jul 15, 2015 at 4:04 PM, Nischal Sheth 
<[email protected]> wrote:

Hi Dan,



Here are config snippets showing how traffic would cross between a VRF and inet.0.
I haven't included the configs for the virtual-switch (i.e. L2) instances for 
public1 and public2.  In the example below, irb1 and irb2 are the routing 
interfaces for the BDs in the virtual-switch instances.



10.1.1/24 is the subnet assigned to public1
10.1.2/24 is the subnet assigned to public2



I've also not included detailed VRF configs for public1 and public2 (e.g. vrf
import and export policies), just the portions needed for routing.



Let me know if you have questions.



-Nischal




[edit interfaces]
+   irb {
+       unit 1 {
+           family inet {
+               address 10.1.1.254/24;
+           }
+       }
+       unit 2 {
+           family inet {
+               address 10.1.2.254/24;
+           }
+       }
+   }





[edit forwarding-options family inet filter]
+     input redirect-to-public-vrfs;





[edit firewall]



+   filter redirect-to-public-vrfs {
+       /* redirect traffic to public1 */
+       term t1 {
+           from {
+               destination-address {
+                   10.1.1.0/24;
+               }
+           }
+           then {
+               routing-instance public1;
+           }
+       }
+       /* redirect traffic to public2 */
+       term t2 {
+           from {
+               destination-address {
+                   10.1.2.0/24;
+               }
+           }
+           then {
+               routing-instance public2;
+           }
+       }
+       /* continue lookup in current table (inet.0) */
+       term default {
+           then accept;
+       }
+   }





[edit routing-instances]
+   public1 {
+       interface irb.1;
+       routing-options {
+           static {
+               route 0.0.0.0/0 next-table inet.0;
+           }
+       }
+   }
+   public2 {
+       interface irb.2;
+       routing-options {
+           static {
+               route 0.0.0.0/0 next-table inet.0;
+           }
+       }
+   }

On Jul 14, 2015, at 12:45 AM, Dan Houtz <[email protected]> wrote:





Hi Nischal,


Thank you for the detailed reply. I may have a few follow-up questions in the 
coming days, but for now I only have two:


1. Do you have an example config of what you are thinking DM might push to the 
MX to route between a VRF and inet.0? Even if I could apply that by default 
right now, it would be helpful.


2. Any ETA on the bugs you referenced, and will there be a reasonably easy way to 
make use of the fixes prior to official 2.21 packages being released?


Thanks!

Dan
On Jul 14, 2015 12:29 AM, "Nischal Sheth" <[email protected]> wrote:





On Jul 9, 2015, at 12:25 PM, Dan Houtz <[email protected]> wrote:

Hi Nischal,

Hi Dan,



Please see inline.

I'll need to re-read the bug text a few more times to fully grasp it, as I 
have yet to totally understand the difference between L2 and L2/L3 networks from 
the Contrail perspective. It seems all networks we create via the Contrail webui 
are L2/L3. I see there is an L2 option when configuring a port, but this requires 
me to create IP/MAC mappings, which doesn't apply to us. Perhaps this is just a 
limitation of the UI and I could accomplish L2-only via API calls?

I gather from your follow-up email on DHCP that you have a good handle on the
difference between L2 and L2/L3 in Contrail now. In terms of DM, the difference
should be that DM will only configure a virtual-switch routing-instance for 
L2-only VNs, whereas it would configure both virtual-switch and vrf 
routing-instances for L2+L3 VNs. Further, we're also adding an L3-only mode 
where DM will create only a vrf.
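To make that concrete, here's a rough sketch of the two instance types (mine, not 
actual DM output - the instance names, VNI, and route target are made up, and the 
evpn protocol config is omitted):

[edit routing-instances]
/* L2-only VN: virtual-switch instance only */
example-vn {
    instance-type virtual-switch;
    vtep-source-interface lo0.0;
    bridge-domains {
        bd-100 {
            vlan-id none;
            vxlan {
                vni 100;
            }
        }
    }
}
/* L2+L3 VN: a vrf is configured alongside, tied in via the BD's
   routing-interface (irb.100) */
example-vn-l3 {
    instance-type vrf;
    interface irb.100;
    vrf-target target:65412:100;
}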












In our use case we are doing overlays between all bare metal servers - we do 
not have any Openstack/VM elements. We will also generally be assigning 
publicly routed IPs directly to all servers and will not require NAT. There 
may be cases where customers need to build 'backnets' with private IP addresses. 
I suspect most of these will be islands where routing won't be required, but we 
may need to extend these to our MXs for routing between private networks 
belonging to a specific customer.











Please see below.

Based on the above thoughts I believe we will probably have 3 primary scenarios:

1. Create a 'public' virtual network in Contrail, assign a public IP block to 
it, and have it be publicly routable - prefixes of the VN exported into inet.0 
and a default from inet.0 imported into the routing instance. It seems like 
creating a network, marking it as external, and not creating floating IPs would 
indicate this type of setup, but I haven't actually looked at how Contrail does 
NAT/floating IPs/etc.










This should have worked, but DM currently treats external VNs as L3 only VNs and
does not create a virtual-switch routing-instance. The bug I created earlier 
tracks this
issue.



https://bugs.launchpad.net/juniperopenstack/+bug/1472699



The bug also describes the routing configuration to allow the VN subnet to be 
added or imported into inet.0, and traffic redirection from inet.0 to the VRF. 
We need to work around some JUNOS quirks wherein 2 route tables cannot have 
static routes that point to each other.  I've assumed that the VRF will have a 
static default that points to inet.0. If you're doing manual configuration, you 
could use rib-groups to import the default route from BGP/OSPF into the VRF, but 
DM uses a static default with a table next-hop since it doesn't want to assume 
anything about the routing protocol being used for inet.0.
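For anyone doing the manual configuration, a rib-groups approach might look 
roughly like the sketch below (the group and policy names are made up, and it 
assumes BGP carries the default route; the "to rib" terms restrict the policy to 
the secondary table so other BGP routes aren't affected):

[edit routing-options]
rib-groups {
    default-to-public1 {
        import-rib [ inet.0 public1.inet.0 ];
        import-policy default-only;
    }
}
[edit protocols bgp family inet unicast]
rib-group default-to-public1;
[edit policy-options]
policy-statement default-only {
    term default-route {
        to rib public1.inet.0;
        from {
            route-filter 0.0.0.0/0 exact;
        }
        then accept;
    }
    term nothing-else {
        to rib public1.inet.0;
        then reject;
    }
}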



Traffic destined to the VRF subnet will be redirected from inet.0 to the VRF 
using filter-based forwarding, by applying an input filter on inet.0. This 
filter matches the destination subnet and redirects traffic to the 
appropriate VRF.

2. Create a 'private' virtual network in Contrail, assign a private IP block to 
it, and have it be routable only between other networks belonging to the tenant 
it is created under. I think this would generally be an all-or-nothing thing - 
every private network belonging to a tenant can reach any other private network 
belonging to that tenant. I could see someone wanting finer-grained control, but 
that's probably not super common, and I would mostly look at inline firewall 
chaining or policy to control it.









This is supported today, and is finer grained than all or nothing. You can use 
policy to
allow traffic between specific VNs. Matches on L4 information are not yet 
supported.





3. Create a 'backnet' virtual network in Contrail, assign no IPs - simply 
extend L2 to the appropriate bare metal servers. Servers could be IP'd ad hoc 
without Contrail being aware at all. The network would be an island with no 
config existing on the MX.








This should work today (modulo issues you found with Praveen) if you do not 
extend
such VNs to the MXs. It will be further refined as part of the fix for bugs 
1472699 and
1471637. If a VN is L2 only, you will not need to configure a subnet for it.








Let me know if the above makes any sense :)

Yes, the requirements sound reasonable and I think that you should be able to 
achieve what you are looking for once 1472699 and 1471637 are fixed.



-Nischal



Thanks!


On Wed, Jul 8, 2015 at 11:58 AM, Nischal Sheth 
<[email protected]> wrote:

Hi Dan,



https://bugs.launchpad.net/juniperopenstack/+bug/1472699




This needs some changes to the device manager.



Could you please take a look and provide feedback on whether
the solution outlined will meet your requirements?




Thanks,
Nischal





On Jul 7, 2015, at 5:54 PM, Dan Houtz <[email protected]> wrote:





Unless I'm overlooking something, it doesn't look like device-manager builds 
the config needed to import a default route from inet.0 into the VN's routing 
instance, or to import the routing instance's prefix into inet.0. In our case we 
are assigning public IPs directly to the bare metal servers and do not require 
NAT. Is it possible for device-manager to configure the import/export policies 
to make these routes available?




Thanks!


Dan 




On Tue, Jul 7, 2015 at 4:55 PM, Nischal Sheth 
<[email protected]> wrote:

Hi Dan,



This confirms that the problem is related to route target filtering. It's
a JUNOSism that bgp.rtarget.0 routes are resolved using inet.3 by
default.



Using gr-* interfaces will result in the creation of inet.3 routes to the CN IPs,
since DM adds CN IPs to the dynamic-tunnel destination network list.
In this case the gr tunnels wouldn't be used in the data plane, since you
are using VXLAN. If you also have VMs, the MX will use gr tunnels for
data traffic to vrouters.
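As an illustration, that destination network list lives under dynamic-tunnels - 
a sketch only (the tunnel name and source address are made up, not actual DM 
output; 10.10.210.140 is the CN address from earlier in this thread):

[edit routing-options dynamic-tunnels]
contrail-tunnels {
    source-address 10.10.210.145;
    gre;
    destination-networks {
        /* CN / vrouter addresses added by DM */
        10.10.210.140/32;
    }
}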



If it's undesirable to create gr- tunnels, you have 2 options:



1) You can configure JUNOS to resolve bgp.rtarget.0 over inet.0 instead
of inet.3 using something like:




root@a3-mx80-1# show routing-options resolution 
rib bgp.rtarget.0 {
    resolution-ribs inet.0;
}



2) Alternately, you can add static routes with discard nexthops for the CN
IP addresses into inet.3.




root@a3-mx80-1# show routing-options rib inet.3 
static {
    route 10.10.210.140/32 discard;
}



[edit]



I would recommend the latter.





-Nischal







On Jul 7, 2015, at 2:36 PM, Dan Houtz <[email protected]> wrote:




root@gw2z0> show route table bgp.rtarget.0



bgp.rtarget.0: 2 destinations, 4 routes (2 active, 0 holddown, 2 hidden)
+ = Active Route, - = Last Active, * = Both

65412:65412:1/96
                   *[RTarget/5] 01:36:41
                      Type Default
                      Local
65412:65412:8000001/96
                   *[RTarget/5] 01:36:41
                      Type Default
                      Local



root@gw2z0> show route table bgp.rtarget.0 hidden



bgp.rtarget.0: 2 destinations, 4 routes (2 active, 0 holddown, 2 hidden)
+ = Active Route, - = Last Active, * = Both

65412:65412:1/96
                    [BGP/170] 02:26:54, localpref 100, from 10.10.210.140
                      AS path: I, validation-state: unverified
                      Unusable
65412:65412:8000001/96
                    [BGP/170] 02:26:54, localpref 100, from 10.10.210.140
                      AS path: I, validation-state: unverified
                      Unusable






I don't believe I should have any gr interfaces as I'm doing VXLAN (no 
MPLS/GRE/etc), correct?








On Tue, Jul 7, 2015 at 4:10 PM, Nischal Sheth 
<[email protected]> wrote:

Hi Dan,



Since there are no routes being sent to the CNs, there may be a problem
with route target filtering, which makes the MX think that the CN is not
interested in any route targets.



Can you run "show route table bgp.rtarget.0" on the MX and check if
there are any hidden routes in that table?  If there are, we need to
check (show interfaces terse gr-*) whether there are any gr-* devices on the
system.  If there aren't any, can you add them using something like:




fpc 1 {
    pic 0 {
        tunnel-services;
    }
}





-Nischal







On Jul 7, 2015, at 12:47 PM, Dan Houtz <[email protected]> wrote:

I was able to get updated MX code (14.2-20150704.0) and now have device-manager 
successfully configuring my MX80. The BGP session between the MX and Contrail 
also seems to be stable now; however, I am having an issue with reachability 
between the MX and hosts connected to TOR switches. Based on initial 
troubleshooting I don't believe Junos is announcing the EVPN route for the IRB 
interface:



root@gw2z0# show groups __contrail__ interfaces irb
gratuitous-arp-reply;
unit 4 {
    family inet {
        address 10.10.210.145/29;
    }
}





root@gw2z0# run show route advertising-protocol bgp 10.10.210.140

bgp.rtarget.0: 2 destinations, 4 routes (2 active, 0 holddown, 2 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
  65412:65412:1/96
*                         Self                         100        I
  65412:65412:8000001/96
*                         Self                         100        I





IRB interface is up:



root@gw2z0# run show interfaces routing | grep irb
irb.4            Up    INET  10.10.210.145

root@gw2z0# run show route 10.10.210.145

_contrail_l3_4_Test.inet.0: 2 destinations, 3 routes (2 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.10.210.145/32 *[Local/0] 00:02:51
                      Local via irb.4








On Sat, Jul 4, 2015 at 1:10 PM, Nischal Sheth 
<[email protected]> wrote:

https://bugs.launchpad.net/juniperopenstack/+bug/1465070



-Nischal



Sent from my iPhone




On Jul 4, 2015, at 11:04 AM, Dan Houtz <[email protected]> wrote:

Great! Next question...




Are there plans to add the ability to apply the 'virtual-gateway-address' knob 
when configuring IRBs? I believe this is the recommended way to configure 
redundant MX gateways, correct?
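For reference, the knob goes under the address on the irb unit - a sketch using 
the irb.4 subnet from elsewhere in this thread (the .150 virtual gateway address 
is made up; each MX keeps its own unique interface address while sharing the 
virtual-gateway-address):

[edit interfaces irb]
unit 4 {
    family inet {
        address 10.10.210.145/29 {
            virtual-gateway-address 10.10.210.150;
        }
    }
}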




-Dan




On Sat, Jul 4, 2015 at 9:15 AM, Vedamurthy Ananth Joshi 
<[email protected]> wrote:



Yes…this should be addressed too.



Vedu





From: Dan Houtz <[email protected]>

Date: Saturday, July 4, 2015 at 7:38 PM

To: Vedamurthy Ananth Joshi <[email protected]>

Cc: OpenContrail Users List - 2 <[email protected]>

Subject: Re: [Users] Problem with Device Manager's VXLAN config in Contrail 2.2

Vedu,



Thank you for the information. I have reached out to our SE to see about 
getting updated code. I am also seeing the following with BGP sessions between 
Contrail and MX since moving to 2.2:




Jul  4 14:06:47  gw2z0 rpd[86503]: RPD_BGP_NEIGHBOR_STATE_CHANGED: BGP peer 
10.10.210.140 (Internal AS 65412) changed state from OpenConfirm to Established 
(event RecvKeepAlive) (instance master)
Jul  4 14:06:47  gw2z0 rpd[86503]: bgp_read_v4_update:10535: NOTIFICATION sent 
to 10.10.210.140 (Internal AS 65412): code 3 (Update Message Error) subcode 9 
(error with optional attribute), Data:  c0 16 09 10 fc 00
Jul  4 14:06:47  gw2z0 rpd[86503]: RPD_BGP_NEIGHBOR_STATE_CHANGED: BGP peer 
10.10.210.140 (Internal AS 65412) changed state from Established to Idle (event 
RecvUpdate) (instance master)
Jul  4 14:06:47  gw2z0 rpd[86503]: Received malformed update from 10.10.210.140 
(Internal AS 65412)
Jul  4 14:06:47  gw2z0 rpd[86503]:   Family evpn, prefix 
3:10.10.210.140:1::4::10.10.214.65/152
Jul  4 14:06:47  gw2z0 rpd[86503]:   Malformed Attribute PMSI(22) flag 0xc0 
length 9.
Jul  4 14:06:52  gw2z0 rpd[86503]: bgp_parse_open_options: peer 
10.10.210.140+50620 (proto): unsupported AF 1 SAFI 243




Is this something that will also be fixed with the new MX code?



Thanks!
Dan



On Sat, Jul 4, 2015 at 8:02 AM, Vedamurthy Ananth Joshi 
<[email protected]> wrote:



Dan,
Ingress-node-replication was deliberately not pushed by Device Manager. 
The corresponding MX image can be any daily build equal to or greater than 
14.2-20150627.0. 



Vedu





From: Dan Houtz <[email protected]>

Date: Saturday, July 4, 2015 at 1:47 PM

To: OpenContrail Users List - 2 <[email protected]>

Subject: [Users] Problem with Device Manager's VXLAN config in Contrail 2.2

Has anyone else tried configuring EVPN VXLAN on an MX using device manager in 
Contrail 2.2? In my testing, the configuration being pushed via netconf is not 
valid:




root@gw2z0# commit check
[edit routing-instances _contrail_l2_4_Test bridge-domains bd-4]
  'vxlan'
    multicast-group or ovsdb-managed or ingress-node-replication should be 
enabled
error: configuration check-out failed: (statements constraint check failed)




To fix this you must manually configure ingress-node-replication:



root@gw2z0# set groups __contrail__ routing-instances _contrail_l2_4_Test 
bridge-domains bd-4 vxlan ingress-node-replication






root@gw2z0# commit check
configuration check succeeds






Is this possibly MX junos version specific? I am using a daily build given to 
me by my SE as I don't believe any released versions support VXLAN:




root@gw2z0# run show version
Hostname: gw2z0
Model: mx80-48t
Junos: 14.2-20150527_rpd_v2_evpn_vnid.0




I doubt it matters, but it's also odd that device manager is applying this since 
I'm using VXLAN:




root@gw2z0# show groups __contrail__ protocols mpls
interface all;




Thanks!
Dan

_______________________________________________

Users mailing list

[email protected]

http://lists.opencontrail.org/mailman/listinfo/users_lists.opencontrail.org