On Tue, 26 Nov 2019 at 04:24, Rajak, Vishal <[email protected]> wrote:
>
> Hi,
>
> We are trying to bring up IPsec over VXLAN between two nodes of an
> OpenStack cluster in our lab environment.

Since what you describe below does *much more* than a trivial IPsec
VXLAN tunnel setup:

1. Have you tried to set up the same thing with plain VXLAN and
   succeeded? I think you are jumping to the conclusion too early that
   you have an IPsec problem here.

2. Have you tried to set up a simple IPsec VXLAN tunnel and succeeded
   without setting up an OpenFlow controller, fail-mode=secure and
   patch ports? A simple IPsec setup really does not need any of that.

3. Also, did you check ovs-monitor-ipsec.log and the Libreswan logs
   for errors?
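For the plain-VXLAN check in point 1, something like the following is
usually enough (a sketch: br-test, vxlan0 and the 192.0.2.x test
addresses are placeholder names I made up; only the 10.2.2.x addresses
are from your mail):

    # On 10.2.2.1 -- plain VXLAN, no IPsec, no controller, no patch ports:
    ovs-vsctl add-br br-test
    ovs-vsctl add-port br-test vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=10.2.2.2
    ip addr add 192.0.2.1/24 dev br-test
    ip link set br-test up

    # On 10.2.2.2, the mirror image, then ping across the tunnel:
    ovs-vsctl add-br br-test
    ovs-vsctl add-port br-test vxlan0 -- set interface vxlan0 \
        type=vxlan options:remote_ip=10.2.2.1
    ip addr add 192.0.2.2/24 dev br-test
    ip link set br-test up
    ping 192.0.2.1

If that works, add options:psk=swordfish to the vxlan0 interface on
both sides and re-test, before involving Neutron at all. For point 3,
besides /var/log/openvswitch/ovs-monitor-ipsec.log (the path may
differ on your build), these usually tell you quickly whether the SAs
ever came up:

    ovs-appctl -t ovs-monitor-ipsec tunnels/show  # ovs-monitor-ipsec's view
    ipsec status                                  # Libreswan's view of the SAs
    ipsec whack --trafficstatus                   # is any ESP traffic flowing?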
> Note: There are only 2 nodes in the cluster (compute and controller).
>
> Following are the steps followed to bring up OVS with IPsec.
>
> Link: http://docs.openvswitch.org/en/latest/tutorials/ipsec/
>
> Commands used on the controller node (IP - 10.2.2.1):
>
> a. dnf install python2-openvswitch libreswan \
>        "kernel-devel-uname-r == $(uname -r)"
>
> b. yum install python-openvswitch - to install
>    python-openvswitch-2.11.0-4.el7.x86_64, as it has support for IPsec.
>
> c. Download the openvswitch 2.11 rpms and put them on the server.
>
> d. Install the openvswitch rpms on the server, e.g.
>    rpm -ivh openvswitch-ipsec-2.11.0-4.el7.x86_64.rpm - to install
>    the ovs-ipsec rpm.
>
> e. iptables -A INPUT -p esp -j ACCEPT
> f. iptables -A INPUT -p udp --dport 500 -j ACCEPT

Do you have firewalld running? If so, then adding iptables rules
directly is not the right way to ACCEPT traffic; you have to use
firewall-cmd.
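For example, something like this should be roughly equivalent (a
sketch, assuming the default zone; on the builds I have checked, the
"ipsec" service covers udp/500 plus the AH and ESP protocols -- verify
with "firewall-cmd --info-service=ipsec"):

    firewall-cmd --permanent --add-service=ipsec  # IKE (udp/500) + AH/ESP
    firewall-cmd --permanent --add-port=4500/udp  # NAT-T, only if a NAT is in the path
    firewall-cmd --reload                         # make the permanent rules live

Direct iptables rules are lost the next time firewalld reloads its own
rule set.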
> g. cp -r /usr/share/openvswitch/ /usr/local/share/

I don't understand why you have to run the command above. Are you
mixing self-built packages with official packages that have different
path prefixes?

> h. systemctl start openvswitch-ipsec.service
>
> i. ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan \
>        type=vxlan options:remote_ip=10.2.2.2 options:psk=swordfish
>
> Commands used on the compute node (IP - 10.2.2.2):
>
> Link: https://devinpractice.com/2016/10/18/open-vswitch-introduction-part-1/
>
> a. ovs-vsctl add-br br-ex
> b. ip link set br-ex up
> c. ovs-vsctl add-port br-ex enp1s0f1
> d. ip addr del 10.2.2.2/24 dev enp1s0f1
> e. ip addr add 10.2.2.2/24 dev br-ex
> f. ip route add default via 10.2.2.254 dev br-ex
> g. Same steps as on the controller node above for the IPsec
>    configuration.
>
> After bringing up IPsec on the compute node, connectivity for the
> whole 10.2.2.0 network went down.

Commands a-f don't have anything to do with IPsec. I can see how
network connectivity could go down there, at the point where you move
the IP address from enp1s0f1 to br-ex.
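If you redo that step, one way to avoid cutting off the branch you are
sitting on is to batch the whole readdressing into one command that
survives a dropped SSH session (a sketch using the names and addresses
from your mail; running it from the server console is safer still):

    # Moves 10.2.2.2 from enp1s0f1 to br-ex in one go; nohup keeps it
    # running even if this SSH session dies half way through.
    nohup sh -c '
        ip addr del 10.2.2.2/24 dev enp1s0f1
        ip addr add 10.2.2.2/24 dev br-ex
        ip link set br-ex up
        ip route replace default via 10.2.2.254 dev br-ex
    ' >/tmp/br-ex-move.log 2>&1 &

Putting the same configuration into the ifcfg files instead would also
survive a network service restart, which is what wiped your ovs-vsctl
changes later on.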
> Following are the steps followed to resolve the issue of the 10.2.2.0
> network going down due to the creation of the bridge on the compute
> node.
>
> Replicated the "ovs-vsctl show" output of the controller node on the
> compute node by executing the following commands:
>
> 1. ovs-vsctl set-controller br-ex tcp:127.0.0.1:6633
> 2. ovs-vsctl -- set Bridge br-ex fail-mode=secure
>
> After running the above 2 commands, the network connectivity from the
> compute node to the outside network went down and the other servers
> came up.
>
> 3. ovs-vsctl add-port br-ex phy-br-ex -- set interface phy-br-ex
>    type=patch options:peer=int-br-ex
> 4. ovs-vsctl add-port br-int int-br-ex -- set interface int-br-ex
>    type=patch options:peer=phy-br-ex

Is phy-br-ex the physical bridge and int-br-ex the integration bridge?
If so, then it seems odd that you are trying to connect them with
patch ports and use IPsec (or, for that matter, even plain) tunneling.

> After running the above 2 commands as well, the compute node still
> could not reach the outside network.
>
> Compared the files in network-scripts on the compute node and the
> controller node and found some differences.
>
> The compute node didn't have an ifcfg-br-ex file, so added it on the
> compute node. Made some changes in the ifcfg-enp1s0f1 file after
> comparing it with the same file present on the controller node.
>
> d. Restarted the network service.
>
> e. After restarting the network service, the changes which were made
>    with ovs-vsctl were removed and only the bridge br-ex which was
>    created on the physical interface remained.
>
> f. The compute node started pinging the outer network as well.
>
> g. Ran the command to establish the IPsec VXLAN:
>
>    ovs-vsctl add-port br-ex ipsec_vxlan -- set interface ipsec_vxlan
>        type=vxlan options:remote_ip=10.2.2.1 options:psk=swordfish
>
> h. After the port was added for the IPsec VXLAN, the network went
>    down again.
>
> i. Removed the ipsec_vxlan port.
>
> j. Now the compute node has the bridge over the physical interface
>    and it is pinging the outside network as well.
>
> k. Tried pinging from a VM on the compute node to a VM on the
>    controller node. The ping didn't work.
>
>    1. Removed the VM from the compute node and tried creating another
>       instance. Creation of another instance failed.
>    2. Debugged the issue and found out that neutron-openvswitch-agent
>       was not running.
>    3. Started neutron-openvswitch-agent again. After starting
>       neutron-openvswitch-agent, the creation of the VM was
>       successful.
>    4. Still the VMs are not pinging.
>
> l. Compared /etc/neutron/plugins/ml2/openvswitch_agent.ini on both
>    the controller and compute nodes and found some differences. After
>    resolving those differences and restarting
>    neutron-openvswitch-agent, the phy-br-ex port, the controller
>    tcp:127.0.0.1:6633 and fail-mode=secure automatically got added to
>    bridge br-ex.
>
> Still the VMs are not pinging. IPsec is not established and the VMs
> are not pinging.
>
> Regards,
> Vishal.

_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss