Hi Shankar,
I suggest using the allowed-address-pairs extension and the ML2 plugin with
the OFAgent driver on Icehouse (though I have not tried it myself).
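If your plugin supports the extension, registering the extra address on the
port should be enough; a minimal sketch (the port ID is a placeholder, and I
have not verified this against the OFAgent setup):
----------------------------------------
$ # allow packets with source IP 10.1.0.10 on an existing port
$ neutron port-update <PORT_ID> \
    --allowed-address-pairs type=dict list=true ip_address=10.1.0.10
----------------------------------------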
By the way, the following is my detailed operation log of forcing
IP spoofing to work.
I tried this on a fresh Ubuntu 13.10.
This is the localrc I used; note that it disables Neutron security groups.
----------------------------------------
HOST_IP=127.0.0.1
SERVICE_HOST=127.0.0.1
disable_service n-net
enable_service neutron q-svc q-agt q-l3 q-dhcp q-meta q-lbaas
enable_service ryu
FLOATING_RANGE=192.168.1.0/24
PUBLIC_NETWORK_GATEWAY=192.168.1.1
Q_HOST=$SERVICE_HOST
Q_USE_SECGROUP=False
Q_PLUGIN=ryu
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
RYU_API_HOST=$SERVICE_HOST
RYU_OFP_HOST=$SERVICE_HOST
MYSQL_PASSWORD=mysql
RABBIT_PASSWORD=rabbit
SERVICE_TOKEN=service
SERVICE_PASSWORD=admin
ADMIN_PASSWORD=admin
RYU_APPS=ryu.app.gre_tunnel,ryu.app.quantum_adapter,ryu.app.rest,ryu.app.rest_conf_switch,ryu.app.rest_tunnel,ryu.app.tunnel_port_updater,ryu.app.rest_quantum
----------------------------------------
After running devstack, I created a second network to investigate the
communication. This keeps the default network free for SSH access to the VMs.
(@HOST means the operation is run on the host running devstack; @VM2 means
it is run on vm2.)
@HOST
----------------------------------------
$ neutron net-create private2
Created a new network:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| admin_state_up | True |
| id | 1bccc324-618a-4967-b07b-aa6d56e61372 |
| name | private2 |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 813c46c214b143f5a9b8d5fa5c7026b4 |
+----------------+--------------------------------------+
$ neutron subnet-create --name subnet-private2 --ip-version 4 \
    --gateway 10.1.0.1 private2 10.1.0.0/24
Created a new subnet:
+------------------+--------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.1.0.2", "end": "10.1.0.254"} |
| cidr | 10.1.0.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.1.0.1 |
| host_routes | |
| id | f501fc25-02b2-4716-bd9d-fa2f438ac5f2 |
| ip_version | 4 |
| name | subnet-private2 |
| network_id | 1bccc324-618a-4967-b07b-aa6d56e61372 |
| tenant_id | 813c46c214b143f5a9b8d5fa5c7026b4 |
+------------------+--------------------------------------------+
$ neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 1bccc324-618a-4967-b07b-aa6d56e61372 | private2 | f501fc25-02b2-4716-bd9d-fa2f438ac5f2 10.1.0.0/24    |
| aa9e8a37-4665-4e95-8b54-18fe15e2a464 | private  | 01319da0-f766-4d8c-a42f-0661f2a40069 10.0.0.0/24    |
| ebfcc3ff-870a-4342-a454-24a89fbcccbf | public   | 2004f838-9dd8-4b5f-b792-5a69da6de888 192.168.1.0/24 |
+--------------------------------------+----------+-----------------------------------------------------+
----------------------------------------
Then I created two VMs, each with two NICs.
@HOST
----------------------------------------
$ nova boot --flavor m1.nano --image bf452563-2a66-480a-9bd5-f07dbab9217f \
    --nic net-id=aa9e8a37-4665-4e95-8b54-18fe15e2a464 \
    --nic net-id=1bccc324-618a-4967-b07b-aa6d56e61372 vm1
$ nova boot --flavor m1.nano --image bf452563-2a66-480a-9bd5-f07dbab9217f \
    --nic net-id=aa9e8a37-4665-4e95-8b54-18fe15e2a464 \
    --nic net-id=1bccc324-618a-4967-b07b-aa6d56e61372 vm2
----------------------------------------
Allow TCP and ICMP traffic.
Neutron's security groups are disabled, but Nova's are still available.
@HOST
----------------------------------------
$ nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 1 | 65535 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
----------------------------------------
Create floating IPs and associate them with each VM's port on the default
network for SSH login.
@HOST
----------------------------------------
$ neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.3 |
| floating_network_id | ebfcc3ff-870a-4342-a454-24a89fbcccbf |
| id | 56cdf045-4648-4993-a896-6c409d91a9e3 |
| port_id | |
| router_id | |
| tenant_id | 813c46c214b143f5a9b8d5fa5c7026b4 |
+---------------------+--------------------------------------+
$ neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.4 |
| floating_network_id | ebfcc3ff-870a-4342-a454-24a89fbcccbf |
| id | 60054857-6129-4b0b-97f5-66872f7877e3 |
| port_id | |
| router_id | |
| tenant_id | 813c46c214b143f5a9b8d5fa5c7026b4 |
+---------------------+--------------------------------------+
$ neutron port-list
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 1fdaab5f-cc2c-43e9-83d5-6bf29cf34f45 |      | fa:16:3e:c1:85:21 | {"subnet_id": "f501fc25-02b2-4716-bd9d-fa2f438ac5f2", "ip_address": "10.1.0.2"}    |
| 20c0b808-83f3-4ae4-8d9a-bd8e078f7621 |      | fa:16:3e:75:97:8a | {"subnet_id": "f501fc25-02b2-4716-bd9d-fa2f438ac5f2", "ip_address": "10.1.0.4"}    |
| 41d18013-4902-470b-82c5-05534f2151a6 |      | fa:16:3e:c6:c0:83 | {"subnet_id": "2004f838-9dd8-4b5f-b792-5a69da6de888", "ip_address": "192.168.1.4"} |
| 4ce8dab2-2adf-4421-81c9-f3f66b0a6f2f |      | fa:16:3e:ad:f8:53 | {"subnet_id": "01319da0-f766-4d8c-a42f-0661f2a40069", "ip_address": "10.0.0.1"}    |
| 66e16428-493b-4539-b939-63be73ba9416 |      | fa:16:3e:c1:36:62 | {"subnet_id": "01319da0-f766-4d8c-a42f-0661f2a40069", "ip_address": "10.0.0.3"}    |
| 770f94f4-623d-46b9-9eb3-eec78e67d32f |      | fa:16:3e:5a:8d:e3 | {"subnet_id": "01319da0-f766-4d8c-a42f-0661f2a40069", "ip_address": "10.0.0.2"}    |
| ba171dd7-699c-49b6-9f7e-9036a784996c |      | fa:16:3e:1d:29:32 | {"subnet_id": "f501fc25-02b2-4716-bd9d-fa2f438ac5f2", "ip_address": "10.1.0.3"}    |
| c3bbf4a0-280e-4a4a-9930-4de560744085 |      | fa:16:3e:b1:af:66 | {"subnet_id": "2004f838-9dd8-4b5f-b792-5a69da6de888", "ip_address": "192.168.1.3"} |
| da238085-ca15-4557-81cc-63a310626737 |      | fa:16:3e:31:ee:10 | {"subnet_id": "2004f838-9dd8-4b5f-b792-5a69da6de888", "ip_address": "192.168.1.2"} |
| fea60867-a33c-4186-8cc9-b2396d95eb62 |      | fa:16:3e:ae:86:e6 | {"subnet_id": "01319da0-f766-4d8c-a42f-0661f2a40069", "ip_address": "10.0.0.4"}    |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
$ neutron floatingip-associate 56cdf045-4648-4993-a896-6c409d91a9e3 \
    66e16428-493b-4539-b939-63be73ba9416
Associated floatingip 56cdf045-4648-4993-a896-6c409d91a9e3
$ neutron floatingip-associate 60054857-6129-4b0b-97f5-66872f7877e3 \
    fea60867-a33c-4186-8cc9-b2396d95eb62
Associated floatingip 60054857-6129-4b0b-97f5-66872f7877e3
$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                                         |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------+
| 57cb8ec4-cf29-415f-aa76-0dcd16f01aa7 | vm1  | ACTIVE | -          | Running     | private2=10.1.0.3; private=10.0.0.3, 192.168.1.3 |
| 3c9e2d3d-31eb-475e-82e2-072a7519b1c8 | vm2  | ACTIVE | -          | Running     | private2=10.1.0.4; private=10.0.0.4, 192.168.1.4 |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------+
----------------------------------------
SSH into vm2 and configure eth1. Do the same on vm1.
@VM2
----------------------------------------
$ ssh [email protected]
[email protected]'s password: cubswin:)
$ ifconfig -a
eth0      Link encap:Ethernet  HWaddr FA:16:3E:AE:86:E6
          inet addr:10.0.0.4  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:feae:86e6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:234 errors:0 dropped:0 overruns:0 frame:0
          TX packets:236 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:29495 (28.8 KiB)  TX bytes:24684 (24.1 KiB)

eth1      Link encap:Ethernet  HWaddr FA:16:3E:75:97:8A
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
$ sudo ifconfig eth1 10.1.0.4
----------------------------------------
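For vm1 the analogous step (not shown in this log) would presumably be to log
in via its floating IP and assign its private2 address:
----------------------------------------
$ ssh [email protected]
$ sudo ifconfig eth1 10.1.0.3
----------------------------------------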
Check which bridge vm2 is connected to.
The bridge name is 'qbr' plus the first part of the port ID;
here the bridge is qbr20c0b808-83.
@HOST
----------------------------------------
$ neutron port-list | grep '10.1.0.4'
| 20c0b808-83f3-4ae4-8d9a-bd8e078f7621 |      | fa:16:3e:75:97:8a | {"subnet_id": "f501fc25-02b2-4716-bd9d-fa2f438ac5f2", "ip_address": "10.1.0.4"} |
$ brctl show
bridge name       bridge id           STP enabled   interfaces
qbr20c0b808-83    8000.229cfc3c7b25   no            qvb20c0b808-83
                                                    tap20c0b808-83
qbr66e16428-49    8000.d627e6cf5e7b   no            qvb66e16428-49
                                                    tap66e16428-49
qbrba171dd7-69    8000.a2c153560086   no            qvbba171dd7-69
                                                    tapba171dd7-69
qbrfea60867-a3    8000.361319500d42   no            qvbfea60867-a3
                                                    tapfea60867-a3
----------------------------------------
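As a side note, the bridge name can be computed from the port ID without
grepping brctl output; Nova uses 'qbr' plus the first 11 characters of the
port UUID (a quick shell sketch, assuming that naming convention):
----------------------------------------
$ PORT_ID=20c0b808-83f3-4ae4-8d9a-bd8e078f7621
$ echo "qbr${PORT_ID:0:11}"
qbr20c0b808-83
----------------------------------------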
After starting tcpdump on the host, ping vm1 from vm2.
This succeeds.
@VM2
----------------------------------------
$ ping 10.1.0.3
PING 10.1.0.3 (10.1.0.3): 56 data bytes
64 bytes from 10.1.0.3: seq=0 ttl=64 time=1.822 ms
64 bytes from 10.1.0.3: seq=1 ttl=64 time=1.118 ms
^C
--- 10.1.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.118/1.470/1.822 ms
----------------------------------------
@HOST
----------------------------------------
$ sudo tcpdump -i qbr20c0b808-83
tcpdump: WARNING: qbr20c0b808-83: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qbr20c0b808-83, link-type EN10MB (Ethernet), capture size 65535 bytes
20:02:27.010535 ARP, Request who-has 10.1.0.3 tell 10.1.0.4, length 28
20:02:27.011480 ARP, Reply 10.1.0.3 is-at fa:16:3e:1d:29:32 (oui Unknown), length 28
20:02:27.011743 IP 10.1.0.4 > 10.1.0.3: ICMP echo request, id 5634, seq 0, length 64
20:02:27.012033 IP 10.1.0.3 > 10.1.0.4: ICMP echo reply, id 5634, seq 0, length 64
20:02:28.012804 IP 10.1.0.4 > 10.1.0.3: ICMP echo request, id 5634, seq 1, length 64
20:02:28.013340 IP 10.1.0.3 > 10.1.0.4: ICMP echo reply, id 5634, seq 1, length 64
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel
----------------------------------------
Then I changed the IP address of eth1 on vm2 and pinged vm1.
This fails; no packets reach the bridge.
@VM2
----------------------------------------
$ sudo ifconfig eth1 10.1.0.10
$ ping 10.1.0.3
PING 10.1.0.3 (10.1.0.3): 56 data bytes
^C
--- 10.1.0.3 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
----------------------------------------
@HOST
----------------------------------------
$ sudo tcpdump -i qbr20c0b808-83
tcpdump: WARNING: qbr20c0b808-83: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qbr20c0b808-83, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
----------------------------------------
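To see where the packets are dropped, note that libvirt implements nwfilter
rules as ebtables rules attached to the tap device; listing the nat table
should show the filter chains (I did not capture this output, so take it as a
pointer rather than a transcript):
----------------------------------------
$ # chains named after the tap device (e.g. tap20c0b808-83) hold the rules
$ sudo ebtables -t nat -L
----------------------------------------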
Next, modify the nova-base nwfilter.
I deleted the no-ip-spoofing and no-arp-spoofing filter references.
@HOST
----------------------------------------
$ virsh nwfilter-dumpxml nova-base
<filter name='nova-base' chain='root'>
  <uuid>83f28c40-429a-4e50-80a7-0a2b70a2d210</uuid>
  <filterref filter='no-mac-spoofing'/>
  <filterref filter='no-ip-spoofing'/>
  <filterref filter='no-arp-spoofing'/>
  <filterref filter='allow-dhcp-server'/>
</filter>
$ virsh nwfilter-edit nova-base
Network filter nova-base XML configuration edited.
$ virsh nwfilter-dumpxml nova-base
<filter name='nova-base' chain='root'>
  <uuid>83f28c40-429a-4e50-80a7-0a2b70a2d210</uuid>
  <filterref filter='no-mac-spoofing'/>
  <filterref filter='allow-dhcp-server'/>
</filter>
----------------------------------------
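If you prefer a non-interactive way to make the same change, dumping, editing,
and redefining the filter should be equivalent (a sketch; I used the
interactive editor above):
----------------------------------------
$ virsh nwfilter-dumpxml nova-base > nova-base.xml
$ # drop the two anti-spoofing filterref lines
$ sed -i '/no-ip-spoofing/d; /no-arp-spoofing/d' nova-base.xml
$ virsh nwfilter-define nova-base.xml
----------------------------------------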
I pinged vm1 from vm2 again.
This time it succeeded.
@VM2
----------------------------------------
$ ping 10.1.0.3
PING 10.1.0.3 (10.1.0.3): 56 data bytes
64 bytes from 10.1.0.3: seq=0 ttl=64 time=1.657 ms
64 bytes from 10.1.0.3: seq=1 ttl=64 time=1.073 ms
64 bytes from 10.1.0.3: seq=2 ttl=64 time=0.411 ms
64 bytes from 10.1.0.3: seq=3 ttl=64 time=1.206 ms
^C
--- 10.1.0.3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.411/1.086/1.657 ms
----------------------------------------
@HOST
----------------------------------------
$ sudo tcpdump -i qbr20c0b808-83
tcpdump: WARNING: qbr20c0b808-83: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qbr20c0b808-83, link-type EN10MB (Ethernet), capture size 65535 bytes
20:03:25.801990 ARP, Request who-has 10.1.0.3 tell 10.1.0.10, length 28
20:03:25.802739 ARP, Reply 10.1.0.3 is-at fa:16:3e:1d:29:32 (oui Unknown), length 28
20:03:25.803043 IP 10.1.0.10 > 10.1.0.3: ICMP echo request, id 6402, seq 0, length 64
20:03:25.803289 IP 10.1.0.3 > 10.1.0.10: ICMP echo reply, id 6402, seq 0, length 64
20:03:26.804166 IP 10.1.0.10 > 10.1.0.3: ICMP echo request, id 6402, seq 1, length 64
20:03:26.804549 IP 10.1.0.3 > 10.1.0.10: ICMP echo reply, id 6402, seq 1, length 64
20:03:27.805416 IP 10.1.0.10 > 10.1.0.3: ICMP echo request, id 6402, seq 2, length 64
20:03:27.805597 IP 10.1.0.3 > 10.1.0.10: ICMP echo reply, id 6402, seq 2, length 64
20:03:28.806416 IP 10.1.0.10 > 10.1.0.3: ICMP echo request, id 6402, seq 3, length 64
20:03:28.807023 IP 10.1.0.3 > 10.1.0.10: ICMP echo reply, id 6402, seq 3, length 64
----------------------------------------
Thanks,
Kaneko
2014-04-04 13:14 GMT+09:00 arjun jayaprakash <[email protected]>:
> Thanks Kaneko,
>
> I tried flushing both the ARP and IP tables, but got the same results. Any
> idea when those OpenStack fixes will be available?
>
> Regards,
> Shankar.
> On Thursday, 3 April 2014 5:31 PM, Yoshihiro Kaneko <[email protected]> wrote:
> Hi,
>
> 2014-04-02 19:56 GMT+09:00 arjun jayaprakash <[email protected]>:
>> Chain ryu_neutron_agen-s85e1f2a9-c (1 references)
>> target     prot opt source       destination
>> RETURN     all  --  10.0.0.3     anywhere     MAC FA:16:3E:73:59:71
>> DROP       all  --  anywhere     anywhere
>> and I am not able to send packets from the guest VMs.
>>
>> Environment: Ubuntu 13.10 + DevStack Havana (single node setup).
>> Need to use a VM as a proxy to examine packets before forwarding them to
>> the original destination. Packets will be rerouted to the proxy VM using SDN:
>> [VM1] --> [Proxy VM] --> [VM2].
>> However, the anti-spoofing rules prevent me from doing this. (Rant mode on:
>> did the OpenStack developers not envision that researchers may want to use
>> VMs as proxies? Why did they make it almost impossible to disable the
>> anti-spoofing mechanism?)
>
> How about the allowed-address-pairs extension?
>
> http://docs.openstack.org/admin-guide-cloud/content/section_allowed_address_pairs.html
> But the Ryu plugin does not support this extension, unfortunately...
>
>> Tried the following things:
>> a) Flushing iptables ... no go. iptables shows up as completely flushed,
>> but the blockage is still there for spoofed packets.
>> b) Edited the virt/libvirt/firewall.py file to set base_filter to nova-vpn
>> (which should not get any anti-spoof filters). Did a reset of q-svc and
>> n-api, but no go.
>> c) In the localrc file, set Q_USE_SECGROUP=False. I now see that iptables
>> does not have those anti-spoofing rules listed. Still, the spoofed packets
>> do not go through.
>> d) Did a "sudo virsh nwfilter-edit nova-base" and deleted the anti-spoofing
>> lines in the XML file, and also deleted the DROP rules from iptables (using
>> iptables-save > dump, edit dump, iptables-restore < dump).
>
> Did you delete no-arp-spoofing as well as no-ip-spoofing?
> I created two VMs (vm1 and vm2) and changed the IP address of each VM by
> hand on the VM.
> When I deleted no-ip-spoofing and no-arp-spoofing from nova-base, ping
> succeeded.
> But I do not think this will be a solution for your problem, because it is
> a crude operation.
> I think it is better to wait for the release of the bug fix and to consider
> using the allowed-address-pairs extension.
> https://bugs.launchpad.net/nova/+bug/1112912
>
> Thanks,
> Kaneko
>
>> Still nothing happened.
>> What else can I try?
>> Thanks,
>> Shankar.
------------------------------------------------------------------------------
_______________________________________________
Ryu-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/ryu-devel