Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-25 Thread Leen Besselink
Balu,

Have you tried looking in the /var/lib/dhcp directory (the directory might
depend on the DHCP client you are using) of the Ubuntu image?

As this isn't a clean image but has been connected to another network, maybe a
previous DHCP server told it to add the route, and now the client is just
re-using an old lease?
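A quick way to look for that (a sketch; lease file names and formats vary by distribution and DHCP client, so the synthetic file below only illustrates what a suspicious lease looks like):

```shell
# Illustrative check for a stale lease carrying a 169.254.0.0/16 route.
# On a real guest the files live under /var/lib/dhcp/ (e.g.
# dhclient.eth0.leases); here a synthetic lease keeps the demo self-contained.
mkdir -p /tmp/lease-demo
cat > /tmp/lease-demo/dhclient.eth0.leases <<'EOF'
lease {
  interface "eth0";
  fixed-address 192.168.2.3;
  option rfc3442-classless-static-routes 16.169.254,192.168.2.2;
}
EOF
# Any lease file that mentions 169.254 deserves a closer look:
grep -l "169.254" /tmp/lease-demo/*.leases
```

On the real guest, point the grep at /var/lib/dhcp/*.leases instead of the demo directory.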

On Wed, Apr 24, 2013 at 10:13:52PM -0700, Aaron Rosen wrote:
 I'm not sure, but if it works fine with the Ubuntu cloud image and not with
 your Ubuntu image, then there is something in your image adding that route.
 
 
 On Wed, Apr 24, 2013 at 10:06 PM, Balamurugan V G
 balamuruga...@gmail.com wrote:
 
  Hi Aaron,
 
  I tried the image you pointed to and it worked fine out of the box. That
  is, it did not get the route to 169.254.0.0/16 on boot, and I am able to
  retrieve info from the metadata service. The image I was using earlier is
  an Ubuntu 12.04 LTS desktop image. What do you think could be wrong with
  my image? It's almost a vanilla Ubuntu image; I have not installed much on
  it.
 
  Here are the quantum details you asked for, and more. This was taken
  before I tried the image you pointed to. And by the way, I have not added
  any host route either.
 
  root@openstack-dev:~# quantum router-list
  +--------------------------------------+---------+--------------------------------------------------------+
  | id                                   | name    | external_gateway_info                                  |
  +--------------------------------------+---------+--------------------------------------------------------+
  | d9e87e85-8410-4398-9ddd-2dbc36f4b593 | router1 | {"network_id": "e8862e1c-0233-481f-b284-b027039feef7"} |
  +--------------------------------------+---------+--------------------------------------------------------+
  root@openstack-dev:~# quantum net-list
  +--------------------------------------+---------+-----------------------------------------------------+
  | id                                   | name    | subnets                                             |
  +--------------------------------------+---------+-----------------------------------------------------+
  | c4a7475e-e33f-47d0-a6ff-d0cf50c012d7 | net1    | ecdfe002-658e-4174-a33c-934ba09179b7 192.168.2.0/24 |
  | e8862e1c-0233-481f-b284-b027039feef7 | ext_net | 783e6a47-d7e0-46ba-9c2a-55a92406b23b 10.5.12.20/24  |
  +--------------------------------------+---------+-----------------------------------------------------+
  root@openstack-dev:~# quantum subnet-list
  +--------------------------------------+------+----------------+--------------------------------------------------+
  | id                                   | name | cidr           | allocation_pools                                 |
  +--------------------------------------+------+----------------+--------------------------------------------------+
  | 783e6a47-d7e0-46ba-9c2a-55a92406b23b |      | 10.5.12.20/24  | {"start": "10.5.12.21", "end": "10.5.12.25"}     |
  | ecdfe002-658e-4174-a33c-934ba09179b7 |      | 192.168.2.0/24 | {"start": "192.168.2.2", "end": "192.168.2.254"} |
  +--------------------------------------+------+----------------+--------------------------------------------------+
  root@openstack-dev:~# quantum port-list
  +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                         |
  +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
  | 193bb8ee-f50d-4b1f-87ae-e033c1730953 |      | fa:16:3e:91:3d:c0 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.21"} |
  | 19bce882-c746-497b-b401-dedf5ab605b2 |      | fa:16:3e:97:89:f6 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.23"} |
  | 41ab9b15-ddc9-4a00-9a34-2e3f14e7e92f |      | fa:16:3e:45:58:03 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.2"} |
  | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |      | fa:16:3e:83:a7:e4 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.3"} |
  | 59e69986-6e8a-4f1e-a754-a1d421cdebde |      | fa:16:3e:91:ee:76 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.1"} |
  | 65167653-f6ff-438b-b465-f5dcc8974549 |      | fa:16:3e:a7:77:0b | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.24"} |
  +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+
  root@openstack-dev:~# quantum floatingip-list
  +--------------------------------------+------------------+---------------------+
  | id                                   | fixed_ip_address | floating_ip_address |
Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-25 Thread Leen Besselink
On Thu, Apr 25, 2013 at 12:45:03PM +0530, Balamurugan V G wrote:
 Hi Leen,
 
 I do not have any other DHCP server which can do this, other than the one
 created by quantum. In fact, if I delete the route manually and restart the
 network (interface down and up), I get the route added back. Please refer
 below:
 

Then another cause could be that you added something in /etc to try and fix
a problem you had before.

Have you checked that yet?

grep -R 169.254 /etc/ 2>/dev/null

Just to be sure.
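Beyond the raw grep, on Ubuntu guests the usual places for a hard-coded route are /etc/network/interfaces (an `up route add ...` line) and dhclient hook scripts under /etc/dhcp/. A self-contained illustration of what a hit looks like (the sample file is synthetic; on a real guest run the grep against /etc directly):

```shell
# Synthetic /etc/network/interfaces showing a baked-in link-local route.
mkdir -p /tmp/etc-demo/network
cat > /tmp/etc-demo/network/interfaces <<'EOF'
auto eth0
iface eth0 inet dhcp
    up route add -net 169.254.0.0 netmask 255.255.0.0 dev eth0
EOF
# -Rn: recurse and print file:line so the offending entry is easy to find.
grep -Rn "169.254" /tmp/etc-demo/ 2>/dev/null
```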



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Yup, if your host supports namespaces this can be done via the
quantum-metadata-agent. The following setting is also required in your
nova.conf: service_quantum_metadata_proxy=True


On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata service
 work? This wasn't working in Folsom.

 Thanks,
 Balu



Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Thanks Aaron.

I am perhaps not configuring it right then. I am using an Ubuntu 12.04 host
and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I see that
the VM's routing table has an entry for 169.254.0.0/16 but I can't ping
169.254.169.254 from the VM. I am using a single-node setup with two
NICs: 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.

These are my metadata related configurations.

/etc/nova/nova.conf
metadata_host = 10.5.12.20
metadata_listen = 127.0.0.1
metadata_listen_port = 8775
metadata_manager=nova.api.manager.MetadataManager
service_quantum_metadata_proxy = true
quantum_metadata_proxy_shared_secret = metasecret123

/etc/quantum/quantum.conf
allow_overlapping_ips = True

/etc/quantum/l3_agent.ini
use_namespaces = True
auth_url = http://10.5.3.230:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
metadata_ip = 10.5.12.20

/etc/quantum/metadata_agent.ini
auth_url = http://10.5.3.230:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
nova_metadata_ip = 127.0.0.1
nova_metadata_port = 8775
metadata_proxy_shared_secret = metasecret123
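With a configuration like the above in place, a quick probe from inside the guest tells you whether the metadata path works end to end (a hedged sketch; it is wrapped in a function so the snippet is safe to source anywhere, and the `instance-id` path is the standard EC2-style metadata endpoint):

```shell
# Hedged sketch of an in-guest metadata probe. Defining the function has no
# side effects; calling it needs a working path from the VM to
# 169.254.169.254.
check_metadata() {
  # -m 3 fails fast when 169.254.169.254 is unreachable
  curl -s -m 3 http://169.254.169.254/latest/meta-data/instance-id \
    && echo "metadata reachable" \
    || echo "metadata NOT reachable"
}
```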


I see that the /usr/bin/quantum-ns-metadata-proxy process is running. When I
ping 169.254.169.254 from the VM, in the host's router namespace I see the ARP
request but no response.

root@openstack-dev:~# ip netns exec
qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
qg-193bb8ee-f5
10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
qg-193bb8ee-f5
192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
qr-59e69986-6e
root@openstack-dev:~# ip netns exec
qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
65535 bytes
^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1, length
28
23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
Unknown), length 28
23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28
23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28
23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28
23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28

6 packets captured
6 packets received by filter
0 packets dropped by kernel
root@openstack-dev:~#


Any help will be greatly appreciated.

Thanks,
Balu




Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
The VM should not have a routing table entry for 169.254.0.0/16. If it does,
I'm not sure how it got there unless it was added by something other than
DHCP. It seems like that is your problem, as the VM is ARPing directly for
that address rather than going via the default gateway.




Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
the VM's routing table, I could access the metadata service!

The route for 169.254.0.0/16 is added automatically when the instance boots
up, so I assume it's coming from DHCP. Any idea how this can be
suppressed?

Strangely though, I do not see this route in a Windows XP VM booted in the
same network as the earlier Ubuntu VM, and the Windows VM can reach the
metadata service without me doing anything. The issue is only with the Ubuntu
VM.

Thanks,
Balu





Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Hrm, I'd run quantum subnet-list and see if you happened to create a subnet
169.254.0.0/16. Otherwise I think there is probably some software in your
VM image that is adding this route. One thing to test: delete this route
and then rerun dhclient to see whether it's added again via DHCP.
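That test can be sketched as follows (assumptions: it is run as root inside the guest, and the NIC name eth0 is a placeholder; substitute your interface name):

```shell
# Sketch of the delete-and-renew test. Defining the function is side-effect
# free; invoke it manually inside the VM.
retest_route() {
  ip route del 169.254.0.0/16 2>/dev/null    # drop the suspect route
  dhclient -r eth0 && dhclient eth0          # release, then renew the lease
  # If the route is back now, DHCP pushed it; if not, the image itself adds it.
  ip route | grep 169.254 || echo "route not re-added by DHCP"
}
```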



Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Salvatore Orlando
The dhcp agent will set a route to 169.254.0.0/16 if
enable_isolated_metadata_proxy=True.
In that case the dhcp port ip will be the nexthop for that route.

Otherwise, your image might have a 'builtin' route to such a CIDR.
What's your nexthop for the link-local address?

Salvatore
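For reference, the knob lives in the DHCP agent's configuration; a sketch of the relevant fragment (note the option is spelled `enable_isolated_metadata` in the shipped dhcp_agent.ini, and it defaults to False):

```ini
# /etc/quantum/dhcp_agent.ini (fragment)
# When enabled, the DHCP agent proxies metadata for isolated networks and
# pushes a 169.254.0.0/16 route with the DHCP port IP as next hop, as
# described above.
enable_isolated_metadata = False
```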



Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Yup, that's only if your subnet does not have a default gateway set.
Providing the output of route -n would be helpful.
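For the record, the tell-tale `route -n` line is a 169.254.0.0 destination with gateway 0.0.0.0, which makes the guest ARP for 169.254.169.254 directly. A small sketch that spots it, run here against a canned sample line rather than a live routing table:

```shell
# Detect a direct link-local route in `route -n`-style output: destination
# 169.254.0.0 with gateway 0.0.0.0 means the guest ARPs for the metadata
# address itself instead of sending via the router.
has_linklocal_route() {
  awk '$1 == "169.254.0.0" && $2 == "0.0.0.0" { found = 1 } END { exit !found }'
}
# Canned sample line standing in for `route -n` run inside the guest:
printf '169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0\n' \
  | has_linklocal_route && echo "direct link-local route present"
```

Inside the VM, pipe the real table through it: `route -n | has_linklocal_route`.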


On Wed, Apr 24, 2013 at 12:08 AM, Salvatore Orlando sorla...@nicira.comwrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be your image might have a 'builtin' route to such
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in the
 same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04
 host and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see
 that the VM's routing table has an entry for 169.254.0.0/16 but I cant
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.comwrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does the metadata
 service work? This wasn't working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Hi Salvatore,

Thanks for the response. I do not have enable_isolated_metadata_proxy
anywhere under /etc/quantum and /etc/nova. The closest I see is
'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
commented out. What do you mean by link-local address?

Like you said, I suspect that the image has the route. This was a
snapshot taken in a Folsom setup. So it's possible that Folsom has injected
this route and when I took the snapshot, it became part of the snapshot. I
then copied over this snapshot to a new Grizzly setup. Let me check the
image and remove it from the image if it has the route. Thanks for the hint
again.

Regards,
Balu



On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando sorla...@nicira.com wrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be that your image has a 'builtin' route to such a
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
 the VM's routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume it's coming from DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a Windows XP VM booted in the
 same network as the earlier Ubuntu VM, and the Windows VM can reach the
 metadata service without me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The VM should not have a routing table entry for 169.254.0.0/16. If it
 does, I'm not sure how it got there unless it was added by something other
 than DHCP. It seems like that is your problem, as the VM is ARPing directly
 for that address rather than using the default gateway.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using an Ubuntu 12.04
 host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
 see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs. 10.5.12.20 is the public IP and 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that the /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from the VM, I see the ARP request in the host's
 router namespace but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
I do not have anything running in the VM which could add this route. With
the route removed, when I disable and enable networking so that it gets
back the details from the DHCP server, I see that the route is getting added
again.
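If the route really is arriving via DHCP, it comes in as a classless static route option (RFC 3442, option 121, or Microsoft's option 249). A small sketch of how a client decodes that option, with a made-up next hop, just to illustrate the wire format being pushed:

```python
def decode_classless_routes(data: bytes):
    # RFC 3442 encoding: prefix length, then only the significant
    # destination bytes, then a 4-byte next hop. Repeats until exhausted.
    routes, i = [], 0
    while i < len(data):
        plen = data[i]; i += 1
        nbytes = (plen + 7) // 8
        dest = list(data[i:i + nbytes]) + [0] * (4 - nbytes); i += nbytes
        gw = data[i:i + 4]; i += 4
        routes.append(("%d.%d.%d.%d/%d" % (*dest, plen),
                       "%d.%d.%d.%d" % tuple(gw)))
    return routes

# 169.254.0.0/16 via 192.168.2.2 (a plausible dhcp-port next hop)
raw = bytes([16, 169, 254, 192, 168, 2, 2])
print(decode_classless_routes(raw))  # → [('169.254.0.0/16', '192.168.2.2')]
```

Dumping the lease file (e.g. under /var/lib/dhcp) shows whether the server actually offered such an option, which distinguishes a DHCP-pushed route from one baked into the image.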

So DHCP seems to be my issue. I guess this rules out any pre-existing route
in the image as well.

Regards,
Balu


On Wed, Apr 24, 2013 at 12:39 PM, Aaron Rosen aro...@nicira.com wrote:

 Hrm, I'd run quantum subnet-list and see if you happened to create a subnet
 169.254.0.0/16. Otherwise I think there is probably some software in your
 VM image that is adding this route. One thing to test is to delete this
 route and then rerun dhclient to see if it's added again via DHCP.


 On Wed, Apr 24, 2013 at 12:00 AM, Balamurugan V G balamuruga...@gmail.com
  wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
 the VM's routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume it's coming from DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a Windows XP VM booted in the
 same network as the earlier Ubuntu VM, and the Windows VM can reach the
 metadata service without me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The VM should not have a routing table entry for 169.254.0.0/16. If it
 does, I'm not sure how it got there unless it was added by something other
 than DHCP. It seems like that is your problem, as the VM is ARPing directly
 for that address rather than using the default gateway.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using an Ubuntu 12.04
 host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
 see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs. 10.5.12.20 is the public IP and 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that the /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from the VM, I see the ARP request in the host's
 router namespace but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, if your host supports namespaces this can be done via the
 quantum-metadata-agent. The following setting is also required in your
 nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
The routing table in the VM is:

root@vm:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG0  00 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000   00 eth0
192.168.2.0 0.0.0.0 255.255.255.0   U 1  00 eth0
root@vm:~#
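For reference, the offending entry in the VM table above can be detected mechanically. A small sketch that parses `route -n` output and flags an on-link route (gateway 0.0.0.0) covering 169.254.0.0/16, which is what makes the guest ARP for 169.254.169.254 instead of sending the packet to its default gateway:

```python
import ipaddress

def has_link_local_route(route_n_output: str) -> bool:
    # Scan each routing-table row; header lines fail to parse and are skipped.
    for line in route_n_output.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        dest, gateway, genmask = fields[0], fields[1], fields[2]
        try:
            net = ipaddress.ip_network(f"{dest}/{genmask}")
        except ValueError:
            continue  # e.g. the "Destination Gateway Genmask ..." header
        # On-link route for the whole link-local range: traffic to
        # 169.254.169.254 is ARPed for directly instead of routed.
        if net == ipaddress.ip_network("169.254.0.0/16") and gateway == "0.0.0.0":
            return True
    return False

vm_routes = """\
0.0.0.0         192.168.2.1     0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
"""
print(has_link_local_route(vm_routes))  # → True
```

Run against the Windows XP VM's table (which lacks the 169.254.0.0 row) this returns False, matching the behaviour difference described in the thread.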

And the routing table in the OpenStack node(single node host) is:

root@openstack-dev:~# ip netns exec
qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
qg-193bb8ee-f5
10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
qg-193bb8ee-f5
192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
qr-59e69986-6e
root@openstack-dev:~#

Regards,
Balu




On Wed, Apr 24, 2013 at 12:41 PM, Aaron Rosen aro...@nicira.com wrote:

 Yup, that's only if your subnet does not have a default gateway set.
 Providing the output of route -n would be helpful.


 On Wed, Apr 24, 2013 at 12:08 AM, Salvatore Orlando 
 sorla...@nicira.com wrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be that your image has a 'builtin' route to such a
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
 the VM's routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume it's coming from DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a Windows XP VM booted in
 the same network as the earlier Ubuntu VM, and the Windows VM can reach the
 metadata service without me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The VM should not have a routing table entry for 169.254.0.0/16. If it
 does, I'm not sure how it got there unless it was added by something other
 than DHCP. It seems like that is your problem, as the VM is ARPing directly
 for that address rather than using the default gateway.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using an Ubuntu 12.04
 host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
 see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs. 10.5.12.20 is the public IP and 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that the /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from the VM, I see the ARP request in the host's
 router namespace but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref
 Use Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
 0 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
 0 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
 0 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
I booted an Ubuntu image in which I had made sure that there was no
pre-existing route for 169.254.0.0/16. But it's getting the route from DHCP
once it boots up. So it's the DHCP server which is sending this route to
the VM.

Regards,
Balu
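If the route is indeed DHCP-supplied, one client-side way to suppress it on Ubuntu guests is to stop dhclient from requesting classless static routes at all: the stock /etc/dhcp/dhclient.conf includes rfc3442-classless-static-routes in its request list, and dropping it means server-offered routes are never installed. A sketch, assuming the stock Ubuntu 12.04 dhclient defaults:

```conf
# /etc/dhcp/dhclient.conf -- trimmed "request" list: same as the Ubuntu
# default, but without rfc3442-classless-static-routes, so DHCP-supplied
# static routes (such as 169.254.0.0/16) are no longer installed.
request subnet-mask, broadcast-address, time-offset, routers,
        domain-name, domain-name-servers, domain-search, host-name,
        netbios-name-servers, netbios-scope, interface-mtu,
        ntp-servers;
```

Note this also discards any legitimate static routes the server offers, so it is a workaround for the guest rather than a fix on the Quantum side.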


On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Hi Salvatore,

 Thanks for the response. I do not have enable_isolated_metadata_proxy
 anywhere under /etc/quantum and /etc/nova. The closest I see is
 'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
 commented out. What do you mean by link-local address?

 Like you said, I suspect that the image has the route. This was a
 snapshot taken in a Folsom setup. So it's possible that Folsom has injected
 this route and when I took the snapshot, it became part of the snapshot. I
 then copied over this snapshot to a new Grizzly setup. Let me check the
 image and remove it from the image if it has the route. Thanks for the hint
 again.

 Regards,
 Balu



 On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando 
 sorla...@nicira.com wrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be that your image has a 'builtin' route to such a
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
 the VM's routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume it's coming from DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a Windows XP VM booted in
 the same network as the earlier Ubuntu VM, and the Windows VM can reach the
 metadata service without me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The VM should not have a routing table entry for 169.254.0.0/16. If it
 does, I'm not sure how it got there unless it was added by something other
 than DHCP. It seems like that is your problem, as the VM is ARPing directly
 for that address rather than using the default gateway.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using an Ubuntu 12.04
 host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
 see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs. 10.5.12.20 is the public IP and 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that the /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from the VM, I see the ARP request in the host's
 router namespace but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref
 Use Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
 0 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
 0 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
 0 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Can you show us a quantum subnet-show for the subnet your VM has an IP on?
Is it possible that you added a host_route to the subnet for 169.254/16?

Or could you try this image:
http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img


On Wed, Apr 24, 2013 at 1:06 AM, Balamurugan V G balamuruga...@gmail.com wrote:

 I booted an Ubuntu image in which I had made sure that there was no
 pre-existing route for 169.254.0.0/16. But it's getting the route from DHCP
 once it boots up. So it's the DHCP server which is sending this route to
 the VM.

 Regards,
 Balu


 On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G balamuruga...@gmail.com
  wrote:

 Hi Salvatore,

 Thanks for the response. I do not have enable_isolated_metadata_proxy
 anywhere under /etc/quantum and /etc/nova. The closest I see is
 'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
 commented out. What do you mean by link-local address?

 Like you said, I suspect that the image has the route. This was a
 snapshot taken in a Folsom setup. So it's possible that Folsom has injected
 this route and when I took the snapshot, it became part of the snapshot. I
 then copied over this snapshot to a new Grizzly setup. Let me check the
 image and remove it from the image if it has the route. Thanks for the hint
 again.

 Regards,
 Balu



 On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando 
 sorla...@nicira.com wrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be that your image has a 'builtin' route to such a
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
 the VM's routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume it's coming from DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a Windows XP VM booted in
 the same network as the earlier Ubuntu VM, and the Windows VM can reach the
 metadata service without me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The VM should not have a routing table entry for 169.254.0.0/16. If
 it does, I'm not sure how it got there unless it was added by something
 other than DHCP. It seems like that is your problem, as the VM is ARPing
 directly for that address rather than using the default gateway.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using an Ubuntu 12.04
 host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
 see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs. 10.5.12.20 is the public IP and 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that the /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from the VM, I see the ARP request in the host's
 router namespace but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref
 Use Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
 0 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
 0 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
 0 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture
 size 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Martinx - ジェームズ
Hi Balu!

Listen, is your metadata service up and running?

If yes, which guide did you use?

I'm trying everything I can to enable metadata without L3 with a Quantum
Single Flat topology for my own guide:
https://gist.github.com/tmartinx/d36536b7b62a48f859c2

I really appreciate any feedback!

Tks!
Thiago


On 24 April 2013 03:34, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using an Ubuntu 12.04
 host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
 see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs. 10.5.12.20 is the public IP and 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that the /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from the VM, I see the ARP request in the host's
 router namespace but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, if your host supports namespaces this can be done via the
 quantum-metadata-agent. The following setting is also required in your
 nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does the metadata
 service work? This wasn't working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp






Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Hi Aaron,

I tried the image you pointed to and it worked fine out of the box. That is,
it did not get the route to 169.254.0.0/16 on boot and I am able to retrieve
info from the metadata service. The image I was using earlier is an Ubuntu
12.04 LTS desktop image. What do you think could be wrong with my image? It's
almost the vanilla Ubuntu image; I have not installed much on it.

Here are the quantum details you asked for, and more. This was taken before I
tried the image you pointed to. And by the way, I have not added any host
route either.

root@openstack-dev:~# quantum router-list
+--+-++
| id   | name| external_gateway_info
   |
+--+-++
| d9e87e85-8410-4398-9ddd-2dbc36f4b593 | router1 | {network_id:
e8862e1c-0233-481f-b284-b027039feef7} |
+--+-++
root@openstack-dev:~# quantum net-list
+--+-+-+
| id   | name| subnets
|
+--+-+-+
| c4a7475e-e33f-47d0-a6ff-d0cf50c012d7 | net1|
ecdfe002-658e-4174-a33c-934ba09179b7 192.168.2.0/24 |
| e8862e1c-0233-481f-b284-b027039feef7 | ext_net |
783e6a47-d7e0-46ba-9c2a-55a92406b23b 10.5.12.20/24  |
+--+-+-+
*root@openstack-dev:~# quantum subnet-list
+--+--++--+
| id   | name | cidr   |
allocation_pools |
+--+--++--+
| 783e6a47-d7e0-46ba-9c2a-55a92406b23b |  | 10.5.12.20/24  | {start:
10.5.12.21, end: 10.5.12.25} |
| ecdfe002-658e-4174-a33c-934ba09179b7 |  | 192.168.2.0/24 | {start:
192.168.2.2, end: 192.168.2.254} |*
+--+--++--+
root@openstack-dev:~# quantum port-list
+--+--+---++
| id   | name | mac_address   |
fixed_ips
   |
+--+--+---++
| 193bb8ee-f50d-4b1f-87ae-e033c1730953 |  | fa:16:3e:91:3d:c0 |
{subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
10.5.12.21}  |
| 19bce882-c746-497b-b401-dedf5ab605b2 |  | fa:16:3e:97:89:f6 |
{subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
10.5.12.23}  |
| 41ab9b15-ddc9-4a00-9a34-2e3f14e7e92f |  | fa:16:3e:45:58:03 |
{subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
192.168.2.2} |
| 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |  | fa:16:3e:83:a7:e4 |
{subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
192.168.2.3} |
| 59e69986-6e8a-4f1e-a754-a1d421cdebde |  | fa:16:3e:91:ee:76 |
{subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
192.168.2.1} |
| 65167653-f6ff-438b-b465-f5dcc8974549 |  | fa:16:3e:a7:77:0b |
{subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
10.5.12.24}  |
+--+--+---++
root@openstack-dev:~# quantum floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 1a5dfbf3-0986-461d-854e-f4f8ebb58f8d | 192.168.2.3      | 10.5.12.23          | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |
| f9d6e7f4-b251-4a2d-9310-532d8ee376f6 |                  | 10.5.12.24          |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+
root@openstack-dev:~# quantum subnet-show ecdfe002-658e-4174-a33c-934ba09179b7
+-------+-------+
| Field | Value |

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
I'm not sure, but if it works fine with the Ubuntu cloud image and not with
your Ubuntu image, then there is something in your image adding that route.
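A few checks inside the guest can help narrow down where the route comes from -- a diagnostic sketch, assuming an Ubuntu guest using dhclient (the lease directory varies by DHCP client, as noted elsewhere in the thread):

```shell
#!/bin/sh
# 1. Is a link-local route currently installed?
ip route show 2>/dev/null | grep 169.254 \
    || echo "no 169.254 route installed"

# 2. Does the image ship static network configuration mentioning it?
grep -rs 169.254 /etc/network/interfaces /etc/network/if-up.d/ 2>/dev/null \
    || echo "no static 169.254 config in the image"

# 3. Are there cached leases from a previous network carrying route options?
grep -hs "option" /var/lib/dhcp/*.leases 2>/dev/null | grep -i route \
    || echo "no route options in cached leases"
```

Each step falls through to a message rather than failing, so the script is safe to run even on a clean image where nothing matches.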


On Wed, Apr 24, 2013 at 10:06 PM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Hi Aaron,

 I tried the image you pointed to, and it worked fine out of the box. That
 is, it did not get the route to 169.254.0.0.26 on boot, and I am able to
 retrieve info from the metadata service. The image I was using earlier is
 an Ubuntu 12.04 LTS desktop image. What do you think could be wrong with
 my image? It's almost the vanilla Ubuntu image; I have not installed much
 on it.

 Here are the quantum details you asked for, and more. This was taken
 before I tried the image you pointed to. And by the way, I have not added
 any host routes either.

 [...]
 

[Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-23 Thread Balamurugan V G
Hi,

In Grizzly, when using Quantum with overlapping IPs enabled, does the
metadata service work? This wasn't working in Folsom.

Thanks,
Balu
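
The usual in-instance smoke test for this -- a minimal sketch, run from inside a booted VM (not the controller), where the link-local metadata address should answer regardless of overlapping tenant subnets:

```shell
#!/bin/sh
# Query the metadata service at its well-known link-local address;
# with namespace-based metadata proxying this works even when tenants
# use overlapping IP ranges.
curl -s --max-time 5 http://169.254.169.254/latest/meta-data/instance-id \
    || echo "metadata service unreachable"
```

If the instance-id comes back, the metadata path (proxy plus the 169.254 route) is working end to end; "unreachable" points at either the missing route discussed above or the proxy configuration.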
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp