Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Leen Besselink
Balu?

Have you tried looking in the /var/lib/dhcp directory (the directory might
depend on the DHCP client you are using) of the Ubuntu image?

Since this isn't a clean image but one that has been connected to another
network, maybe a previous DHCP server told it to add the route, and now the
client is just re-using an old lease?
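
A quick way to check, assuming the guest uses dhclient (other DHCP clients
keep their leases elsewhere):

cat /var/lib/dhcp/dhclient*.leases
# look for "option classless-static-routes" (or "unknown-121") entries that
# mention 169.254.0.0/16 - those are RFC 3442 routes pushed by the server
rm /var/lib/dhcp/dhclient*.leases   # stale leases; forces a fresh DHCP exchange on next boot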

On Wed, Apr 24, 2013 at 10:13:52PM -0700, Aaron Rosen wrote:
> I'm not sure, but if it works fine with the Ubuntu cloud image and not with
> your Ubuntu image, then there is something in your image adding that route.
> 
> 
> On Wed, Apr 24, 2013 at 10:06 PM, Balamurugan V G
> wrote:
> 
> > Hi Aaron,
> >
> > I tried the image you pointed to and it worked fine out of the box. That
> > is, it did not get the route to 169.254.0.0/16 on boot and I am able to
> > retrieve info from the metadata service. The image I was using earlier is
> > an Ubuntu 12.04 LTS desktop image. What do you think could be wrong with
> > my image? It's almost the vanilla Ubuntu image; I have not installed much
> > on it.
> >
> > Here are the quantum details you asked for, and more. This was taken
> > before I tried the image you pointed to. And by the way, I have not added
> > any host route either.
> >
> > root@openstack-dev:~# quantum router-list
> > +--------------------------------------+---------+---------------------------------------------------------+
> > | id                                   | name    | external_gateway_info                                   |
> > +--------------------------------------+---------+---------------------------------------------------------+
> > | d9e87e85-8410-4398-9ddd-2dbc36f4b593 | router1 | {"network_id": "e8862e1c-0233-481f-b284-b027039feef7"}  |
> > +--------------------------------------+---------+---------------------------------------------------------+
> > root@openstack-dev:~# quantum net-list
> > +--------------------------------------+---------+-----------------------------------------------------+
> > | id                                   | name    | subnets                                             |
> > +--------------------------------------+---------+-----------------------------------------------------+
> > | c4a7475e-e33f-47d0-a6ff-d0cf50c012d7 | net1    | ecdfe002-658e-4174-a33c-934ba09179b7 192.168.2.0/24 |
> > | e8862e1c-0233-481f-b284-b027039feef7 | ext_net | 783e6a47-d7e0-46ba-9c2a-55a92406b23b 10.5.12.20/24  |
> > +--------------------------------------+---------+-----------------------------------------------------+
> > root@openstack-dev:~# quantum subnet-list
> > +--------------------------------------+------+----------------+--------------------------------------------------+
> > | id                                   | name | cidr           | allocation_pools                                 |
> > +--------------------------------------+------+----------------+--------------------------------------------------+
> > | 783e6a47-d7e0-46ba-9c2a-55a92406b23b |      | 10.5.12.20/24  | {"start": "10.5.12.21", "end": "10.5.12.25"}     |
> > | ecdfe002-658e-4174-a33c-934ba09179b7 |      | 192.168.2.0/24 | {"start": "192.168.2.2", "end": "192.168.2.254"} |
> > +--------------------------------------+------+----------------+--------------------------------------------------+
> > root@openstack-dev:~# quantum port-list
> > +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> > | id                                   | name | mac_address       | fixed_ips                                                                           |
> > +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> > | 193bb8ee-f50d-4b1f-87ae-e033c1730953 |      | fa:16:3e:91:3d:c0 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.21"}   |
> > | 19bce882-c746-497b-b401-dedf5ab605b2 |      | fa:16:3e:97:89:f6 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.23"}   |
> > | 41ab9b15-ddc9-4a00-9a34-2e3f14e7e92f |      | fa:16:3e:45:58:03 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.2"}  |
> > | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |      | fa:16:3e:83:a7:e4 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.3"}  |
> > | 59e69986-6e8a-4f1e-a754-a1d421cdebde |      | fa:16:3e:91:ee:76 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.1"}  |
> > | 65167653-f6ff-438b-b465-f5dcc8974549 |      | fa:16:3e:a7:77:0b | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.24"}   |
> > +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> > root@openstack-dev:~# quantum floatingip-list
> >
> >

Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Wangpan
I checked the master code of nova several days ago and found that the first
logical partition of the root disk is chosen for file injection, so you may
have to change the nova code to implement what you want.
I will also try to fix this issue in the Havana release.
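
Until then, one possible workaround on the compute node (a sketch I have not
verified, assuming the Grizzly-era libvirt driver): point injection at the
real Windows partition instead of the first one.

# /etc/nova/nova.conf
# On Windows 7/8 images, partition 1 is usually the hidden "System
# Reserved" partition, so the C: filesystem is typically partition 2.
libvirt_inject_partition = 2
# then restart the compute service, e.g.:
service nova-compute restart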

2013-04-25



Wangpan



From: Balamurugan V G
Sent: 2013-04-25 14:18
Subject: Re: Re: [Openstack] [OpenStack] Files Injection in to Windows VMs
To: "Wangpan"
Cc: "openstack@lists.launchpad.net"

Is there a way to inject into the regular filesystem (C: drive) in
Windows 7/Windows 8?


Regards,
Balu




On Thu, Apr 25, 2013 at 11:46 AM, Balamurugan V G  
wrote:

Thanks for the link! By running the OpenHiddenSystemDrive exe, I am able to
see the injected file.


Regards,
Balu




On Thu, Apr 25, 2013 at 10:30 AM, Wangpan  wrote:

Have you opened and checked the 'system reserved partition'? See the
reference below:
http://www.techfeb.com/how-to-open-windows-7-hidden-system-reserved-partition/

2013-04-25



Wangpan



From: Balamurugan V G
Sent: 2013-04-25 12:34
Subject: Re: [Openstack] [OpenStack] Files Injection in to Windows VMs
To: "Wangpan"
Cc: "openstack@lists.launchpad.net"

Hi Wangpan,

While I am able to inject files into WindowsXP, CentOS 5.9 and
Ubuntu 12.04, I am unable to do it for a Windows 8 Enterprise OS. I
searched the entire drive for the file I injected but couldn't find it.
Below is the log from nova-compute.log.


2013-04-24 01:41:27.973 AUDIT nova.compute.manager 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Starting instance... 
2013-04-24 01:41:28.170 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Attempting claim: 
memory 1024 MB, disk 10 GB, VCPUs 1 
2013-04-24 01:41:28.171 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Memory: 3953 
MB, used: 2048 MB 
2013-04-24 01:41:28.171 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Memory limit: 5929 
MB, free: 3881 MB 
2013-04-24 01:41:28.172 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Disk: 225 GB, 
used: 15 GB 
2013-04-24 01:41:28.172 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Disk limit not 
specified, defaulting to unlimited 
2013-04-24 01:41:28.173 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total CPU: 2 VCPUs, 
used: 2 VCPUs 
2013-04-24 01:41:28.173 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] CPU limit not 
specified, defaulting to unlimited 
2013-04-24 01:41:28.174 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Claim successful 
2013-04-24 01:41:33.998 INFO nova.virt.libvirt.driver 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Creating image 
2013-04-24 01:41:34.281 INFO nova.virt.libvirt.driver 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Injecting files into 
image 65eaa160-d0e7-403e-a52c-90bea3c22cf7 
2013-04-24 01:41:36.534 INFO nova.virt.libvirt.firewall 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Called 
setup_basic_filtering in nwfilter 
2013-04-24 01:41:36.535 INFO nova.virt.libvirt.firewall 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Ensuring static 
filters 
2013-04-24 01:41:38.555 13316 INFO nova.compute.manager [-] Lifecycle 
event 0 on VM aa46445e-1f86-4a5a-8002-a7703ff98648 
2013-04-24 01:41:38.763 13316 INFO nova.virt.libvirt.driver [-] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Instance spawned 
successfully. 
2013-04-24 01:41:38.996 13316 INFO nova.compute.manager [-] [instance: 
aa46445e-1f8

Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Is there a way to inject into the regular filesystem (C: drive) in
Windows 7/Windows 8?

Regards,
Balu


On Thu, Apr 25, 2013 at 11:46 AM, Balamurugan V G
wrote:

> Thanks for the link! By running the OpenHiddenSystemDrive exe, I am able to
> see the injected file.
>
> Regards,
> Balu
>
>
> On Thu, Apr 25, 2013 at 10:30 AM, Wangpan wrote:
>
>> Have you opened and checked the 'system reserved partition'? See the
>> reference below:
>>
>> http://www.techfeb.com/how-to-open-windows-7-hidden-system-reserved-partition/
>>
>> 2013-04-25
>>  --
>>  Wangpan
>>  --
>> From: Balamurugan V G
>> Sent: 2013-04-25 12:34
>> Subject: Re: [Openstack] [OpenStack] Files Injection in to Windows VMs
>> To: "Wangpan"
>> Cc: "openstack@lists.launchpad.net"
>>
>> Hi Wangpan,
>>
>> While I am able to inject files into WindowsXP, CentOS 5.9 and
>> Ubuntu 12.04, I am unable to do it for a Windows 8 Enterprise OS. I
>> searched the entire drive for the file I injected but couldn't find it.
>> Below is the log from nova-compute.log.
>>
>>
>> 2013-04-24 01:41:27.973 AUDIT nova.compute.manager
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Starting instance...
>> 2013-04-24 01:41:28.170 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Attempting claim:
>> memory 1024 MB, disk 10 GB, VCPUs 1
>> 2013-04-24 01:41:28.171 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Memory: 3953
>> MB, used: 2048 MB
>> 2013-04-24 01:41:28.171 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Memory limit: 5929
>> MB, free: 3881 MB
>> 2013-04-24 01:41:28.172 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Disk: 225 GB,
>> used: 15 GB
>> 2013-04-24 01:41:28.172 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Disk limit not
>> specified, defaulting to unlimited
>> 2013-04-24 01:41:28.173 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total CPU: 2 VCPUs,
>> used: 2 VCPUs
>> 2013-04-24 01:41:28.173 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] CPU limit not
>> specified, defaulting to unlimited
>> 2013-04-24 01:41:28.174 AUDIT nova.compute.claims
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Claim successful
>> 2013-04-24 01:41:33.998 INFO nova.virt.libvirt.driver
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Creating image
>> 2013-04-24 01:41:34.281 INFO nova.virt.libvirt.driver
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Injecting files into
>> image 65eaa160-d0e7-403e-a52c-90bea3c22cf7
>> 2013-04-24 01:41:36.534 INFO nova.virt.libvirt.firewall
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Called
>> setup_basic_filtering in nwfilter
>> 2013-04-24 01:41:36.535 INFO nova.virt.libvirt.firewall
>> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
>> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Ensuring static
>> filters
>> 2013-04-24 01:41:38.555 13316 INFO nova.compute.manager [-] Lifecycle
>> event 0 on VM aa46445e-1f86-4a5a-8002-a7703ff98648
>> 2013-04-24 01:41:38.763 13316 INFO nova.virt.libvirt.driver [-]
>> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Instance spawned
>> successfully.
>> 2013-04-24 01:41:38.996 13316 INFO nova.compute.manager [-] [instance:
>> aa46445e-1f86-4a5a-8002-a7703ff98648] During sync_power_state the
>> instance has a pending task. Skip.
>> 2013-04-24 01:41:59.494 13316 AUDIT nova.compute.resource_tracker [-]
>> Aud

Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Thanks for the link! By running the OpenHiddenSystemDrive exe, I am able to
see the injected file.

Regards,
Balu


On Thu, Apr 25, 2013 at 10:30 AM, Wangpan wrote:

> Have you opened and checked the 'system reserved partition'? See the
> reference below:
>
> http://www.techfeb.com/how-to-open-windows-7-hidden-system-reserved-partition/
>
> 2013-04-25
>  --
>  Wangpan
>  --
> From: Balamurugan V G
> Sent: 2013-04-25 12:34
> Subject: Re: [Openstack] [OpenStack] Files Injection in to Windows VMs
> To: "Wangpan"
> Cc: "openstack@lists.launchpad.net"
>
> Hi Wangpan,
>
> While I am able to inject files into WindowsXP, CentOS 5.9 and
> Ubuntu 12.04, I am unable to do it for a Windows 8 Enterprise OS. I
> searched the entire drive for the file I injected but couldn't find it.
> Below is the log from nova-compute.log.
>
>
> 2013-04-24 01:41:27.973 AUDIT nova.compute.manager
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Starting instance...
> 2013-04-24 01:41:28.170 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Attempting claim:
> memory 1024 MB, disk 10 GB, VCPUs 1
> 2013-04-24 01:41:28.171 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Memory: 3953
> MB, used: 2048 MB
> 2013-04-24 01:41:28.171 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Memory limit: 5929
> MB, free: 3881 MB
> 2013-04-24 01:41:28.172 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Disk: 225 GB,
> used: 15 GB
> 2013-04-24 01:41:28.172 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Disk limit not
> specified, defaulting to unlimited
> 2013-04-24 01:41:28.173 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total CPU: 2 VCPUs,
> used: 2 VCPUs
> 2013-04-24 01:41:28.173 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] CPU limit not
> specified, defaulting to unlimited
> 2013-04-24 01:41:28.174 AUDIT nova.compute.claims
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Claim successful
> 2013-04-24 01:41:33.998 INFO nova.virt.libvirt.driver
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Creating image
> 2013-04-24 01:41:34.281 INFO nova.virt.libvirt.driver
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Injecting files into
> image 65eaa160-d0e7-403e-a52c-90bea3c22cf7
> 2013-04-24 01:41:36.534 INFO nova.virt.libvirt.firewall
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Called
> setup_basic_filtering in nwfilter
> 2013-04-24 01:41:36.535 INFO nova.virt.libvirt.firewall
> [req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
> 117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Ensuring static
> filters
> 2013-04-24 01:41:38.555 13316 INFO nova.compute.manager [-] Lifecycle
> event 0 on VM aa46445e-1f86-4a5a-8002-a7703ff98648
> 2013-04-24 01:41:38.763 13316 INFO nova.virt.libvirt.driver [-]
> [instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Instance spawned
> successfully.
> 2013-04-24 01:41:38.996 13316 INFO nova.compute.manager [-] [instance:
> aa46445e-1f86-4a5a-8002-a7703ff98648] During sync_power_state the
> instance has a pending task. Skip.
> 2013-04-24 01:41:59.494 13316 AUDIT nova.compute.resource_tracker [-]
> Auditing locally available compute resources
> 2013-04-24 01:42:00.345 13316 AUDIT nova.compute.resource_tracker [-]
> Free ram (MB): 881
> 2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-]
> Free disk (GB): 200
> 2013-04-24 01:42:00.346 13316 AUDIT nova.c

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
I'm not sure, but if it works fine with the Ubuntu cloud image and not with
your Ubuntu image, then there is something in your image adding that route.


On Wed, Apr 24, 2013 at 10:06 PM, Balamurugan V G
wrote:

> Hi Aaron,
>
> I tried the image you pointed to and it worked fine out of the box. That
> is, it did not get the route to 169.254.0.0/16 on boot and I am able to
> retrieve info from the metadata service. The image I was using earlier is
> an Ubuntu 12.04 LTS desktop image. What do you think could be wrong with
> my image? It's almost the vanilla Ubuntu image; I have not installed much
> on it.
>
> Here are the quantum details you asked for, and more. This was taken
> before I tried the image you pointed to. And by the way, I have not added
> any host route either.
>
> root@openstack-dev:~# quantum router-list
> +--------------------------------------+---------+---------------------------------------------------------+
> | id                                   | name    | external_gateway_info                                   |
> +--------------------------------------+---------+---------------------------------------------------------+
> | d9e87e85-8410-4398-9ddd-2dbc36f4b593 | router1 | {"network_id": "e8862e1c-0233-481f-b284-b027039feef7"}  |
> +--------------------------------------+---------+---------------------------------------------------------+
> root@openstack-dev:~# quantum net-list
> +--------------------------------------+---------+-----------------------------------------------------+
> | id                                   | name    | subnets                                             |
> +--------------------------------------+---------+-----------------------------------------------------+
> | c4a7475e-e33f-47d0-a6ff-d0cf50c012d7 | net1    | ecdfe002-658e-4174-a33c-934ba09179b7 192.168.2.0/24 |
> | e8862e1c-0233-481f-b284-b027039feef7 | ext_net | 783e6a47-d7e0-46ba-9c2a-55a92406b23b 10.5.12.20/24  |
> +--------------------------------------+---------+-----------------------------------------------------+
> root@openstack-dev:~# quantum subnet-list
> +--------------------------------------+------+----------------+--------------------------------------------------+
> | id                                   | name | cidr           | allocation_pools                                 |
> +--------------------------------------+------+----------------+--------------------------------------------------+
> | 783e6a47-d7e0-46ba-9c2a-55a92406b23b |      | 10.5.12.20/24  | {"start": "10.5.12.21", "end": "10.5.12.25"}     |
> | ecdfe002-658e-4174-a33c-934ba09179b7 |      | 192.168.2.0/24 | {"start": "192.168.2.2", "end": "192.168.2.254"} |
> +--------------------------------------+------+----------------+--------------------------------------------------+
> root@openstack-dev:~# quantum port-list
> +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> | id                                   | name | mac_address       | fixed_ips                                                                           |
> +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> | 193bb8ee-f50d-4b1f-87ae-e033c1730953 |      | fa:16:3e:91:3d:c0 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.21"}   |
> | 19bce882-c746-497b-b401-dedf5ab605b2 |      | fa:16:3e:97:89:f6 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.23"}   |
> | 41ab9b15-ddc9-4a00-9a34-2e3f14e7e92f |      | fa:16:3e:45:58:03 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.2"}  |
> | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |      | fa:16:3e:83:a7:e4 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.3"}  |
> | 59e69986-6e8a-4f1e-a754-a1d421cdebde |      | fa:16:3e:91:ee:76 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.1"}  |
> | 65167653-f6ff-438b-b465-f5dcc8974549 |      | fa:16:3e:a7:77:0b | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.24"}   |
> +--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
> root@openstack-dev:~# quantum floatingip-list
> +--------------------------------------+------------------+---------------------+--------------------------------------+
> | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
> +--------------------------------------+------------------+---------------------+--------------------------------------+
> | 1a5dfbf3-0986-461d-854e-f4f8ebb58f8d | 192.168.2.3      | 10.5.12.23          | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |
> | f9d6e7f4-b251-4a2d-9310-532d8ee376f6 |                  | 10.5.12.24

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Hi Aaron,

I tried the image you pointed to and it worked fine out of the box. That is,
it did not get the route to 169.254.0.0/16 on boot and I am able to retrieve
info from the metadata service. The image I was using earlier is an Ubuntu
12.04 LTS desktop image. What do you think could be wrong with my image? It's
almost the vanilla Ubuntu image; I have not installed much on it.

Here are the quantum details you asked for, and more. This was taken before I
tried the image you pointed to. And by the way, I have not added any host
route either.

root@openstack-dev:~# quantum router-list
+--------------------------------------+---------+---------------------------------------------------------+
| id                                   | name    | external_gateway_info                                   |
+--------------------------------------+---------+---------------------------------------------------------+
| d9e87e85-8410-4398-9ddd-2dbc36f4b593 | router1 | {"network_id": "e8862e1c-0233-481f-b284-b027039feef7"}  |
+--------------------------------------+---------+---------------------------------------------------------+
root@openstack-dev:~# quantum net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id                                   | name    | subnets                                             |
+--------------------------------------+---------+-----------------------------------------------------+
| c4a7475e-e33f-47d0-a6ff-d0cf50c012d7 | net1    | ecdfe002-658e-4174-a33c-934ba09179b7 192.168.2.0/24 |
| e8862e1c-0233-481f-b284-b027039feef7 | ext_net | 783e6a47-d7e0-46ba-9c2a-55a92406b23b 10.5.12.20/24  |
+--------------------------------------+---------+-----------------------------------------------------+
root@openstack-dev:~# quantum subnet-list
+--------------------------------------+------+----------------+--------------------------------------------------+
| id                                   | name | cidr           | allocation_pools                                 |
+--------------------------------------+------+----------------+--------------------------------------------------+
| 783e6a47-d7e0-46ba-9c2a-55a92406b23b |      | 10.5.12.20/24  | {"start": "10.5.12.21", "end": "10.5.12.25"}     |
| ecdfe002-658e-4174-a33c-934ba09179b7 |      | 192.168.2.0/24 | {"start": "192.168.2.2", "end": "192.168.2.254"} |
+--------------------------------------+------+----------------+--------------------------------------------------+
root@openstack-dev:~# quantum port-list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 193bb8ee-f50d-4b1f-87ae-e033c1730953 |      | fa:16:3e:91:3d:c0 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.21"}   |
| 19bce882-c746-497b-b401-dedf5ab605b2 |      | fa:16:3e:97:89:f6 | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.23"}   |
| 41ab9b15-ddc9-4a00-9a34-2e3f14e7e92f |      | fa:16:3e:45:58:03 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.2"}  |
| 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |      | fa:16:3e:83:a7:e4 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.3"}  |
| 59e69986-6e8a-4f1e-a754-a1d421cdebde |      | fa:16:3e:91:ee:76 | {"subnet_id": "ecdfe002-658e-4174-a33c-934ba09179b7", "ip_address": "192.168.2.1"}  |
| 65167653-f6ff-438b-b465-f5dcc8974549 |      | fa:16:3e:a7:77:0b | {"subnet_id": "783e6a47-d7e0-46ba-9c2a-55a92406b23b", "ip_address": "10.5.12.24"}   |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
root@openstack-dev:~# quantum floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 1a5dfbf3-0986-461d-854e-f4f8ebb58f8d | 192.168.2.3      | 10.5.12.23          | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |
| f9d6e7f4-b251-4a2d-9310-532d8ee376f6 |                  | 10.5.12.24          |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+
root@openstack-dev:~# quantum subnet-show ecdfe002-658e-4174-a33c-934ba09179b7
+------------------+-------+
| Field            | Value

[Openstack] Call for speakers for the 2nd OpenStack User Group Nordics meetup in Stockholm, Sweden

2013-04-24 Thread Nicolae Paladi
Hi,

Following the positive feedback after the 1st OpenStack User Group Nordics
(OSUGN) meetup in Stockholm, we thought it's time to schedule the next meetup!

This is a call for speakers for the 2nd OSUGN meetup in Stockholm,
scheduled for Wednesday, September 11, 2013.

The focus is on technical talks about on-going projects, OpenStack
deployment "war stories", projects in incubation, etc. Security-related
topics get bonus points.

Cheers,
/Nicolae.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Wangpan
Have you opened and checked the 'system reserved partition'? See the
reference below:
http://www.techfeb.com/how-to-open-windows-7-hidden-system-reserved-partition/

2013-04-25



Wangpan



From: Balamurugan V G
Sent: 2013-04-25 12:34
Subject: Re: [Openstack] [OpenStack] Files Injection in to Windows VMs
To: "Wangpan"
Cc: "openstack@lists.launchpad.net"

Hi Wangpan,

While I am able to inject files into WindowsXP, CentOS 5.9 and
Ubuntu 12.04, I am unable to do it for a Windows 8 Enterprise OS. I
searched the entire drive for the file I injected but couldn't find it.
Below is the log from nova-compute.log.


2013-04-24 01:41:27.973 AUDIT nova.compute.manager 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Starting instance... 
2013-04-24 01:41:28.170 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Attempting claim: 
memory 1024 MB, disk 10 GB, VCPUs 1 
2013-04-24 01:41:28.171 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Memory: 3953 
MB, used: 2048 MB 
2013-04-24 01:41:28.171 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Memory limit: 5929 
MB, free: 3881 MB 
2013-04-24 01:41:28.172 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Disk: 225 GB, 
used: 15 GB 
2013-04-24 01:41:28.172 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Disk limit not 
specified, defaulting to unlimited 
2013-04-24 01:41:28.173 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total CPU: 2 VCPUs, 
used: 2 VCPUs 
2013-04-24 01:41:28.173 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] CPU limit not 
specified, defaulting to unlimited 
2013-04-24 01:41:28.174 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Claim successful 
2013-04-24 01:41:33.998 INFO nova.virt.libvirt.driver 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Creating image 
2013-04-24 01:41:34.281 INFO nova.virt.libvirt.driver 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Injecting files into 
image 65eaa160-d0e7-403e-a52c-90bea3c22cf7 
2013-04-24 01:41:36.534 INFO nova.virt.libvirt.firewall 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Called 
setup_basic_filtering in nwfilter 
2013-04-24 01:41:36.535 INFO nova.virt.libvirt.firewall 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Ensuring static 
filters 
2013-04-24 01:41:38.555 13316 INFO nova.compute.manager [-] Lifecycle 
event 0 on VM aa46445e-1f86-4a5a-8002-a7703ff98648 
2013-04-24 01:41:38.763 13316 INFO nova.virt.libvirt.driver [-] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Instance spawned 
successfully. 
2013-04-24 01:41:38.996 13316 INFO nova.compute.manager [-] [instance: 
aa46445e-1f86-4a5a-8002-a7703ff98648] During sync_power_state the 
instance has a pending task. Skip. 
2013-04-24 01:41:59.494 13316 AUDIT nova.compute.resource_tracker [-] 
Auditing locally available compute resources 
2013-04-24 01:42:00.345 13316 AUDIT nova.compute.resource_tracker [-] 
Free ram (MB): 881 
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-] 
Free disk (GB): 200 
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-] 
Free VCPUS: -1 
2013-04-24 01:42:00.509 13316 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for openstack-dev:openstack-dev.com 
2013-04-24 01:42:00.514 13316 INFO nova.compute.manager [-] Updating 
bandwidth usage cache 
2013-04-24 01:43:06.442 13316 AUDIT nova.compute.resource_tracker [-] 
Audi

Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Hi Wangpan,

While I am able to inject files into WindowsXP, CentOS 5.9 and
Ubuntu 12.04, I am unable to do it for a Windows 8 Enterprise OS. I
searched the entire drive for the file I injected but couldn't find it.
Below is the log from nova-compute.log.


2013-04-24 01:41:27.973 AUDIT nova.compute.manager
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Starting instance...
2013-04-24 01:41:28.170 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Attempting claim:
memory 1024 MB, disk 10 GB, VCPUs 1
2013-04-24 01:41:28.171 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Memory: 3953
MB, used: 2048 MB
2013-04-24 01:41:28.171 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Memory limit: 5929
MB, free: 3881 MB
2013-04-24 01:41:28.172 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Disk: 225 GB,
used: 15 GB
2013-04-24 01:41:28.172 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Disk limit not
specified, defaulting to unlimited
2013-04-24 01:41:28.173 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total CPU: 2 VCPUs,
used: 2 VCPUs
2013-04-24 01:41:28.173 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] CPU limit not
specified, defaulting to unlimited
2013-04-24 01:41:28.174 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Claim successful
2013-04-24 01:41:33.998 INFO nova.virt.libvirt.driver
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Creating image
2013-04-24 01:41:34.281 INFO nova.virt.libvirt.driver
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Injecting files into
image 65eaa160-d0e7-403e-a52c-90bea3c22cf7
2013-04-24 01:41:36.534 INFO nova.virt.libvirt.firewall
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Called
setup_basic_filtering in nwfilter
2013-04-24 01:41:36.535 INFO nova.virt.libvirt.firewall
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Ensuring static
filters
2013-04-24 01:41:38.555 13316 INFO nova.compute.manager [-] Lifecycle
event 0 on VM aa46445e-1f86-4a5a-8002-a7703ff98648
2013-04-24 01:41:38.763 13316 INFO nova.virt.libvirt.driver [-]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Instance spawned
successfully.
2013-04-24 01:41:38.996 13316 INFO nova.compute.manager [-] [instance:
aa46445e-1f86-4a5a-8002-a7703ff98648] During sync_power_state the
instance has a pending task. Skip.
2013-04-24 01:41:59.494 13316 AUDIT nova.compute.resource_tracker [-]
Auditing locally available compute resources
2013-04-24 01:42:00.345 13316 AUDIT nova.compute.resource_tracker [-]
Free ram (MB): 881
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-]
Free disk (GB): 200
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-]
Free VCPUS: -1
2013-04-24 01:42:00.509 13316 INFO nova.compute.resource_tracker [-]
Compute_service record updated for openstack-dev:openstack-dev.com
2013-04-24 01:42:00.514 13316 INFO nova.compute.manager [-] Updating
bandwidth usage cache
2013-04-24 01:43:06.442 13316 AUDIT nova.compute.resource_tracker [-]
Auditing locally available compute resources
2013-04-24 01:43:07.041 13316 AUDIT nova.compute.resource_tracker [-]
Free ram (MB): 881
2013-04-24 01:43:07.042 13316 AUDIT nova.compute.resource_tracker [-]
Free disk (GB): 200
2013-04-24 01:43:07.042 13316 AUDIT nova.compute.resource_tracker [-]
Free VCPUS: -1
2013-04-24 01:43:07.266 13316 INFO nova.compute.resource_tracker [-]
Compute_service record updated for openstack-dev:openstack-

Re: [Openstack] Keystone Grizzly install

2013-04-24 Thread Viktor Viking
Hi Dolph,

Now I got an exception. It seems like I am missing "repoze.lru". I will
download and install it. I will let you know if it works.

Thank you,
Viktor



On Wed, Apr 24, 2013 at 11:25 PM, Dolph Mathews wrote:

> What happens when you run keystone-all directly?
>
>
> -Dolph
>
>
> On Wed, Apr 24, 2013 at 4:23 PM, Viktor Viking wrote:
>
>> Community,
>>
>> I am trying to install Keystone Grizzly following these instructions:
>> http://docs.openstack.org/trunk/openstack-compute/install/yum/content/install-keystone.html
>>
>> When I try to start the service (before db sync), I get the following
>> error message: "Starting keytonestartproc: exit status of parent of
>> /usr/bin/keystone-all  1 failed".
>>
>> /var/log/keystone/keystone.log is not giving me any clue wrt what is
>> wrong.
>>
>> Could anyone let me know where to look?
>> Viktor
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Lorin Hochstein
On Wed, Apr 24, 2013 at 11:59 AM, Daniel P. Berrange wrote:

> On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
> > In the docs, we describe how to configure KVM block-based live migration,
> > and it has the advantage of avoiding the need for shared storage of
> > instances.
> >
> > However, there's this email from Daniel Berrangé from back in Aug 2012:
> > http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
> >
> > "Block migration is a part of the KVM that none of the upstream
> developers
> > really like, is not entirely reliable, and most distros typically do not
> > want to support it due to its poor design (eg not supported in RHEL).
> >
> > It is quite likely that it will be removed in favour of an alternative
> > implementation. What that alternative impl will be, and when I will
> > arrive, I can't say right now."
> >
> > Based on this info, the OpenStack Ops guide currently recommends against
> > using block-based live migration, but the Compute Admin guide has no
> > warnings about this.
> >
> > I wanted to sanity-check against the mailing list to verify that this was
> > still the case. What's the state of block-based live migration with KVM?
> > Should we be dissuading people from using it, or is it reasonable for
> > people to use it?
>
> What I wrote above about the existing impl is still accurate. The new
> block migration code is now merged into libvirt and makes use of an
> NBD server built into the QEMU process to do block migration. API-wise
> it should actually work in the same way as the existing deprecated
> block migration code.  So if you have new enough libvirt and new enough
> KVM, it probably ought to 'just work' with openstack without needing
> any code changes in nova. I have not actually tested this myself
> though.
>
> So we can probably update the docs - but we'd want to check out just
> what precise versions of libvirt + qemu are needed, and have someone
> check that it does in fact work.
>
>
Thanks, Daniel. I can update the docs accordingly. How can I find out what
minimum versions of libvirt and qemu are needed?
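
For reference, my understanding is that the invocation on the nova side stays
the same either way; with the Grizzly-era client it would be something like
(a sketch, UUID and host are placeholders):

nova live-migration --block-migrate <instance-uuid> <target-host>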

Also, I noticed you said "qemu" and not "kvm", and I see that
http://wiki.qemu.org/KVM says that qemu-kvm fork for x86 is "deprecated,
use upstream QEMU now".  Is it the case now that when using KVM as the
hypervisor for a host, an admin will just install a "qemu" package instead
of a "qemu-kvm" package to get the userspace stuff?

Lorin
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceilometer Install

2013-04-24 Thread Riki Arslan
Hi Doug,

Your email helped me. It was actually glanceclient version 0.5.1 that was
causing the conflict. After updating it, the conflict error disappeared.

I hope this helps someone else too.

Thanks again.


On Wed, Apr 24, 2013 at 11:49 PM, Doug Hellmann  wrote:

>
>
>
> On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan wrote:
>
>> Hi,
>>
>> We are trying to install "ceilometer-2013.1~g2.tar.gz" which presumably
>> has Folsom compatibility.
>>
>> The requirement is "python-keystoneclient>=0.2,<0.3" and we have
>> version 0.2.3.
>>
>> But, still, setup quits with the following message:
>>
>> "error: Installed distribution python-keystoneclient 0.2.3 conflicts with
>> requirement python-keystoneclient>=0.1.2,<0.2"
>>
>> The funny thing is, although pip-requires states
>> "python-keystoneclient>=0.2,<0.3", the error message complains that it is
>> not "python-keystoneclient>=0.1.2,<0.2".
>>
>
> Something else you have installed already wants an older version of the
> keystone client, so the installation of ceilometer is not able to upgrade
> to the version we need.
>
> Doug
>
>
>>
>> Your help is greatly appreciated.
>>
>> Thank you in advance.
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone Grizzly install

2013-04-24 Thread Dolph Mathews
What happens when you run keystone-all directly?
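
For example (a sketch, assuming the usual config location), running it in the
foreground with debug on should print the real traceback instead of the init
script's generic failure:

/usr/bin/keystone-all --config-file /etc/keystone/keystone.conf --debug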


-Dolph


On Wed, Apr 24, 2013 at 4:23 PM, Viktor Viking
wrote:

> Community,
>
> I am trying to install Keystone Grizzly following these instructions:
> http://docs.openstack.org/trunk/openstack-compute/install/yum/content/install-keystone.html
>
> When I try to start the service (before db sync), I get the following
> error message: "Starting keytonestartproc: exit status of parent of
> /usr/bin/keystone-all  1 failed".
>
> /var/log/keystone/keystone.log is not giving me any clue wrt what is wrong.
>
> Could anyone let me know where to look?
> Viktor
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Keystone Grizzly install

2013-04-24 Thread Viktor Viking
Community,

I am trying to install Keystone Grizzly following these instructions:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/install-keystone.html

When I try to start the service (before db sync), I get the following error
message: "Starting keytonestartproc: exit status of parent of
/usr/bin/keystone-all  1 failed".

/var/log/keystone/keystone.log is not giving me any clue wrt what is wrong.

Could anyone let me know where to look?
Viktor
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceilometer Install

2013-04-24 Thread Riki Arslan
Hi Doug,

Thank you for the reply. I have previously installed Ceilometer version
0.1. Do you think that could be the reason?

Thanks.


On Wed, Apr 24, 2013 at 11:49 PM, Doug Hellmann  wrote:

>
>
>
> On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan wrote:
>
>> Hi,
>>
>> We are trying to install "ceilometer-2013.1~g2.tar.gz" which presumably
>> has Folsom compatibility.
>>
>> The requirement is "python-keystoneclient>=0.2,<0.3" and we have
>> version 0.2.3.
>>
>> But, still, setup quits with the following message:
>>
>> "error: Installed distribution python-keystoneclient 0.2.3 conflicts with
>> requirement python-keystoneclient>=0.1.2,<0.2"
>>
>> The funny thing is, although pip-requires states
>> "python-keystoneclient>=0.2,<0.3", the error message complains that it is
>> not "python-keystoneclient>=0.1.2,<0.2".
>>
>
> Something else you have installed already wants an older version of the
> keystone client, so the installation of ceilometer is not able to upgrade
> to the version we need.
>
> Doug
>
>
>>
>> Your help is greatly appreciated.
>>
>> Thank you in advance.
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceilometer Install

2013-04-24 Thread Doug Hellmann
On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan wrote:

> Hi,
>
> We are trying to install "ceilometer-2013.1~g2.tar.gz" which presumably
> has Folsom compatibility.
>
> The requirement is "python-keystoneclient>=0.2,<0.3" and we have
> version 0.2.3.
>
> But, still, setup quits with the following message:
>
> "error: Installed distribution python-keystoneclient 0.2.3 conflicts with
> requirement python-keystoneclient>=0.1.2,<0.2"
>
> The funny thing is, although pip-requires states
> "python-keystoneclient>=0.2,<0.3", the error message complains that it is
> not "python-keystoneclient>=0.1.2,<0.2".
>

Something else you have installed already wants an older version of the
keystone client, so the installation of ceilometer is not able to upgrade
to the version we need.
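
One way to track down the package that pins the old client (a sketch; paths
are typical for Ubuntu/Debian, adjust for your environment):

pip freeze | grep -i keystone
# find which installed distribution declares the <0.2 requirement:
grep python-keystoneclient /usr/lib/python2.7/dist-packages/*.egg-info/requires.txt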

Doug


>
> Your help is greatly appreciated.
>
> Thank you in advance.
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Martinx - ジェームズ
Hi Balu!

Listen, is your metadata service up and running?!

If yes, which guide did you use?

I'm trying everything I can to enable metadata without L3 with a Quantum
Single Flat topology for my own guide:
https://gist.github.com/tmartinx/d36536b7b62a48f859c2

I really appreciate any feedback!

Tks!
Thiago


On 24 April 2013 03:34, Balamurugan V G  wrote:

> Thanks Aaron.
>
> I am perhaps not configuring it right then. I am using an Ubuntu 12.04 host
> and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I see
> that the VM's routing table has an entry for 169.254.0.0/16 but I can't ping
> 169.254.169.254 from the VM. I am using a single node setup with two NICs:
> 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.
>
> These are my metadata related configurations.
>
> */etc/nova/nova.conf *
> metadata_host = 10.5.12.20
> metadata_listen = 127.0.0.1
> metadata_listen_port = 8775
> metadata_manager=nova.api.manager.MetadataManager
> service_quantum_metadata_proxy = true
> quantum_metadata_proxy_shared_secret = metasecret123
>
> */etc/quantum/quantum.conf*
> allow_overlapping_ips = True
>
> */etc/quantum/l3_agent.ini*
> use_namespaces = True
> auth_url = http://10.5.3.230:35357/v2.0
> auth_region = RegionOne
> admin_tenant_name = service
> admin_user = quantum
> admin_password = service_pass
> metadata_ip = 10.5.12.20
>
> */etc/quantum/metadata_agent.ini*
> auth_url = http://10.5.3.230:35357/v2.0
> auth_region = RegionOne
> admin_tenant_name = service
> admin_user = quantum
> admin_password = service_pass
> nova_metadata_ip = 127.0.0.1
> nova_metadata_port = 8775
> metadata_proxy_shared_secret = metasecret123
>
>
> I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
> ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
> request but no response.
>
> root@openstack-dev:~# ip netns exec
> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric RefUse
> Iface
> 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
> qg-193bb8ee-f5
> 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
> qg-193bb8ee-f5
> 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
> qr-59e69986-6e
> root@openstack-dev:~# ip netns exec
> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
> 65535 bytes
> ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
> length 28
> 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
> Unknown), length 28
> 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
> length 28
> 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
> length 28
> 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
> length 28
> 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
> length 28
>
> 6 packets captured
> 6 packets received by filter
> 0 packets dropped by kernel
> root@openstack-dev:~#
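>
> Two more things I can check (a sketch, assuming the Grizzly proxy, which
> normally listens on port 9697 inside the router namespace behind an
> iptables REDIRECT rule):
>
> ip netns exec qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 netstat -lnpt
> ip netns exec qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 iptables -t nat -L -n | grep 169.254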
>
>
> Any help will be greatly appreciated.
>
> Thanks,
> Balu
>
>
> On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen  wrote:
>
>> Yup, If your host supports namespaces this can be done via the
>> quantum-metadata-agent.  The following setting is also required in your
>>  nova.conf: service_quantum_metadata_proxy=True
>>
>>
>> On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G <
>> balamuruga...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> In Grizzly, when using quantum and overlapping IPs, does the metadata
>>> service work? This wasn't working in Folsom.
>>>
>>> Thanks,
>>> Balu
>>>
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Can you show us a quantum subnet-show for the subnet your VM has an IP on?
Is it possible that you added a host_route to the subnet for 169.254.0.0/16?
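
For instance (illustrative, using the subnet id from your earlier paste):

quantum subnet-show ecdfe002-658e-4174-a33c-934ba09179b7 | grep host_routes

If host_routes lists 169.254.0.0/16 there, that would explain where the
DHCP-pushed route comes from.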

Or could you try this image:
http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
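
If that image is not registered yet, something like this should pull it in
(a sketch with the Grizzly-era glance client; name and formats are
illustrative):

glance image-create --name precise-cloudimg --disk-format qcow2 \
  --container-format bare --is-public True \
  --copy-from http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img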


On Wed, Apr 24, 2013 at 1:06 AM, Balamurugan V G wrote:

> I booted an Ubuntu image in which I had made sure that there was no
> pre-existing route for 169.254.0.0/16. But it's getting the route from DHCP
> once it boots up. So it's the DHCP server which is sending this route to
> the VM.
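>
> One thing I may try in the guest (a sketch for Ubuntu's dhclient, not
> verified): remove "rfc3442-classless-static-routes" from the "request"
> list in /etc/dhcp/dhclient.conf so DHCP-pushed routes are not installed,
> then renew the lease:
>
> dhclient -r eth0 && dhclient eth0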
>
> Regards,
> Balu
>
>
> On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the response. I do not have enable_isolated_metadata_proxy
>> anywhere under /etc/quantum and /etc/nova. The closest I see is
>> 'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
>> commented out. What do you mean by link-local address?
>>
>> Like you said, I suspect that the image has the route. This was a
>> snapshot taken in a Folsom setup. So it's possible that Folsom injected
>> this route and, when I took the snapshot, it became part of the snapshot. I
>> then copied over this snapshot to a new Grizzly setup. Let me check the
>> image and remove the route from it if it is there. Thanks for the hint
>> again.
>>
>> Regards,
>> Balu
>>
>>
>>
>> On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando 
>> wrote:
>>
>>> The dhcp agent will set a route to 169.254.0.0/16 if
>>> enable_isolated_metadata_proxy=True.
>>> In that case the dhcp port ip will be the nexthop for that route.
>>>
>>> Otherwise, it might be your image might have a 'builtin' route to such
>>> cidr.
>>> What's your nexthop for the link-local address?
>>>
>>> Salvatore
>>>
>>>
>>> On 24 April 2013 08:00, Balamurugan V G  wrote:
>>>
 Thanks for the hint, Aaron. When I deleted the route for 169.254.0.0/16 from
 the VM's routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume it's coming from DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in
 the same network as the earlier Ubuntu VM, and the Windows VM can reach the
 metadata service without me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen wrote:

> The VM should not have a routing table entry for 169.254.0.0/16. If
> it does, I'm not sure how it got there unless it was added by something
> other than DHCP. It seems like that is your problem, as the VM is ARPing
> directly for that address rather than for the default gateway.
>
>
> On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
> balamuruga...@gmail.com> wrote:
>
>> Thanks Aaron.
>>
>> I am perhaps not configuring it right then. I am using an Ubuntu 12.04
>> host and even my guest (VM) is Ubuntu 12.04, but metadata is not working.
>> I see that the VM's routing table has an entry for 169.254.0.0/16 but I
>> can't ping 169.254.169.254 from the VM. I am using a single node setup
>> with two NICs: 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.
>>
>> These are my metadata related configurations.
>>
>> */etc/nova/nova.conf *
>> metadata_host = 10.5.12.20
>> metadata_listen = 127.0.0.1
>> metadata_listen_port = 8775
>> metadata_manager=nova.api.manager.MetadataManager
>> service_quantum_metadata_proxy = true
>> quantum_metadata_proxy_shared_secret = metasecret123
>>
>> */etc/quantum/quantum.conf*
>> allow_overlapping_ips = True
>>
>> */etc/quantum/l3_agent.ini*
>> use_namespaces = True
>> auth_url = http://10.5.3.230:35357/v2.0
>> auth_region = RegionOne
>> admin_tenant_name = service
>> admin_user = quantum
>> admin_password = service_pass
>> metadata_ip = 10.5.12.20
>>
>> */etc/quantum/metadata_agent.ini*
>> auth_url = http://10.5.3.230:35357/v2.0
>> auth_region = RegionOne
>> admin_tenant_name = service
>> admin_user = quantum
>> admin_password = service_pass
>> nova_metadata_ip = 127.0.0.1
>> nova_metadata_port = 8775
>> metadata_proxy_shared_secret = metasecret123
>>
>>
>> I see that /usr/bin/quantum-ns-metadata-proxy process is running.
>> When I ping 169.254.169.254 from VM, in the host's router namespace, I 
>> see
>> the ARP request but no response.
>>
>> root@openstack-dev:~# ip netns exec
>> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
>> Kernel IP routing table
>> Destination Gateway Genmask Flags Metric Ref
>> Use Iface
>> 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
>> 0 qg-193bb8ee-f5
>> 10.5.12.0   0.0.0.0 255.255.255

Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury

Hi,

Output from the controller node:
root@aopcach:~# ifconfig
eth0  Link encap:Ethernet  HWaddr 1c:c1:de:65:6f:ee  
  inet addr:158.109.65.21  Bcast:158.109.79.255  Mask:255.255.240.0
  inet6 addr: fe80::1ec1:deff:fe65:6fee/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:111595 errors:0 dropped:0 overruns:0 frame:0
  TX packets:10941 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:18253579 (17.4 MiB)  TX bytes:1747833 (1.6 MiB)
  Interrupt:19 Memory:f300-f302 

loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:45623 errors:0 dropped:0 overruns:0 frame:0
  TX packets:45623 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:9862970 (9.4 MiB)  TX bytes:9862970 (9.4 MiB)

root@aopcach:~# ps aux | grep dnsmasq
root  6355  0.0  0.0   7828   880 pts/0S+   19:23   0:00 grep dnsmasq


output from the compute node:

root@aopcso1:~# ifconfig
br100 Link encap:Ethernet  HWaddr 38:60:77:0d:31:87  
  inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
  inet6 addr: fe80::3a60:77ff:fe0d:3187/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:689504 errors:0 dropped:4251 overruns:0 frame:0
  TX packets:76508 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:89086934 (84.9 MiB)  TX bytes:22405496 (21.3 MiB)

eth0  Link encap:Ethernet  HWaddr 38:60:77:0d:31:87  
  inet6 addr: fe80::3a60:77ff:fe0d:3187/64 Scope:Link
  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:16329913 errors:0 dropped:0 overruns:0 frame:0
  TX packets:927591 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:2596968421 (2.4 GiB)  TX bytes:448796862 (428.0 MiB)
  Interrupt:20 Memory:f710-f712 

loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:313717 errors:0 dropped:0 overruns:0 frame:0
  TX packets:313717 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:16434128 (15.6 MiB)  TX bytes:16434128 (15.6 MiB)

root@aopcso1:~# ps aux | grep dnsmasq
nobody   12485  0.0  0.0  25124  1104 ?S12:05   0:00 
/usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= 
--domain='novalocal' --pid-file=/var/lib/nova/networks/nova-br100.pid 
--listen-address=192.168.100.1 --except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 12486  0.0  0.0  25124   472 ?S12:05   0:00 
/usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= 
--domain='novalocal' --pid-file=/var/lib/nova/networks/nova-br100.pid 
--listen-address=192.168.100.1 --except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 32616  0.0  0.0   7848   876 pts/0S+   19:27   0:00 grep dnsmasq



To: arin...@live.com
CC: openstack@lists.launchpad.net
Subject: RE: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 11:26:23 -0500

Can you provide the output of 'ifconfig' on the hosting node? Also 'ps aux
| grep dnsmasq'.


Jay S. Bryant
Linux Developer - OpenStack Enterprise Edition
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail: jsbry...@us.ibm.com

"All the world's a stage and most of us are desperately unrehearsed."
  -- Sean O'Casey


From: Arindam Choudhury
To: Jay S Bryant/Rochester/IBM@IBMUS, openstack
Date: 04/24/2013 11:16 AM
Subject: RE: [Openstack] problem with metadata and ping

Hi,

So I added that rule:

iptables -I INPUT -i tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

but still the same problem.

There is another thing:

# nova-manage service list
Binary           Host     Zone      Status   State  Updated_At
nova-network     aopcach  internal  enabled  :-)    2013-04-24 16:07:3

Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Jay S Bryant
Can you provide the output of 'ifconfig' on the hosting node? Also 'ps aux
| grep dnsmasq'.



Jay S. Bryant
Linux Developer - 
OpenStack Enterprise Edition
   
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Arindam Choudhury 
To: Jay S Bryant/Rochester/IBM@IBMUS, openstack 
, 
Date:   04/24/2013 11:16 AM
Subject:RE: [Openstack] problem with metadata and ping




Hi,

So I added that rule:
iptables -I INPUT -i tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

but still the same problem.

There is another thing:
# nova-manage service list
Binary           Host     Zone      Status   State  Updated_At
nova-network     aopcach  internal  enabled  :-)    2013-04-24 16:07:37
nova-cert        aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-conductor   aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-consoleauth aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-scheduler   aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-network     aopcso1  internal  enabled  :-)    2013-04-24 16:07:36
nova-compute     aopcso1  nova      enabled  :-)    2013-04-24 16:07:37

shows all the hosts and services. But the dashboard only shows the
services running on aopcach.

screenshot: http://imgur.com/ED9nbxU



To: arin...@live.com
CC: openstack@lists.launchpad.net
Subject: RE: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:55:12 -0500

Arindam, 

Oops, I had a typo. The command should have been: iptables -I INPUT -i
tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

You need the iptables configuration on the system where dnsmasq is
running. It shouldn't be necessary on the compute nodes that are being
booted.
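
(Once the corrected rule is in place, a couple of hedged checks that the
DHCP path is alive; the bridge name br100 is just the one seen earlier in
this thread, adjust as needed:)

# confirm the rule is present and its packet counters are moving
iptables -L INPUT -n -v | grep 67
# watch DHCP traffic on the bridge while an instance boots
tcpdump -n -i br100 port 67 or port 68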


Jay S. Bryant
Linux Developer - 
   OpenStack Enterprise Edition
  
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
  -- Sean O'Casey
 



From:Arindam Choudhury  
To:Jay S Bryant/Rochester/IBM@IBMUS, openstack 
, 
Date:04/24/2013 10:47 AM 
Subject:RE: [Openstack] problem with metadata and ping 



Hi,

Thanks for your reply.

The dnsmasq is running properly.

when I tried to run iptables -I input -i tap+ -p udp 67:68 --sport 67:68
-j ACCEPT
it says:
#  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
Bad argument `67:68'

Do I have to do this iptables configuration on the controller, or on the
compute nodes as well?

To: arin...@live.com
Subject: Re: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:17:41 -0500

Arindam, 

I saw a similar problem with quantum.  If you have iptables running on the 
hosting system you may need to update the rules to allow the DHCP Discover 
packet through:  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j 
ACCEPT 

Also ensure that dnsmasq is running properly. 



Jay S. Bryant
Linux Developer - 
  OpenStack Enterprise Edition
 
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
 -- Sean O'Casey
 



From:Arindam Choudhury  
To:openstack , 
Date:04/24/2013 10:12 AM 
Subject:Re: [Openstack] problem with metadata and ping 
Sent by:"Openstack" 
 




hi,

I was misled by this:

[(keystone_user)]$ nova list
+--+++---+
| ID   | Name   | Status | Networks  |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | 
private=192.168.100.2 |
+--+++---+

This is a nova-network problem.

From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2

Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-24 Thread Martinx - ジェームズ
Hi!

The `Ultimate OpenStack Grizzly Guide' has been updated a bit more!

There are two new scripts, keystone_basic.sh and
keystone_endpoints_basic.sh, with preliminary support for Swift and
Ceilometer.

Check it out! https://gist.github.com/tmartinx/d36536b7b62a48f859c2

Best!
Thiago



On 20 March 2013 19:51, Martinx - ジェームズ  wrote:

> Hi!
>
>  I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is the
> guide I wrote:
>
>  Ultimate OpenStack Grizzly 
> Guide
>
>  It covers:
>
>  * Ubuntu 12.04.2
>  * Basic Ubuntu setup
>  * KVM
>  * OpenvSwitch
>  * Name Resolution for OpenStack components;
>  * LVM for Instances
>  * Keystone
>  * Glance
>  * Quantum - Single Flat, Super Green!!
>  * Nova
>  * Cinder / tgt
>  * Dashboard
>
>  It is still a draft but, every time I deploy Ubuntu and Grizzly, I follow
> this little guide...
>
>  I would like some help to improve this guide... If I'm doing something
> wrong, tell me! Please!
>
>  Probably I'm doing something wrong, I don't know yet, but I'm seeing some
> errors in the logs, already reported here on this list. For example:
> nova-novncproxy conflicts with novnc (no VNC console for now), and
> dhcp-agent.log / auth.log point to some problems with `sudo' or the
> `rootwrap' subsystem when dealing with metadata (so it isn't working)...
>
>  But in general, it works great!!
>
> Best!
> Thiago
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Razique Mahroua
Thanks for the clarification Daniel
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15

On 24 Apr 2013, at 17:59, "Daniel P. Berrange" wrote:

> On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
>> In the docs, we describe how to configure KVM block-based live migration,
>> and it has the advantage of avoiding the need for shared storage of
>> instances.
>>
>> However, there's this email from Daniel Berrangé from back in Aug 2012:
>> http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
>>
>> "Block migration is a part of the KVM that none of the upstream developers
>> really like, is not entirely reliable, and most distros typically do not
>> want to support it due to its poor design (eg not supported in RHEL).
>>
>> It is quite likely that it will be removed in favour of an alternative
>> implementation. What that alternative impl will be, and when it will
>> arrive, I can't say right now."
>>
>> Based on this info, the OpenStack Ops guide currently recommends against
>> using block-based live migration, but the Compute Admin guide has no
>> warnings about this.
>>
>> I wanted to sanity-check against the mailing list to verify that this was
>> still the case. What's the state of block-based live migration with KVM?
>> Should we be dissuading people from using it, or is it reasonable for
>> people to use it?
>
> What I wrote above about the existing impl is still accurate. The new
> block migration code is now merged into libvirt and makes use of an
> NBD server built in to the QEMU process to do block migration. API-wise
> it should actually work in the same way as the existing deprecated
> block migration code. So if you have new enough libvirt and new enough
> KVM, it probably ought to 'just work' with openstack without needing
> any code changes in nova. I have not actually tested this myself
> though.
>
> So we can probably update the docs - but we'd want to check out just
> what precise versions of libvirt + qemu are needed, and have someone
> check that it does in fact work.
>
> Regards,
> Daniel
> --
> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org -o- http://virt-manager.org :|
> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury

Hi,

So I added that rule:
iptables -I INPUT -i tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

but still the same problem.

There is another thing:
# nova-manage service list
Binary           Host     Zone      Status   State  Updated_At
nova-network     aopcach  internal  enabled  :-)    2013-04-24 16:07:37
nova-cert        aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-conductor   aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-consoleauth aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-scheduler   aopcach  internal  enabled  :-)    2013-04-24 16:07:36
nova-network     aopcso1  internal  enabled  :-)    2013-04-24 16:07:36
nova-compute     aopcso1  nova      enabled  :-)    2013-04-24 16:07:37

shows all the hosts and services. But the dashboard only shows the
services running on aopcach.

screenshot: http://imgur.com/ED9nbxU



To: arin...@live.com
CC: openstack@lists.launchpad.net
Subject: RE: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:55:12 -0500

Arindam,

Oops, I had a typo. The command should have been:
iptables -I INPUT -i tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

You need the iptables configuration on the system where dnsmasq is
running. It shouldn't be necessary on the compute nodes that are being
booted.


Jay S. Bryant
Linux Developer - OpenStack Enterprise Edition
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail: jsbry...@us.ibm.com

"All the world's a stage and most of us are desperately unrehearsed."
  -- Sean O'Casey


From: Arindam Choudhury
To: Jay S Bryant/Rochester/IBM@IBMUS, openstack
Date: 04/24/2013 10:47 AM
Subject: RE: [Openstack] problem with metadata and ping

Hi,

Thanks for your reply.

The dnsmasq is running properly.

When I tried to run iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
it says:
#  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
Bad argument `67:68'

Do I have to do this iptables configuration on the controller, or on the
compute nodes as well?




To: arin...@live.com
Subject: Re: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:26:23 -0500



Arindam,

I saw a similar problem with quantum. If you have iptables running
on the hosting system you may need to update the rules to allow the DHCP
Discover packet through: iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT

Also ensure that dnsmasq is running properly.


Jay S. Bryant
Linux Developer - OpenStack Enterprise Edition
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail: jsbry...@us.ibm.com

"All the world's a stage and most of us are desperately unrehearsed."
  -- Sean O'Casey

 







From: Arindam Choudhury
To: openstack
Date: 04/24/2013 10:12 AM
Subject: Re: [Openstack] problem with metadata and ping
Sent by: "Openstack"

hi,

I was misled by this:

[(keystone_user)]$ nova list
+--+++---+
| ID                                   | Name   | Status | Networks              |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | private=192.168.100.2 |
+--+++---+

This is a nova-network problem.




From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2013 16:12:47 +0200
Subject: [Openstack] problem with metadata and ping



Hi,

I am having a problem with the metadata service. I am using nova-network.
The console log says:

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole cons

[Openstack] problem with nova-network

2013-04-24 Thread Arindam Choudhury
Hi,

I am having a problem with the nova-network service.
Though

[(keystone_user)]$ nova list
+--+++---+
| ID                                   | Name   | Status | Networks              |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | private=192.168.100.2 |
+--+++---+

says that the VM is active and has the IP 192.168.100.2, it doesn't work.

The console log says:

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name| Description |
+-+-+
| default | default |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+

  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Daniel P. Berrange
On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
> In the docs, we describe how to configure KVM block-based live migration,
> and it has the advantage of avoiding the need for shared storage of
> instances.
> 
> However, there's this email from Daniel Berrangé from back in Aug 2012:
> http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
> 
> "Block migration is a part of the KVM that none of the upstream developers
> really like, is not entirely reliable, and most distros typically do not
> want to support it due to its poor design (eg not supported in RHEL).
> 
> It is quite likely that it will be removed in favour of an alternative
> implementation. What that alternative impl will be, and when it will
> arrive, I can't say right now."
> 
> Based on this info, the OpenStack Ops guide currently recommends against
> using block-based live migration, but the Compute Admin guide has no
> warnings about this.
> 
> I wanted to sanity-check against the mailing list to verify that this was
> still the case. What's the state of block-based live migration with KVM?
> Should we be dissuading people from using it, or is it reasonable for
> people to use it?

What I wrote above about the existing impl is still accurate. The new
block migration code is now merged into libvirt and makes use of an
NBD server built in to the QEMU process to do block migration. API-wise
it should actually work in the same way as the existing deprecated
block migration code. So if you have new enough libvirt and new enough
KVM, it probably ought to 'just work' with openstack without needing
any code changes in nova. I have not actually tested this myself
though.

So we can probably update the docs - but we'd want to check out just
what precise versions of libvirt + qemu are needed, and have someone
check that it does in fact work.
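
(For anyone who wants to test this, a minimal sketch of the checks,
assuming the Grizzly-era novaclient flag and typical binary names; none of
this is verified here:)

# versions of the pieces involved
libvirtd --version
qemu-system-x86_64 --version
# then attempt a block migration of a disposable instance
nova live-migration --block-migrate <instance-uuid> <target-host>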

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Jay S Bryant
Arindam,

Oops, I had a typo. The command should have been: iptables -I INPUT -i
tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

You need the iptables configuration on the system where dnsmasq is
running. It shouldn't be necessary on the compute nodes that are being
booted.


Jay S. Bryant
Linux Developer - 
OpenStack Enterprise Edition
   
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Arindam Choudhury 
To: Jay S Bryant/Rochester/IBM@IBMUS, openstack 
, 
Date:   04/24/2013 10:47 AM
Subject:RE: [Openstack] problem with metadata and ping



Hi,

Thanks for your reply.

The dnsmasq is running properly.

when I tried to run iptables -I input -i tap+ -p udp 67:68 --sport 67:68
-j ACCEPT
it says:
#  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
Bad argument `67:68'

Do I have to do this iptables configuration on the controller, or on the
compute nodes as well?

To: arin...@live.com
Subject: Re: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:17:41 -0500

Arindam, 

I saw a similar problem with quantum.  If you have iptables running on the 
hosting system you may need to update the rules to allow the DHCP Discover 
packet through:  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j 
ACCEPT 

Also ensure that dnsmasq is running properly. 



Jay S. Bryant
Linux Developer - 
   OpenStack Enterprise Edition
  
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
  -- Sean O'Casey
 



From:Arindam Choudhury  
To:openstack , 
Date:04/24/2013 10:12 AM 
Subject:Re: [Openstack] problem with metadata and ping 
Sent by:"Openstack" 
 




hi,

I was misled by this:

[(keystone_user)]$ nova list
+--+++---+
| ID   | Name   | Status | Networks  |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | 
private=192.168.100.2 |
+--+++---+

This is a nova-network problem.

From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2013 16:12:47 +0200
Subject: [Openstack] problem with metadata and ping

Hi,

I am having a problem with the metadata service. I am using nova-network. The
console log says:

Starting network... 
udhcpc (v1.18.5) started 
Sending discover... 
Sending discover... 
Sending discover... 
No lease, failing 
WARN: /etc/rc3.d/S40network failed 
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is
unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: 
https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name| Description |
+-+-+
| default | default |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Ok thanks, this helps a lot. But isn't this being done to avoid those
disruptions/issues with networking after a restart? Do you mean that
doing this will result in disruptions after a restart?

Regards,
Balu

On Wed, Apr 24, 2013 at 9:12 PM, Steve Heistand  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> The network node probably wont be running quantum server just one
> of the agents, so you put the command in one of those configs not
> quantum-server.
>
> That is what Im doing currently and it is working for me.
> at some point if you have running VMs with active network
> connections and need to restart quantum for some reason this
> 'may' interrupt their connections. something to keep in mind.
>
> steve
>
>
> On 04/24/2013 08:32 AM, Balamurugan V G wrote:
>> Right now, I have a single node setup on which I am qualifying my use cases 
>> but
>> eventually I will have a controller node, network node and several compute 
>> nodes. In
>> that case, do you mean it should be something like this?
>>
>> Controller : post-start of quantum-server.conf
>> Network :   post-start of quantum-server.conf
>> Compute :   pre-start of quantum-plugin-openvswitch-agent.conf
>>
>> Thanks, Balu
>>
>> On Wed, Apr 24, 2013 at 8:52 PM, Steve Heistand  
>> wrote: it
>> was mentioned to me (by Mr Mihaiescu) that this only works if controller and 
>> network
>> node are on the same machine. For the compute nodes I had forgotten its in a
>> different place. On them I am doing it in a pre-start script in
>> quantum-plugin-openvswitch-agent.conf. if the controller/network are on 
>> different
>> machines certainly in the quantum-server.conf work on which ever one of them 
>> is
>> actually using it, if it doesnt the command will have to be in a different 
>> startup
>> script.
>>
>> It was also mentioned that putting things in /etc/rc.local and then 
>> restarting all
>> the quantum related services might work too.
>>
>> steve
>>
>> On 04/24/2013 08:15 AM, Balamurugan V G wrote:
> Thanks Steve.
>
> I came across another way at
> https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to 
> work
> as well. But your solution is simpler :)
>
> Regards, Balu
>
>
> On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand 
> wrote: I put it in the file:/etc/init/quantum-server.conf
>
> post-start script
> /usr/bin/quantum-ovs-cleanup
> exit 1
> end script
>
>
> On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,

 It seems due to an OVS quantum bug, we need to run the utility
 quantum-ovs-cleanup before any of the quantum services start, upon a
 server reboot.

 Where is the best place to put this utility to run automatically when a
 server reboots so that the OVS issue is automatically addressed? A 
 script
 in /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an
 existing script?

 Thanks, Balu

 ___ Mailing list:
 https://launchpad.net/~openstack Post to :
 openstack@lists.launchpad.net Unsubscribe :
 https://launchpad.net/~openstack More help   :
 https://help.launchpad.net/ListHelp

>
>>
>
> - --
> 
>  Steve Heistand  NASA Ames Research Center
>  email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
>  ph: (650) 604-4369  Bldg. 258, Rm. 232-5
>  Scientific & HPC ApplicationP.O. Box 1
>  Development/OptimizationMoffett Field, CA 94035-0001
> 
>  "Any opinions expressed are those of our alien overlords, not my own."
>
> # For Remedy#
> #Action: Resolve#
> #Resolution: Resolved   #
> #Reason: No Further Action Required #
> #Tier1: User Code   #
> #Tier2: Other   #
> #Tier3: Assistance  #
> #Notification: None #
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2.0.14 (GNU/Linux)
>
> iEYEARECAAYFAlF3/W0ACgkQoBCTJSAkVrHdnwCgrnCfjN1NKCml+jFPtHk0s4iA
> Nx0An3g6abwQons0jMXkJLu4oBhiZ4ot
> =zh9U
> -END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Lorin Hochstein
In the docs, we describe how to configure KVM block-based live migration,
and it has the advantage of avoiding the need for shared storage of
instances.

However, there's this email from Daniel Berrangé from back in Aug 2012:
http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html

"Block migration is a part of the KVM that none of the upstream developers
really like, is not entirely reliable, and most distros typically do not
want to support it due to its poor design (eg not supported in RHEL).

It is quite likely that it will be removed in favour of an alternative
implementation. What that alternative impl will be, and when it will
arrive, I can't say right now."

Based on this info, the OpenStack Ops guide currently recommends against
using block-based live migration, but the Compute Admin guide has no
warnings about this.

I wanted to sanity-check against the mailing list to verify that this was
still the case. What's the state of block-based live migration with KVM?
Should we be dissuading people from using it, or is it reasonable for
people to use it?

Lorin
-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

The network node probably won't be running quantum-server, just one
of the agents, so you put the command in one of those configs, not
quantum-server's.

That is what I'm doing currently and it is working for me.
At some point, if you have running VMs with active network
connections and need to restart quantum for some reason, this
'may' interrupt their connections; something to keep in mind.

steve


On 04/24/2013 08:32 AM, Balamurugan V G wrote:
> Right now, I have a single node setup on which I am qualifying my use cases 
> but
> eventually I will have a controller node, network node and several compute 
> nodes. In
> that case, do you mean it should be something like this?
> 
> Controller : post-start of quantum-server.conf
> Network :   post-start of quantum-server.conf
> Compute :   pre-start of quantum-plugin-openvswitch-agent.conf
> 
> Thanks, Balu
> 
> On Wed, Apr 24, 2013 at 8:52 PM, Steve Heistand  
> wrote: it
> was mentioned to me (by Mr Mihaiescu) that this only works if controller and 
> network
> node are on the same machine. For the compute nodes I had forgotten its in a
> different place. On them I am doing it in a pre-start script in
> quantum-plugin-openvswitch-agent.conf. if the controller/network are on 
> different
> machines certainly in the quantum-server.conf work on which ever one of them 
> is
> actually using it, if it doesnt the command will have to be in a different 
> startup
> script.
> 
> It was also mentioned that putting things in /etc/rc.local and then 
> restarting all
> the quantum related services might work too.
> 
> steve
> 
> On 04/24/2013 08:15 AM, Balamurugan V G wrote:
 Thanks Steve.
 
 I came across another way at 
 https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to 
 work
 as well. But your solution is simpler :)
 
 Regards, Balu
 
 
 On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand 
 wrote: I put it in the file:/etc/init/quantum-server.conf
 
 post-start script
 /usr/bin/quantum-ovs-cleanup
 exit 1
 end script
 
 
 On 04/24/2013 02:45 AM, Balamurugan V G wrote:
>>> Hi,
>>> 
>>> It seems due to an OVS quantum bug, we need to run the utility 
>>> quantum-ovs-cleanup before any of the quantum services start, upon a
>>> server reboot.
>>> 
>>> Where is the best place to put this utility to run automatically when a
>>> server reboots so that the OVS issue is automatically addressed? A 
>>> script
>>> in /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an
>>> existing script?
>>> 
>>> Thanks, Balu
>>> 
>>> ___ Mailing list: 
>>> https://launchpad.net/~openstack Post to :
>>> openstack@lists.launchpad.net Unsubscribe :
>>> https://launchpad.net/~openstack More help   : 
>>> https://help.launchpad.net/ListHelp
>>> 
 
> 

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific & HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF3/W0ACgkQoBCTJSAkVrHdnwCgrnCfjN1NKCml+jFPtHk0s4iA
Nx0An3g6abwQons0jMXkJLu4oBhiZ4ot
=zh9U
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury
Hi,

Thanks for your reply.

The dnsmasq is running properly.

when I tried to run iptables
-I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT 

it says, 
#  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
Bad argument `67:68'

Do I have to do this iptables configuration on the controller, or on the compute
nodes as well?

To: arin...@live.com
Subject: Re: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:17:41 -0500

Arindam,

I saw a similar problem with quantum. If you have iptables running on the
hosting system you may need to update the rules to allow the DHCP Discover
packet through: iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT

Also ensure that dnsmasq is running properly.


Jay S. Bryant
Linux Developer - OpenStack Enterprise Edition
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail: jsbry...@us.ibm.com

"All the world's a stage and most of us are desperately unrehearsed."
  -- Sean O'Casey


From: Arindam Choudhury
To: openstack
Date: 04/24/2013 10:12 AM
Subject: Re: [Openstack] problem with metadata and ping
Sent by: "Openstack"

hi,

I was misled by this:

[(keystone_user)]$ nova list
+--+++---+
| ID                                   | Name   | Status | Networks              |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | private=192.168.100.2 |
+--+++---+

This is a nova-network problem.

From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2013 16:12:47 +0200
Subject: [Openstack] problem with metadata and ping

Hi,

I am having a problem with the metadata service. I am using nova-network.
The console log says:

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name    | Description |
+-+-+
| default | default     |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-+---+-+---+--+

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Right now, I have a single node setup on which I am qualifying my use
cases but eventually I will have a controller node, network node and
several compute nodes. In that case, do you mean it should be something
like this?

Controller : post-start of quantum-server.conf
Network :   post-start of quantum-server.conf
Compute :   pre-start of quantum-plugin-openvswitch-agent.conf

Thanks,
Balu

On Wed, Apr 24, 2013 at 8:52 PM, Steve Heistand  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> it was mentioned to me (by Mr Mihaiescu) that this only works if controller 
> and network node
> are on the same machine. For the compute nodes I had forgotten its in a 
> different
> place. On them I am doing it in a pre-start script in 
> quantum-plugin-openvswitch-agent.conf.
> if the controller/network are on different machines certainly in the 
> quantum-server.conf
> work on which ever one of them is actually using it, if it doesnt the command 
> will have
> to be in a different startup script.
>
> It was also mentioned that putting things in /etc/rc.local and then restarting
> all the quantum related services might work too.
>
> steve
>
> On 04/24/2013 08:15 AM, Balamurugan V G wrote:
>> Thanks Steve.
>>
>> I came across another way at
>> https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to 
>> work as
>> well. But your solution is simpler :)
>>
>> Regards, Balu
>>
>>
>> On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand  
>> wrote: I
>> put it in the file:/etc/init/quantum-server.conf
>>
>> post-start script /usr/bin/quantum-ovs-cleanup exit 1 end script
>>
>>
>> On 04/24/2013 02:45 AM, Balamurugan V G wrote:
> Hi,
>
> It seems due to an OVS quantum bug, we need to run the utility
> quantum-ovs-cleanup before any of the quantum services start, upon a 
> server
> reboot.
>
> Where is the best place to put this utility to run automatically when a 
> server
> reboots so that the OVS issue is automatically addressed? A script in
> /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an 
> existing
> script?
>
> Thanks, Balu
>
> ___ Mailing list:
> https://launchpad.net/~openstack Post to : 
> openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack More help   :
> https://help.launchpad.net/ListHelp
>
>>
>
> - --
> 
>  Steve Heistand  NASA Ames Research Center
>  email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
>  ph: (650) 604-4369  Bldg. 258, Rm. 232-5
>  Scientific & HPC ApplicationP.O. Box 1
>  Development/OptimizationMoffett Field, CA 94035-0001
> 
>  "Any opinions expressed are those of our alien overlords, not my own."
>
> # For Remedy#
> #Action: Resolve#
> #Resolution: Resolved   #
> #Reason: No Further Action Required #
> #Tier1: User Code   #
> #Tier2: Other   #
> #Tier3: Assistance  #
> #Notification: None #
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2.0.14 (GNU/Linux)
>
> iEYEARECAAYFAlF3+K8ACgkQoBCTJSAkVrFfRACgjiiRXjyRGfc2fGPJWTmJTjnK
> 89cAnRnstn0e/GiYz0Go13R2B+lBUWWw
> =HmUJ
> -END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

it was mentioned to me (by Mr Mihaiescu) that this only works if the controller
and network node are on the same machine. For the compute nodes I had forgotten:
it's in a different place. On them I am doing it in a pre-start script in
quantum-plugin-openvswitch-agent.conf.
If the controller/network are on different machines, the quantum-server.conf
approach will certainly work on whichever one of them is actually using it; if
it isn't, the command will have to be in a different startup script.

It was also mentioned that putting things in /etc/rc.local and then restarting
all the quantum related services might work too.
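
For the compute-node side, the stanza would look something like this (a
sketch only, assuming the Ubuntu upstart job at
/etc/init/quantum-plugin-openvswitch-agent.conf; the "|| true" is my own
guard so a cleanup failure does not block the agent from starting):

pre-start script
# clean stale OVS ports before the agent comes up
/usr/bin/quantum-ovs-cleanup || true
end script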

steve

On 04/24/2013 08:15 AM, Balamurugan V G wrote:
> Thanks Steve.
> 
> I came across another way at 
> https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to work 
> as
> well. But your solution is simpler :)
> 
> Regards, Balu
> 
> 
> On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand  
> wrote: I
> put it in the file:/etc/init/quantum-server.conf
> 
> post-start script /usr/bin/quantum-ovs-cleanup exit 1 end script
> 
> 
> On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,
 
 It seems due to an OVS quantum bug, we need to run the utility
 quantum-ovs-cleanup before any of the quantum services start, upon a server
 reboot.
 
 Where is the best place to put this utility to run automatically when a 
 server 
 reboots so that the OVS issue is automatically addressed? A script in
 /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an 
 existing
 script?
 
 Thanks, Balu
 
 ___ Mailing list: 
 https://launchpad.net/~openstack Post to : 
 openstack@lists.launchpad.net 
 Unsubscribe : https://launchpad.net/~openstack More help   : 
 https://help.launchpad.net/ListHelp
 
> 

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific & HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF3+K8ACgkQoBCTJSAkVrFfRACgjiiRXjyRGfc2fGPJWTmJTjnK
89cAnRnstn0e/GiYz0Go13R2B+lBUWWw
=HmUJ
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Thanks Steve.

I came across another way at
https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems
to work as well. But your solution is simpler :)

Regards,
Balu


On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> I put it in the file:/etc/init/quantum-server.conf
>
> post-start script
> /usr/bin/quantum-ovs-cleanup
> exit 1
> end script
>
>
> On 04/24/2013 02:45 AM, Balamurugan V G wrote:
>> Hi,
>>
>> It seems due to an OVS quantum bug, we need to run the utility 
>> quantum-ovs-cleanup
>> before any of the quantum services start, upon a server reboot.
>>
>> Where is the best place to put this utility to run automatically when a 
>> server
>> reboots so that the OVS issue is automatically addressed? A script in 
>> /etc/init.d or
>> just plugging in a call for quantum-ovs-cleanup in an existing script?
>>
>> Thanks, Balu
>>
>> ___ Mailing list:
>> https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack More help   :
>> https://help.launchpad.net/ListHelp
>>
>
> - --
> 
>  Steve Heistand  NASA Ames Research Center
>  email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
>  ph: (650) 604-4369  Bldg. 258, Rm. 232-5
>  Scientific & HPC ApplicationP.O. Box 1
>  Development/OptimizationMoffett Field, CA 94035-0001
> 
>  "Any opinions expressed are those of our alien overlords, not my own."
>
> # For Remedy#
> #Action: Resolve#
> #Resolution: Resolved   #
> #Reason: No Further Action Required #
> #Tier1: User Code   #
> #Tier2: Other   #
> #Tier3: Assistance  #
> #Notification: None #
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2.0.14 (GNU/Linux)
>
> iEYEARECAAYFAlF36C8ACgkQoBCTJSAkVrGqMACg3Jm7tTBwx08oOSaiTVux7sRl
> cNMAn0OMrAElV2CZgqZFaayoeOitQMUn
> =TGy3
> -END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Ceilometer does not collect metrics like vcpus and memory

2013-04-24 Thread Giuseppe Civitella
Hi all,

I'm trying to collect Ceilometer's metrics from my test install of
Openstack Grizzly.
I'm able to collect most of the metrics from the central collector and the
nova-compute agents.
But I'm still missing some values like memory and vcpus.
Here is an excerpt from ceilometer's log on a nova-compute node:

http://paste.openstack.org/show/36567/

vcpus and memory_mb are empty values.
Any idea about how to get them?

Thanks a lot
Giuseppe
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum] traffic routes question

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I'm having trouble getting the floating IPs on the external network
accessible from the outside world. From the network node they work
fine, but somehow I doubt that means anything.

So my network node (also the controller node) has 4 ethernets: 1 for management,
1 for VM traffic, 1 that is an extra external connection in case I screw
things up, and an external connection that is bridged for the normal br-ex
interface.
The two internal ones are 10.X and 172.X types; the two external ones are on the
same subnet, .100 for the extra one, .101 for the 1 qrouter set up at the moment.
The external subnet was created with a gateway pointing at the next hop
upstream from the network node.

Setting up a VM with an external IP (.102) works OK: from the network node I can
ssh into it and all, and from other VMs I can get to the external floating IP.
But I doubt any of that means it's really set up correctly.

My question is this: the upstream routers all think the next hop for
the floating IPs is the .101 qrouter IP address. The router that gets
set up will route packets in from this side of things, will it not?

I've tried turning off iptables in case something was blocking traffic,
but that didn't help anything.

I've tried tcpdump on the gateway router device (qg-9dd1a800-c5/.101) and see
traffic going to the .102 IP when it's coming from the VMs, but nothing when
I try to connect to it from the outside world.
I don't see any traffic for .102 on the other external network either.

I don't have any access to the next hop upstream to see what packets
are going where.

But this should all work, correct?
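
(Two things worth capturing while testing, with placeholder values for the
floating IP and the router UUID:)

# on the network node: does inbound traffic for the floating IP reach br-ex at all?
tcpdump -n -i br-ex host <floating-ip>
# and does the router namespace hold a DNAT rule for it?
ip netns exec qrouter-<uuid> iptables -t nat -S | grep <floating-ip>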

thanks

s

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific & HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF38CcACgkQoBCTJSAkVrF16QCfXGes9kYSqi0jS3x5Es5Asrs+
fUUAnAkvRJXLY2eMN5N6+RuxZaWmzZe5
=/qEk
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury

hi,

I was misled by this:

[(keystone_user)]$ nova list
+--+++---+
| ID   | Name   | Status | Networks 
 |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | 
private=192.168.100.2 |
+--+++---+

This is a nova-network problem.

From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2013 16:12:47 +0200
Subject: [Openstack] problem with metadata and ping




Hi,

I am having a problem with the metadata service. I am using nova-network.
The console log says:

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name| Description |
+-+-+
| default | default |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+


  

  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I put it in the file:/etc/init/quantum-server.conf

post-start script
/usr/bin/quantum-ovs-cleanup
exit 1
end script
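
(After editing an upstart job it has to be re-read; on Ubuntu something
like the following, though upstart usually picks up changes by itself:)

initctl reload-configuration
status quantum-server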


On 04/24/2013 02:45 AM, Balamurugan V G wrote:
> Hi,
> 
> It seems due to an OVS quantum bug, we need to run the utility 
> quantum-ovs-cleanup
> before any of the quantum services start, upon a server reboot.
> 
> Where is the best place to put this utility to run automatically when a server
> reboots so that the OVS issue is automatically addressed? A script in 
> /etc/init.d or
> just plugging in a call for quantum-ovs-cleanup in an existing script?
> 
> Thanks, Balu
> 
> ___ Mailing list:
> https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net 
> Unsubscribe : https://launchpad.net/~openstack More help   :
> https://help.launchpad.net/ListHelp
> 

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific & HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 "Any opinions expressed are those of our alien overlords, not my own."

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF36C8ACgkQoBCTJSAkVrGqMACg3Jm7tTBwx08oOSaiTVux7sRl
cNMAn0OMrAElV2CZgqZFaayoeOitQMUn
=TGy3
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury
Hi,

I am having a problem with the metadata service. I am using nova-network.
The console log says:

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name| Description |
+-+-+
| default | default |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+


  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Fwd: [Quantum] Query regarding floating IP configuration

2013-04-24 Thread Sylvain Bauza

Hi Anil,

What you quoted is about L3 management and bridging and the need for
flexibility. It means that the physical NIC will have a whole bunch of
IP addresses, one per Quantum router you define.


Should you want to deploy a Controller on that node, you would need to
have a second NIC with external access (what is called the "API Network" in
the docpage Simon quoted).


There is also a need for a "Data Network", ideally on a third NIC (if you
want to provide separate IP ranges for the API and data networks), but you can
bypass that in a lab environment by assuming that your data network IP
range is externally reachable and consequently the management IP of the
controller/network node is the public IP (for the API purpose) (here,
the NIC2 IP address).
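
(For that last lab case, a sketch with a placeholder documentation-range
address: the public IP is simply moved from the physical NIC onto br-ex.)

ip addr del 203.0.113.21/24 dev eth0
ip addr add 203.0.113.21/24 dev br-ex
ip link set br-ex up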


Is it clearer ?

-Sylvain



On 18/04/2013 at 21:00, Anil Vishnoi wrote:

Re-sending it, with the hope of response :-)

-- Forwarded message --
From: *Anil Vishnoi* >

Date: Thu, Apr 18, 2013 at 1:59 AM
Subject: [Openstack][Quantum] Query regarding floating IP configuration
To: "openstack@lists.launchpad.net 
" >




Hi All,

I am trying to set up OpenStack in my lab, where I plan to run the
Controller+Network node on one physical machine and two compute nodes.
The Controller/Network physical machine has 2 NICs, one connected to the
external network (internet) and the second NIC on a private network.


OS Network Administrator Guide says "The node running quantum-l3-agent 
should not have an IP address manually configured on the NIC connected 
to the external network. Rather, you must have a range of IP addresses 
from the external network that can be used by OpenStack Networking for 
routers that uplink to the external network.". So my confusion is: if
I want to send any REST API call to my controller/network node from the
external network, I obviously need a public IP address. But the instruction
I quoted says that we should not have a manually configured IP address on the NIC.


Does it mean we can't create floating IP pool in this kind of setup? 
Or we need 3 NIC, 1 for private network, 1 for floating ip pool 
creation and 1 for external access to the machine?


OR is it that we can assign the public ip address to the br-ex, and 
remove it from physical NIC? Please let me know if my query is not clear.

--
Thanks
Anil



--
Thanks
Anil


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Ceilometer Install

2013-04-24 Thread Riki Arslan
Hi,

We are trying to install "ceilometer-2013.1~g2.tar.gz" which presumably has
Folsom compatibility.

The requirement is "python-keystoneclient>=0.2,<0.3" and we have version
0.2.3.

But, still, setup quits with the following message:

"error: Installed distribution python-keystoneclient 0.2.3 conflicts with
requirement python-keystoneclient>=0.1.2,<0.2"

The funny thing is, although pip-requires states
"python-keystoneclient>=0.2,<0.3", the error message complains that it is
not "python-keystoneclient>=0.1.2,<0.2".

Your help is greatly appreciated.

Thank you in advance.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum] Query regarding floating IP configuration

2013-04-24 Thread Daniels Cai
Anil

It is not necessarily to not configur an IP address for l3 agent ,
2 nics can work in this scenario .config an IP address as you like

Daniels Cai

http://dnscai.com

On 2013-4-24, at 1:48, Edgar Magana wrote:

Anil,

If you are testing multiple vNICs I will recommend you to use the following
image:
IMAGE_URLS=http://www.openvswitch.org/tty-quantum.tgz

In your localrc add the above string and you are all set up!

Thanks,

Edgar

From: Anil Vishnoi 
Date: Wednesday, April 17, 2013 1:29 PM
To: "openstack@lists.launchpad.net" 
Subject: [Openstack] [Quantum] Query regarding floating IP configuration


Hi All,

I am trying to set up OpenStack in my lab, where I plan to run the
Controller+Network node on one physical machine and two compute nodes. The
Controller/Network physical machine has 2 NICs, one connected to the
external network (internet) and a second NIC on the private network.

The OS Network Administrator Guide says "The node running quantum-l3-agent
should not have an IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of IP addresses from
the external network that can be used by OpenStack Networking for routers
that uplink to the external network." So my confusion is: if I want to
send any REST API call to my controller/network node from the external network,
I obviously need a public IP address. But the instruction I quoted says that we
should not have a manually configured IP address on the NIC.

Does it mean we can't create a floating IP pool in this kind of setup? Or do
we need 3 NICs: 1 for the private network, 1 for floating IP pool creation,
and 1 for external access to the machine?

OR is it that we can assign the public IP address to br-ex and remove
it from the physical NIC? Please let me know if my query is not clear.
-- 
Thanks
Anil

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Arindam Choudhury

Hi Razique,

Thanks a lot. So lesson learned: the standalone dnsmasq should not be
running, since it binds port 53 on all interfaces and blocks the dnsmasq
instance that nova-network spawns for the bridge.
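
To keep it from coming back after a reboot, one option is to disable the
standalone service, or to restrict which interfaces it grabs (a sketch for
Ubuntu; service names and paths may differ per distro):

service dnsmasq stop
update-rc.d dnsmasq disable
# or, if dnsmasq is still needed for something else, restrict it in
# /etc/dnsmasq.conf so it leaves the nova bridge alone:
#   bind-interfaces
#   except-interface=br100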
Subject: Re: [Openstack] error in nova-network start-up
From: razique.mahr...@gmail.com
Date: Wed, 24 Apr 2013 12:01:16 +0200
CC: openstack@lists.launchpad.net
To: arin...@live.com

Ok that's the Process 9033 - try a $ kill 9033 and you should be good!

Razique Mahroua - Nuage & Co
razique.mahroua@gmail.com
Tel : +33 9 72 37 94 15


On 24 Apr 2013, at 11:52, Arindam Choudhury wrote:

Hi,
Thanks for your reply,
Here is the output:

netstat -tanpu | grep LISTEN
tcp        0      0 0.0.0.0:4369    0.0.0.0:*       LISTEN   13837/epmd
tcp        0      0 0.0.0.0:45746   0.0.0.0:*       LISTEN   2104/rpc.statd
tcp        0      0 0.0.0.0:756     0.0.0.0:*       LISTEN   3123/ypbind
tcp        0      0 0.0.0.0:53      0.0.0.0:*       LISTEN   9033/dnsmasq
tcp        0      0 0.0.0.0:22      0.0.0.0:*       LISTEN   16165/sshd
tcp        0      0 0.0.0.0:16509   0.0.0.0:*       LISTEN   4267/libvirtd
tcp        0      0 0.0.0.0:38465   0.0.0.0:*       LISTEN   4577/glusterfs
tcp        0      0 0.0.0.0:38466   0.0.0.0:*       LISTEN   4577/glusterfs
tcp        0      0 0.0.0.0:38467   0.0.0.0:*       LISTEN   4577/glusterfs
tcp        0      0 0.0.0.0:56196   0.0.0.0:*       LISTEN   15134/beam.smp
tcp        0      0 0.0.0.0:24007   0.0.0.0:*       LISTEN   3053/glusterd
tcp        0      0 0.0.0.0:50503   0.0.0.0:*       LISTEN   -
tcp        0      0 0.0.0.0:8649    0.0.0.0:*       LISTEN   4081/gmond
tcp        0      0 0.0.0.0:24009   0.0.0.0:*       LISTEN   4572/glusterfsd
tcp        0      0 0.0.0.0:3306    0.0.0.0:*       LISTEN   4916/mysqld
tcp        0      0 0.0.0.0:111     0.0.0.0:*       LISTEN   2093/rpcbind
tcp6       0      0 :::53           :::*            LISTEN   9033/dnsmasq
tcp6       0      0 :::22           :::*            LISTEN   16165/sshd
tcp6       0      0 :::51129        :::*            LISTEN   2104/rpc.statd
tcp6       0      0 :::16509        :::*            LISTEN   4267/libvirtd
tcp6       0      0 :::5672         :::*            LISTEN   15134/beam.smp
tcp6       0      0 :::54121        :::*            LISTEN   -
tcp6       0      0 :::111          :::*            LISTEN   2093/rpcbind

Subject: Re: [Openstack] error in nova-network start-up
From: razique.mahr...@gmail.com
Date: Wed, 24 Apr 2013 11:37:15 +0200
CC: openstack@lists.launchpad.net
To: arin...@live.com

Hi Arindam, looks like the port you are trying to bind the process to is
already used. Can you run: $ netstat -tanpu | grep LISTEN and paste the
output? Thanks!

Razique Mahroua - Nuage & Co
razique.mahroua@gmail.com
Tel : +33 9 72 37 94 15
On 24 Apr 2013, at 11:13, Arindam Choudhury wrote:

Hi,

When I try to start the nova-network, I am getting this error:

2013-04-24 11:12:30.926 10327 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 7472
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 18
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 8
2013-04-24 11:12:31.183 10327 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for aopcso1:aopcso1.uab.es
root@aopcso1:/etc/nova# cat /var/log/nova/nova-network.log 
2013-04-24 11:12:22.140 11502 INFO nova.manager [-] Skipping periodic task 
_periodic_update_dns because its interval is negative
2013-04-24 11:12:22.141 11502 INFO nova.network.driver [-] Loading network 
driver 'nova.network.linux_net'
2013-04-24 11:12:22.147 11502 AUDIT nova.service [-] Starting network node 
(version 2013.1)
2013-04-24 11:12:23.590 11502 CRITICAL nova [-] Unexpected error while running 
command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env 
CONFIG_FILE=["/etc/nova/nova.conf"] NETWORK_ID=1 dnsmasq --strict-order 
--bind-interfaces --conf-file= --domain='novalocal' 
--pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1 
--except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Exit code: 2
Stdout: ''
Stderr: "2013-04-24 11:12:23.481 INFO nova.manager 
[req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] Skipping periodic task 
_periodic_update_dns because i

Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Razique Mahroua
Ok that's the Process 9033 - try a $ kill 9033 and you should be good!
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15


Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Arindam Choudhury
Hi,
Thanks for your reply,
Here is the output:

netstat -tanpu | grep LISTEN
tcp        0      0 0.0.0.0:4369    0.0.0.0:*       LISTEN   13837/epmd
tcp        0      0 0.0.0.0:45746   0.0.0.0:*       LISTEN   2104/rpc.statd
tcp        0      0 0.0.0.0:756     0.0.0.0:*       LISTEN   3123/ypbind
tcp        0      0 0.0.0.0:53      0.0.0.0:*       LISTEN   9033/dnsmasq
tcp        0      0 0.0.0.0:22      0.0.0.0:*       LISTEN   16165/sshd
tcp        0      0 0.0.0.0:16509   0.0.0.0:*       LISTEN   4267/libvirtd
tcp        0      0 0.0.0.0:38465   0.0.0.0:*       LISTEN   4577/glusterfs
tcp        0      0 0.0.0.0:38466   0.0.0.0:*       LISTEN   4577/glusterfs
tcp        0      0 0.0.0.0:38467   0.0.0.0:*       LISTEN   4577/glusterfs
tcp        0      0 0.0.0.0:56196   0.0.0.0:*       LISTEN   15134/beam.smp
tcp        0      0 0.0.0.0:24007   0.0.0.0:*       LISTEN   3053/glusterd
tcp        0      0 0.0.0.0:50503   0.0.0.0:*       LISTEN   -
tcp        0      0 0.0.0.0:8649    0.0.0.0:*       LISTEN   4081/gmond
tcp        0      0 0.0.0.0:24009   0.0.0.0:*       LISTEN   4572/glusterfsd
tcp        0      0 0.0.0.0:3306    0.0.0.0:*       LISTEN   4916/mysqld
tcp        0      0 0.0.0.0:111     0.0.0.0:*       LISTEN   2093/rpcbind
tcp6       0      0 :::53           :::*            LISTEN   9033/dnsmasq
tcp6       0      0 :::22           :::*            LISTEN   16165/sshd
tcp6       0      0 :::51129        :::*            LISTEN   2104/rpc.statd
tcp6       0      0 :::16509        :::*            LISTEN   4267/libvirtd
tcp6       0      0 :::5672         :::*            LISTEN   15134/beam.smp
tcp6       0      0 :::54121        :::*            LISTEN   -
tcp6       0      0 :::111          :::*            LISTEN   2093/rpcbind

Subject: Re: [Openstack] error in nova-network start-up
From: razique.mahr...@gmail.com
Date: Wed, 24 Apr 2013 11:37:15 +0200
CC: openstack@lists.launchpad.net
To: arin...@live.com

Hi Arindam, looks like the port you are trying to bind the process to is
already used. Can you run: $ netstat -tanpu | grep LISTEN and paste the
output? Thanks!


Razique Mahroua - Nuage & Co
razique.mahroua@gmail.com
Tel : +33 9 72 37 94 15


On 24 Apr 2013, at 11:13, Arindam Choudhury wrote:

Hi,

When I try to start the nova-network, I am getting this error:

2013-04-24 11:12:30.926 10327 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 7472
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 18
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 8
2013-04-24 11:12:31.183 10327 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for aopcso1:aopcso1.uab.es
root@aopcso1:/etc/nova# cat /var/log/nova/nova-network.log 
2013-04-24 11:12:22.140 11502 INFO nova.manager [-] Skipping periodic task 
_periodic_update_dns because its interval is negative
2013-04-24 11:12:22.141 11502 INFO nova.network.driver [-] Loading network 
driver 'nova.network.linux_net'
2013-04-24 11:12:22.147 11502 AUDIT nova.service [-] Starting network node 
(version 2013.1)
2013-04-24 11:12:23.590 11502 CRITICAL nova [-] Unexpected error while running 
command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env 
CONFIG_FILE=["/etc/nova/nova.conf"] NETWORK_ID=1 dnsmasq --strict-order 
--bind-interfaces --conf-file= --domain='novalocal' 
--pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1 
--except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Exit code: 2
Stdout: ''
Stderr: "2013-04-24 11:12:23.481 INFO nova.manager 
[req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] Skipping periodic task 
_periodic_update_dns because its interval is negative\n2013-04-24 11:12:23.482 
INFO nova.network.driver [req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] 
Loading network driver 'nova.network.linux_net'\n\ndnsmasq: failed to create 
listening socket for 192.168.100.1: La direcci\xc3\xb3n ya se est\xc3\xa1 
usando\n"
2013-04-24 11:12:23.590 11502 TRACE nova Traceback (most recent call last):
2013-04-24 11:12:23.590 11502 TRACE nova   File "/usr/bin/nova-network", line 
54, in 
2013-04

[Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Hi,

It seems that, due to an OVS quantum bug, we need to run the utility
quantum-ovs-cleanup before any of the quantum services start after a
server reboot.

Where is the best place to put this utility to run automatically when
a server reboots so that the OVS issue is automatically addressed? A
script in /etc/init.d or just plugging in a call for
quantum-ovs-cleanup in an existing script?

Thanks,
Balu
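
One approach I am considering, assuming Upstart on Ubuntu (the job name,
trigger and config paths below are illustrative, not from any package):

# /etc/init/quantum-ovs-cleanup.conf
description "Run quantum-ovs-cleanup once at boot"
start on starting quantum-plugin-openvswitch-agent
task
exec /usr/bin/quantum-ovs-cleanup \
    --config-file /etc/quantum/quantum.conf \
    --config-file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini

Since it is declared as a task triggered on "starting", Upstart blocks the
agent until the cleanup has finished.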

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Razique Mahroua
Hi Arindam, looks like the port you are trying to bind the process to is
already used. Can you run: $ netstat -tanpu | grep LISTEN and paste the
output? Thanks!
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15


[Openstack] error in nova-network start-up

2013-04-24 Thread Arindam Choudhury
Hi,

When I try to start the nova-network, I am getting this error:

2013-04-24 11:12:30.926 10327 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 7472
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 18
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 8
2013-04-24 11:12:31.183 10327 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for aopcso1:aopcso1.uab.es
root@aopcso1:/etc/nova# cat /var/log/nova/nova-network.log 
2013-04-24 11:12:22.140 11502 INFO nova.manager [-] Skipping periodic task 
_periodic_update_dns because its interval is negative
2013-04-24 11:12:22.141 11502 INFO nova.network.driver [-] Loading network 
driver 'nova.network.linux_net'
2013-04-24 11:12:22.147 11502 AUDIT nova.service [-] Starting network node 
(version 2013.1)
2013-04-24 11:12:23.590 11502 CRITICAL nova [-] Unexpected error while running 
command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env 
CONFIG_FILE=["/etc/nova/nova.conf"] NETWORK_ID=1 dnsmasq --strict-order 
--bind-interfaces --conf-file= --domain='novalocal' 
--pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1 
--except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Exit code: 2
Stdout: ''
Stderr: "2013-04-24 11:12:23.481 INFO nova.manager 
[req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] Skipping periodic task 
_periodic_update_dns because its interval is negative\n2013-04-24 11:12:23.482 
INFO nova.network.driver [req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] 
Loading network driver 'nova.network.linux_net'\n\ndnsmasq: failed to create 
listening socket for 192.168.100.1: La direcci\xc3\xb3n ya se est\xc3\xa1 
usando\n"
2013-04-24 11:12:23.590 11502 TRACE nova Traceback (most recent call last):
2013-04-24 11:12:23.590 11502 TRACE nova   File "/usr/bin/nova-network", line 
54, in 
2013-04-24 11:12:23.590 11502 TRACE nova service.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 689, in wait
2013-04-24 11:12:23.590 11502 TRACE nova _launcher.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 209, in wait
2013-04-24 11:12:23.590 11502 TRACE nova super(ServiceLauncher, self).wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 179, in wait
2013-04-24 11:12:23.590 11502 TRACE nova service.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
2013-04-24 11:12:23.590 11502 TRACE nova return self._exit_event.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2013-04-24 11:12:23.590 11502 TRACE nova return hubs.get_hub().switch()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
2013-04-24 11:12:23.590 11502 TRACE nova return self.greenlet.switch()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
2013-04-24 11:12:23.590 11502 TRACE nova result = function(*args, **kwargs)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 147, in run_server
2013-04-24 11:12:23.590 11502 TRACE nova server.start()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 429, in start
2013-04-24 11:12:23.590 11502 TRACE nova self.manager.init_host()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1602, in 
init_host
2013-04-24 11:12:23.590 11502 TRACE nova super(FlatDHCPManager, 
self).init_host()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 345, in 
init_host
2013-04-24 11:12:23.590 11502 TRACE nova self._setup_network_on_host(ctxt, 
network)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1617, in 
_setup_network_on_host
2013-04-24 11:12:23.590 11502 TRACE nova self.driver.update_dhcp(elevated, 
dev, network)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 938, in 
update_dhcp
2013-04-24 11:12:23.590 11502 TRACE nova restart_dhcp(context, dev, 
network_ref)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/lo

Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Thanks Razique, I'll try this as well. I am also looking at out-of-the-box
options like file injection and the metadata service.

Regards,
Balu


On Wed, Apr 24, 2013 at 1:57 PM, Razique Mahroua
wrote:

> Hi Balu,
> check this out
> http://www.cloudbase.it/cloud-init-for-windows-instances/
>
> It's a great tool, I just had issues myself with the Admin. password
> changing
>
> Regards,
> Razique
>
> *Razique Mahroua** - **Nuage & Co*
> razique.mahr...@gmail.com
> Tel : +33 9 72 37 94 15
>
>
> On 24 Apr 2013, at 08:17, Balamurugan V G wrote:
>
> Hi,
>
> I am able to get File Injection to work during CentOS or Ubuntu VM
> instance creation. But it doesn't work for a Windows VM. Is there a way to
> get it to work for a Windows VM, or is it going to be a limitation we have
> to live with, perhaps due to filesystem differences?
>
> Regards,
> Balu
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Razique Mahroua
Hi Balu,
check this out: http://www.cloudbase.it/cloud-init-for-windows-instances/

It's a great tool, I just had issues myself with the Admin. password changing.

Regards,
Razique
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
I booted an Ubuntu image in which I had made sure that there was no
pre-existing route for 169.254.0.0/16. But it's getting the route from DHCP
once it boots up. So it's the DHCP server which is sending this route to
the VM.

Regards,
Balu


On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G
wrote:

> Hi Salvatore,
>
> Thanks for the response. I do not have enable_isolated_metadata_proxy
> anywhere under /etc/quantum and /etc/nova. The closest I see is
> 'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
> commented out. What do you mean by link-local address?
>
> Like you said, I suspect that the image has the route. This was a
> snapshot taken in a Folsom setup. So it's possible that Folsom injected
> this route and, when I took the snapshot, it became part of the snapshot. I
> then copied over this snapshot to a new Grizzly setup. Let me check the
> image and remove the route from it if it is there. Thanks for the hint
> again.
>
> Regards,
> Balu
>
>
>
> On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando 
> wrote:
>
>> The dhcp agent will set a route to 169.254.0.0/16 if
>> enable_isolated_metadata_proxy=True.
>> In that case the dhcp port ip will be the nexthop for that route.
>>
>> Otherwise, it might be that your image has a 'builtin' route to such a
>> cidr.
>> What's your nexthop for the link-local address?
>>
>> Salvatore
>>
>>
>> On 24 April 2013 08:00, Balamurugan V G  wrote:
>>
>>> Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
>>> the VM's routing table, I could access the metadata service!
>>>
>>> The route for 169.254.0.0/16 is added automatically when the instance
>>> boots up, so I assume it's coming from the DHCP. Any idea how this can be
>>> suppressed?
>>>
>>> Strangely though, I do not see this route in a WindowsXP VM booted in
>>> the same network as the earlier Ubuntu VM, and the Windows VM can reach the
>>> metadata service without me doing anything. The issue is with the Ubuntu
>>> VM.
>>>
>>> Thanks,
>>> Balu
>>>
>>>
>>>
>>> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:
>>>
The VM should not have a routing table entry for 169.254.0.0/16. If it
does, I'm not sure how it got there, unless it was added by something other
than DHCP. It seems like that is your problem, as the VM is ARPing directly
for that address rather than for the default gateway.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
 balamuruga...@gmail.com> wrote:

> Thanks Aaron.
>
> I am perhaps not configuring it right then. I am using an Ubuntu 12.04
> host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
> see that the VM's routing table has an entry for 169.254.0.0/16 but I
> can't ping 169.254.169.254 from the VM. I am using a single node setup with
> two NICs. 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.
>
> These are my metadata related configurations.
>
> */etc/nova/nova.conf *
> metadata_host = 10.5.12.20
> metadata_listen = 127.0.0.1
> metadata_listen_port = 8775
> metadata_manager=nova.api.manager.MetadataManager
> service_quantum_metadata_proxy = true
> quantum_metadata_proxy_shared_secret = metasecret123
>
> */etc/quantum/quantum.conf*
> allow_overlapping_ips = True
>
> */etc/quantum/l3_agent.ini*
> use_namespaces = True
> auth_url = http://10.5.3.230:35357/v2.0
> auth_region = RegionOne
> admin_tenant_name = service
> admin_user = quantum
> admin_password = service_pass
> metadata_ip = 10.5.12.20
>
> */etc/quantum/metadata_agent.ini*
> auth_url = http://10.5.3.230:35357/v2.0
> auth_region = RegionOne
> admin_tenant_name = service
> admin_user = quantum
> admin_password = service_pass
> nova_metadata_ip = 127.0.0.1
> nova_metadata_port = 8775
> metadata_proxy_shared_secret = metasecret123
>
>
> I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
> I ping 169.254.169.254 from VM, in the host's router namespace, I see the
> ARP request but no response.
>
> root@openstack-dev:~# ip netns exec
> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric Ref
> Use Iface
> 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
> 0 qg-193bb8ee-f5
> 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
> 0 qg-193bb8ee-f5
> 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
> 0 qr-59e69986-6e
> root@openstack-dev:~# ip netns exec
> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
> tcpdump: verbose output suppressed, use -v or -vv for full protocol
> decode
> listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
> 65535 bytes
> ^C23:32:09.638289 ARP, Request who-has 192

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
The routing table in the VM is:

root@vm:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.2.1     0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
root@vm:~#
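
For what it's worth, the metric 1000 on the link-local route matches what
the avahi-autoipd if-up hook adds on Ubuntu desktop images, rather than a
route pushed by DHCP, so that is worth checking in the guest too:

grep -r 169.254 /etc/network/if-up.d/
# on stock images /etc/network/if-up.d/avahi-autoipd typically runs:
#   route add -net 169.254.0.0 netmask 255.255.0.0 dev $IFACE metric 1000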

And the routing table in the OpenStack node(single node host) is:

root@openstack-dev:~# ip netns exec
qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.5.12.1       0.0.0.0         UG    0      0        0 qg-193bb8ee-f5
10.5.12.0       0.0.0.0         255.255.255.0   U     0      0        0 qg-193bb8ee-f5
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 qr-59e69986-6e
root@openstack-dev:~#

Regards,
Balu




On Wed, Apr 24, 2013 at 12:41 PM, Aaron Rosen  wrote:

> Yup, that's only if your subnet does not have a default gateway set.
> Providing the output of route -n would be helpful.
>
>
> On Wed, Apr 24, 2013 at 12:08 AM, Salvatore Orlando 
> wrote:
>
>> The dhcp agent will set a route to 169.254.0.0/16 if
>> enable_isolated_metadata_proxy=True.
>> In that case the dhcp port ip will be the nexthop for that route.
>>
>> Otherwise, it might be that your image has a 'builtin' route to such a
>> cidr.
>> What's your nexthop for the link-local address?
>>
>> Salvatore
>>
>>
>> On 24 April 2013 08:00, Balamurugan V G  wrote:
>>
>>> Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
>>> the VM's routing table, I could access the metadata service!
>>>
>>> The route for 169.254.0.0/16 is added automatically when the instance
>>> boots up, so I assume it's coming from the DHCP. Any idea how this can be
>>> suppressed?
>>>
>>> Strangely though, I do not see this route in a WindowsXP VM booted in
>>> the same network as the earlier Ubuntu VM, and the Windows VM can reach the
>>> metadata service without me doing anything. The issue is with the Ubuntu
>>> VM.
>>>
>>> Thanks,
>>> Balu
>>>
>>>
>>>
>>> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:
>>>
The VM should not have a routing table entry for 169.254.0.0/16. If it
does, I'm not sure how it got there, unless it was added by something other
than DHCP. It seems like that is your problem, as the VM is ARPing directly
for that address rather than for the default gateway.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
 balamuruga...@gmail.com> wrote:

> Thanks Aaron.
>
> I am perhaps not configuring it right then. I am using an Ubuntu 12.04
> host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
> see that the VM's routing table has an entry for 169.254.0.0/16 but I
> can't ping 169.254.169.254 from the VM. I am using a single node setup with
> two NICs. 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.
>
> These are my metadata related configurations.
>
> */etc/nova/nova.conf *
> metadata_host = 10.5.12.20
> metadata_listen = 127.0.0.1
> metadata_listen_port = 8775
> metadata_manager=nova.api.manager.MetadataManager
> service_quantum_metadata_proxy = true
> quantum_metadata_proxy_shared_secret = metasecret123
>
> */etc/quantum/quantum.conf*
> allow_overlapping_ips = True
>
> */etc/quantum/l3_agent.ini*
> use_namespaces = True
> auth_url = http://10.5.3.230:35357/v2.0
> auth_region = RegionOne
> admin_tenant_name = service
> admin_user = quantum
> admin_password = service_pass
> metadata_ip = 10.5.12.20
>
> */etc/quantum/metadata_agent.ini*
> auth_url = http://10.5.3.230:35357/v2.0
> auth_region = RegionOne
> admin_tenant_name = service
> admin_user = quantum
> admin_password = service_pass
> nova_metadata_ip = 127.0.0.1
> nova_metadata_port = 8775
> metadata_proxy_shared_secret = metasecret123
>
>
> I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
> I ping 169.254.169.254 from VM, in the host's router namespace, I see the
> ARP request but no response.
>
> root@openstack-dev:~# ip netns exec
> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric Ref
> Use Iface
> 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
> 0 qg-193bb8ee-f5
> 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
> 0 qg-193bb8ee-f5
> 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
> 0 qr-59e69986-6e
> root@openstack-dev:~# ip netns exec
> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
> tcpdump: verbose output suppressed, use -v or -vv for full protocol
> decode
>>

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
I do not have anything running in the VM which could add this route. With
the route removed, when I disable and enable networking, so that it gets
back the details from the DHCP server, I see that the route is getting added
again.

So DHCP seems to be my issue. I guess this rules out any pre-existing route
in the image as well.

Regards,
Balu
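
A way to double-check what the DHCP server actually pushed, assuming a
dhclient-based Ubuntu guest (the lease file name varies with the interface):

grep -i -A2 route /var/lib/dhcp/dhclient.eth0.leases
# DHCP-supplied routes show up in the lease as "option routers ..." or
# "option rfc3442-classless-static-routes ..."; if neither mentions
# 169.254, the route is coming from somewhere else in the image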


On Wed, Apr 24, 2013 at 12:39 PM, Aaron Rosen  wrote:

> Hrm, I'd do quantum subnet-list and see if you happened to create a subnet
> 169.254.0.0/16? Otherwise I think there is probably some software in your
> vm image that is adding this route. One thing to test is if you delete this
> route and then rerun dhclient to see if it's added again via dhcp.
>
>
> On Wed, Apr 24, 2013 at 12:00 AM, Balamurugan V G  > wrote:
>
>> Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
>> the VM's routing table, I could access the metadata service!
>>
>> The route for 169.254.0.0/16 is added automatically when the instance
>> boots up, so I assume it's coming from the DHCP. Any idea how this can be
>> suppressed?
>>
>> Strangely though, I do not see this route in a WindowsXP VM booted in the
>> same network as the earlier Ubuntu VM, and the Windows VM can reach the
>> metadata service without me doing anything. The issue is with the Ubuntu
>> VM.
>>
>> Thanks,
>> Balu
>>
>>
>>
>> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:
>>
>>> The VM should not have a routing table entry for 169.254.0.0/16. If it
>>> does, I'm not sure how it got there, unless it was added by something other
>>> than DHCP. It seems like that is your problem, as the VM is ARPing directly
>>> for that address rather than for the default gateway.
>>>
>>>
>>> On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
>>> balamuruga...@gmail.com> wrote:
>>>
 Thanks Aaron.

I am perhaps not configuring it right then. I am using an Ubuntu 12.04
host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
ping 169.254.169.254 from the VM. I am using a single node setup with two
NICs. 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Hi Salvatore,

Thanks for the response. I do not have enable_isolated_metadata_proxy
anywhere under /etc/quantum and /etc/nova. The closest I see is
'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
commented out. What do you mean by link-local address?

Like you said, I suspect that the image has the route. This was a
snapshot taken in a Folsom setup. So it's possible that Folsom injected
this route and, when I took the snapshot, it became part of the snapshot. I
then copied over this snapshot to a new Grizzly setup. Let me check the
image and remove the route from it if it is there. Thanks for the hint
again.

Regards,
Balu



On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando wrote:

> The dhcp agent will set a route to 169.254.0.0/16 if
> enable_isolated_metadata_proxy=True.
> In that case the dhcp port ip will be the nexthop for that route.
>
> Otherwise, it might be that your image has a 'builtin' route to such a
> cidr.
> What's your nexthop for the link-local address?
>
> Salvatore
>
>
> On 24 April 2013 08:00, Balamurugan V G  wrote:
>
>> Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
>> the VM's routing table, I could access the metadata service!
>>
>> The route for 169.254.0.0/16 is added automatically when the instance
>> boots up, so I assume it's coming from the DHCP. Any idea how this can be
>> suppressed?
>>
>> Strangely though, I do not see this route in a WindowsXP VM booted in the
>> same network as the earlier Ubuntu VM, and the Windows VM can reach the
>> metadata service without me doing anything. The issue is with the Ubuntu
>> VM.
>>
>> Thanks,
>> Balu
>>
>>
>>
>> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:
>>
>>> The VM should not have a routing table entry for 169.254.0.0/16. If it
>>> does, I'm not sure how it got there, unless it was added by something other
>>> than DHCP. It seems like that is your problem, as the VM is ARPing directly
>>> for that address rather than for the default gateway.
>>>
>>>
>>> On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
>>> balamuruga...@gmail.com> wrote:
>>>
 Thanks Aaron.

I am perhaps not configuring it right then. I am using an Ubuntu 12.04
host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
ping 169.254.169.254 from the VM. I am using a single node setup with two
NICs. 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
>>>

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Yup, that's only if your subnet does not have a default gateway set.
Providing the output of route -n would be helpful.


On Wed, Apr 24, 2013 at 12:08 AM, Salvatore Orlando wrote:

> The dhcp agent will set a route to 169.254.0.0/16 if
> enable_isolated_metadata_proxy=True.
> In that case the dhcp port ip will be the nexthop for that route.
>
> Otherwise, it might be that your image has a 'builtin' route to such a
> cidr.
> What's your nexthop for the link-local address?
>
> Salvatore
>
>
> On 24 April 2013 08:00, Balamurugan V G  wrote:
>
>> Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
>> the VM's routing table, I could access the metadata service!
>>
>> The route for 169.254.0.0/16 is added automatically when the instance
>> boots up, so I assume it's coming from the DHCP. Any idea how this can be
>> suppressed?
>>
>> Strangely though, I do not see this route in a WindowsXP VM booted in the
>> same network as the earlier Ubuntu VM, and the Windows VM can reach the
>> metadata service without me doing anything. The issue is with the Ubuntu
>> VM.
>>
>> Thanks,
>> Balu
>>
>>
>>
>> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:
>>
>>> The VM should not have a routing table entry for 169.254.0.0/16. If it
>>> does, I'm not sure how it got there, unless it was added by something other
>>> than DHCP. It seems like that is your problem, as the VM is ARPing directly
>>> for that address rather than for the default gateway.
>>>
>>>
>>> On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
>>> balamuruga...@gmail.com> wrote:
>>>
 Thanks Aaron.

I am perhaps not configuring it right then. I am using an Ubuntu 12.04
host and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I
see that the VM's routing table has an entry for 169.254.0.0/16 but I can't
ping 169.254.169.254 from the VM. I am using a single node setup with two
NICs. 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen wrote:

> Yup, If your host supports namespaces this can be done via the
> quantum-metadata-agent.  The following setting is also required in your
>  nova.conf: service_quantum_met

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Salvatore Orlando
The dhcp agent will set a route to 169.254.0.0/16 if
enable_isolated_metadata_proxy=True.
In that case the dhcp port ip will be the nexthop for that route.

Otherwise, it might be that your image has a 'builtin' route to such a
cidr.
What's your nexthop for the link-local address?

Salvatore
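
To see that nexthop on the guest, something like:

ip route show | grep '^169.254'
# e.g. "169.254.0.0/16 dev eth0 scope link metric 1000" means there is no
# nexthop at all: the route was added locally, not learned from DHCP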


On 24 April 2013 08:00, Balamurugan V G  wrote:

> Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
> the VM's routing table, I could access the metadata service!
>
> The route for 169.254.0.0/16 is added automatically when the instance
> boots up, so I assume it's coming from the DHCP. Any idea how this can be
> suppressed?
>
> Strangely though, I do not see this route in a WindowsXP VM booted in the
> same network as the earlier Ubuntu VM, and the Windows VM can reach the
> metadata service without me doing anything. The issue is with the Ubuntu
> VM.
>
> Thanks,
> Balu
>
>
>
> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:
>
>> The VM should not have a routing table entry for 169.254.0.0/16. If it
>> does, I'm not sure how it got there, unless it was added by something other
>> than DHCP. It seems like that is your problem, as the VM is ARPing directly
>> for that address rather than for the default gateway.
>>
>>
>> On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
>> balamuruga...@gmail.com> wrote:
>>
>>> Thanks Aaron.
>>>
>>> I am perhaps not configuring it right then. I am using an Ubuntu 12.04 host
>>> and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I see
>>> that the VM's routing table has an entry for 169.254.0.0/16 but I can't ping
>>> 169.254.169.254 from the VM. I am using a single node setup with two
>>> NICs. 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.
>>>
>>> These are my metadata related configurations.
>>>
>>> */etc/nova/nova.conf *
>>> metadata_host = 10.5.12.20
>>> metadata_listen = 127.0.0.1
>>> metadata_listen_port = 8775
>>> metadata_manager=nova.api.manager.MetadataManager
>>> service_quantum_metadata_proxy = true
>>> quantum_metadata_proxy_shared_secret = metasecret123
>>>
>>> */etc/quantum/quantum.conf*
>>> allow_overlapping_ips = True
>>>
>>> */etc/quantum/l3_agent.ini*
>>> use_namespaces = True
>>> auth_url = http://10.5.3.230:35357/v2.0
>>> auth_region = RegionOne
>>> admin_tenant_name = service
>>> admin_user = quantum
>>> admin_password = service_pass
>>> metadata_ip = 10.5.12.20
>>>
>>> */etc/quantum/metadata_agent.ini*
>>> auth_url = http://10.5.3.230:35357/v2.0
>>> auth_region = RegionOne
>>> admin_tenant_name = service
>>> admin_user = quantum
>>> admin_password = service_pass
>>> nova_metadata_ip = 127.0.0.1
>>> nova_metadata_port = 8775
>>> metadata_proxy_shared_secret = metasecret123
>>>
>>>
>>> I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
>>> ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
>>> request but no response.
>>>
>>> root@openstack-dev:~# ip netns exec
>>> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
>>> Kernel IP routing table
>>> Destination Gateway Genmask Flags Metric RefUse
>>> Iface
>>> 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
>>> qg-193bb8ee-f5
>>> 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
>>> qg-193bb8ee-f5
>>> 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
>>> qr-59e69986-6e
>>> root@openstack-dev:~# ip netns exec
>>> qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
>>> tcpdump: verbose output suppressed, use -v or -vv for full protocol
>>> decode
>>> listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
>>> 65535 bytes
>>> ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
>>> length 28
>>> 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
>>> Unknown), length 28
>>> 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
>>> length 28
>>> 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
>>> length 28
>>> 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
>>> length 28
>>> 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
>>> length 28
>>>
>>> 6 packets captured
>>> 6 packets received by filter
>>> 0 packets dropped by kernel
>>> root@openstack-dev:~#
>>>
>>>
>>> Any help will be greatly appreciated.
>>>
>>> Thanks,
>>> Balu
>>>
>>>
>>> On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen  wrote:
>>>
 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G <
 balamuruga...@gmail.com> wrote:

> Hi,
>
> In Grizzly, when using quantum and overlapping IPs, does metadata
> service work? This wasnt working in Folsom.
>
> Thanks,
> Balu
>
> _

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Hrm, I'd do quantum subnet-list and see if you happened to create a subnet
169.254.0.0/16? Otherwise I think there is probably some software in your
vm image that is adding this route. One thing to test is if you delete this
route and then rerun dhclient to see if it's added again via dhcp.


On Wed, Apr 24, 2013 at 12:00 AM, Balamurugan V G
wrote:

> Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
> the VM's routing table, I could access the metadata service!
>
> The route for 169.254.0.0/16 is added automatically when the instance
> boots up, so I assume it's coming from the DHCP. Any idea how this can be
> suppressed?
>
> Strangely though, I do not see this route in a WindowsXP VM booted in the
> same network as the earlier Ubuntu VM, and the Windows VM can reach the
> metadata service without me doing anything. The issue is with the Ubuntu
> VM.
>
> Thanks,
> Balu
>
>
>
> On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:
>
>> The VM should not have a routing table entry for 169.254.0.0/16. If it
>> does, I'm not sure how it got there, unless it was added by something other
>> than DHCP. It seems like that is your problem, as the VM is ARPing directly
>> for that address rather than for the default gateway.
>>
>>
>> On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G <
>> balamuruga...@gmail.com> wrote:
>>
>>> [snip: remainder of the quoted thread, identical to the copies quoted
>>> above]

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
the VM's routing table, I could access the metadata service!

The route for 169.254.0.0/16 is added automatically when the instance boots
up, so I assume it's coming from DHCP. Any idea how this can be
suppressed?

Strangely though, I do not see this route in a Windows XP VM booted on the
same network as the earlier Ubuntu VM, and the Windows VM can reach the
metadata service without me doing anything. The issue is only with the
Ubuntu VM.

Thanks,
Balu
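
A likely answer to the suppression question above, as a hedged sketch: on
Ubuntu desktop guests this route is commonly installed not by DHCP but by an
avahi/zeroconf hook under /etc/network/if-up.d/. The exact file name below
is an assumption that depends on the image, and curl in the guest is assumed
to be installed:

# in the guest: find which if-up hook installs the link-local route
grep -r "169.254" /etc/network/if-up.d/

# if it is an avahi hook, disabling it stops the route on future boots
# (the file name here is an assumption; use whatever the grep reports)
chmod -x /etc/network/if-up.d/avahi-daemon

# drop the stale route and confirm the metadata service now answers
ip route del 169.254.0.0/16
curl -s http://169.254.169.254/latest/meta-data/instance-id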



On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen  wrote:

> [snip: remainder of the quoted thread, identical to the messages above]
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp