Yup, if your host supports namespaces this can be done via the
quantum-metadata-agent. The following setting is also required in your
nova.conf: service_quantum_metadata_proxy=True
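For reference, a minimal sketch of the relevant nova.conf fragment (Grizzly-era option names; the shared-secret line is only needed if the metadata agent is configured with one):

```
# /etc/nova/nova.conf
service_quantum_metadata_proxy = True
# Only if quantum-metadata-agent sets metadata_proxy_shared_secret:
# quantum_metadata_proxy_shared_secret = <secret>
```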
On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G
balamuruga...@gmail.com wrote:
Hi,
In Grizzly, when using
Hi,
I am able to get file injection to work during CentOS or Ubuntu VM
instance creation, but it doesn't work for a Windows VM. Is there a way to
get it to work for a Windows VM, or is it going to be a limitation we have to
live with, perhaps due to filesystem differences?
Regards,
Balu
Thanks Aaron.
I am perhaps not configuring it right then. I am using an Ubuntu 12.04 host
and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I see that
the VM's routing table has an entry for 169.254.0.0/16 but I can't ping
169.254.169.254 from the VM. I am using a single node setup with
Hi Balamurugan,
Which edition of nova are you running? Is there any trace log in
nova-compute.log (default path: /var/log/nova/nova-compute.log)?
And which edition is your Windows VM (WinXP/Win7/Win8)? If it is Win7 or
Win8, the injected files may exist in the system reserved partition, you
The VM should not have a routing table entry for 169.254.0.0/16; if it does,
I'm not sure how it got there unless it was added by something other than
DHCP. It seems like that is your problem, as the VM is ARPing directly for
that address rather than the default gw.
On Tue, Apr 23, 2013 at 11:34
Hi Wangpan,
Thanks for the response. The file injection is actually working; sorry, my
bad, I was setting the dst-path incorrectly. I am using Nova 2013.1 (Grizzly)
and a Windows XP 32-bit VM.
When I used the following command, it worked:
nova boot --flavor f43c36f9-de6a-42f4--edcedafe371a
Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
the VM's routing table, I could access the metadata service!
The route for 169.254.0.0/16 is added automatically when the instance boots
up, so I assume it's coming from the DHCP. Any idea how this can be
suppressed?
Hrm, I'd do a quantum subnet-list and see if you happened to create a subnet
for 169.254.0.0/16. Otherwise I think there is probably some software in your
VM image that is adding this route. One thing to test: delete this
route and then rerun dhclient to see if it's added again via DHCP.
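That test boils down to something like this from inside the VM (a sketch; the interface name eth0 is an assumption):

```
# Drop the suspect route, then release and renew the DHCP lease:
ip route del 169.254.0.0/16
dhclient -r eth0 && dhclient eth0
# If the route reappears here, DHCP is pushing it:
route -n | grep 169.254
```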
On
The dhcp agent will set a route to 169.254.0.0/16 if
enable_isolated_metadata_proxy=True.
In that case the dhcp port ip will be the nexthop for that route.
Otherwise, your image might have a 'builtin' route to that
CIDR.
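One way to tell the two cases apart is to look at what the dhcp agent handed to dnsmasq (the path is the Grizzly default state directory; the network UUID is a placeholder):

```
# Classless static routes (DHCP option 121) pushed by the dhcp agent, if any:
grep 169.254 /var/lib/quantum/dhcp/<network-uuid>/opts
```

If nothing shows up there but the VM still gets the route, it is baked into the image.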
What's your nexthop for the link-local address?
Salvatore
On
Yup, that's only if your subnet does not have a default gateway set.
Providing the output of route -n would be helpful.
On Wed, Apr 24, 2013 at 12:08 AM, Salvatore Orlando sorla...@nicira.com wrote:
The dhcp agent will set a route to 169.254.0.0/16 if
enable_isolated_metadata_proxy=True.
In
Hi Salvatore,
Thanks for the response. I do not have enable_isolated_metadata_proxy
anywhere under /etc/quantum and /etc/nova. The closest I see is
'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
commented out. What do you mean by link-local address?
Like you said, I
I do not have anything running in the VM which could add this route. With
the route removed, when I disable and enable networking so that it gets
the details back from the DHCP server, I see that the route is getting added
again.
So DHCP seems to be my issue. I guess this rules out any pre-existing
The routing table in the VM is:
root@vm:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
0.0.0.0         192.168.2.1     0.0.0.0         UG     0       0    0    eth0
169.254.0.0     0.0.0.0         255.255.0.0     U      1000    0
I booted an Ubuntu image in which I had made sure that there was no
pre-existing route for 169.254.0.0/16. But it's getting the route from DHCP
once it boots up. So it's the DHCP server which is sending this route to
the VM.
Regards,
Balu
On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G
Hi Balu,
check this out: http://www.cloudbase.it/cloud-init-for-windows-instances/
It's a great tool, I just had issues myself with the Admin password changing.
Regards,
Razique
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
On 24 Apr 2013, at 08:17, Balamurugan V G
Thanks Razique, I'll try this as well. I am also trying for out of the box
options like file injection and meta-data service.
Regards,
Balu
On Wed, Apr 24, 2013 at 1:57 PM, Razique Mahroua
razique.mahr...@gmail.com wrote:
Hi Balu,
check this out
Hi,
When I try to start the nova-network, I am getting this error:
2013-04-24 11:12:30.926 10327 AUDIT nova.compute.resource_tracker [-] Auditing
locally available compute resources
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free ram
(MB): 7472
2013-04-24
Hi Arindam,
looks like the port you are trying to bind the process to is already used, can you run:
$ netstat -tanpu | grep LISTEN
and paste the output?
thanks!
On 24 Apr 2013, at 11:13, Arindam Choudhury arin...@live.com
Hi,
It seems that, due to an OVS quantum bug, we need to run the utility
quantum-ovs-cleanup before any of the quantum services start after a
server reboot.
Where is the best place to put this utility to run automatically when
a server reboots so that the OVS issue is automatically addressed? A
script
Hi,
Thanks for your reply,
Here is the output:
netstat -tanpu | grep LISTEN
tcp    0    0 0.0.0.0:4369     0.0.0.0:*    LISTEN    13837/epmd
tcp    0    0 0.0.0.0:45746    0.0.0.0:*    LISTEN    2104/rpc.statd
tcp    0    0
Ok, that's process 9033 - try a $ kill 9033 and you should be good!
On 24 Apr 2013, at 11:52, Arindam Choudhury arin...@live.com wrote:
Hi,
Thanks for your reply,
Here is the output:
netstat -tanpu | grep LISTEN
tcp 0 0
Hi Razique,
Thanks a lot. So lesson learned, dnsmasq should not be running.
Subject: Re: [Openstack] error in nova-network start-up
From: razique.mahr...@gmail.com
Date: Wed, 24 Apr 2013 12:01:16 +0200
CC: openstack@lists.launchpad.net
To: arin...@live.com
Ok that's the Process 9033 - try a $
Anil
It is not necessary to leave the IP address unconfigured for the l3 agent;
2 NICs can work in this scenario. Configure an IP address as you like.
Daniels Cai
http://dnscai.com
On 2013-4-24, at 1:48, Edgar Magana emag...@plumgrid.com wrote:
Anil,
If you are testing multiple vNICs, I would recommend you use
Hi,
We are trying to install ceilometer-2013.1~g2.tar.gz which presumably has
Folsom compatibility.
The requirement is python-keystoneclient>=0.2,<0.3 and we have version
0.2.3.
But, still, setup quits with the following message:
error: Installed distribution python-keystoneclient 0.2.3
Hi Anil,
What you quoted is about L3 management and bridging and the need for
flexibility. It means that the physical NIC will have a whole bunch of
IP addresses, one per Quantum router you define.
Should you want to deploy a Controller on that node, you would need to
have a second NIC with
Hi,
I am having a problem with the metadata service. I am using nova-network. The console
log says:
Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloudsetup: checking
I put it in the file /etc/init/quantum-server.conf:

post-start script
  /usr/bin/quantum-ovs-cleanup
  exit 1
end script
On 04/24/2013 02:45 AM, Balamurugan V G wrote:
Hi,
It seems due to an OVS quantum bug, we need to run the
hi,
I was misled by this:
[(keystone_user)]$ nova list
+--+++---+
| ID | Name | Status | Networks
|
I'm having trouble getting the floating IPs on the external network
accessible from the outside world. From the network node they work
fine, but somehow I doubt that means anything.
So my network node (also the controller node) has 4 ethernets, 1 for
Hi all,
I'm trying to collect Ceilometer's metrics from my test install of
Openstack Grizzly.
I'm able to collect most of the metrics from the central collector and the
nova-compute agents.
But I'm still missing some values like memory and vcpus.
This is an abstract from ceilometer's log on a
Thanks Steve.
I came across another way at
https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems
to work as well. But your solution is simpler :)
Regards,
Balu
On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand steve.heist...@nasa.gov wrote:
It was mentioned to me (by Mr Mihaiescu) that this only works if the controller and
network node are on the same machine. For the compute nodes, I had forgotten
it's in a different place. On them I am doing it in a pre-start script in
Right now, I have a single node setup on which I am qualifying my use
cases, but eventually I will have a controller node, a network node and
several compute nodes. In that case, do you mean it should be something
like this?
Controller: post-start of quantum-server.conf
Network: post-start of
Hi,
Thanks for your reply.
The dnsmasq is running properly.
When I tried to run:
# iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
it says:
Bad argument `67:68'
Do I have to do this iptables configuration in
The network node probably won't be running quantum-server, just one
of the agents, so you put the command in one of those configs, not
quantum-server.
That is what I'm doing currently and it is working for me.
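As a sketch, such an agent-side stanza could look like this (the job file name assumes the Ubuntu packaging of the OVS agent):

```
# /etc/init/quantum-plugin-openvswitch-agent.conf
pre-start script
  /usr/bin/quantum-ovs-cleanup
end script
```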
at some point if you have running VMs with
In the docs, we describe how to configure KVM block-based live migration,
and it has the advantage of avoiding the need for shared storage of
instances.
However, there's this email from Daniel Berrangé from back in Aug 2012:
http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
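For context, block-based live migration is driven from the nova CLI with the --block-migrate flag (a sketch; the instance and host names are placeholders):

```
# Copy the disk over the network instead of relying on shared storage:
nova live-migration --block-migrate <instance-uuid> <target-compute-host>
```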
Ok thanks, this helps a lot. But isn't this being done to avoid those
disruptions/issues with networking after a restart? Do you mean that
doing this will result in disruptions after a restart?
Regards,
Balu
On Wed, Apr 24, 2013 at 9:12 PM, Steve Heistand steve.heist...@nasa.gov wrote:
Arindam,
Oops, I had a typo. The command should have been: iptables -I INPUT -i
tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT
You need the iptables configuration on the system where dnsmasq is
running. It shouldn't be necessary on the compute nodes that are being
booted.
Jay S.
On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
In the docs, we describe how to configure KVM block-based live migration,
and it has the advantage of avoiding the need for shared storage of
instances.
However, there's this email from Daniel Berrangé from back in Aug 2012:
Hi,
I am having a problem with the nova-network service.
Though
[(keystone_user)]$ nova list
+--+++---+
| ID | Name | Status | Networks
|
Hi,
So I added that rule:
iptables -I INPUT -i tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT
but still the same problem.
There is another thing:
# nova-manage service list
Binary    Host    Zone    Status    State    Updated_At
nova-network
Thanks for the clarification Daniel
On 24 Apr 2013, at 17:59, "Daniel P. Berrange" d...@berrange.com wrote:
On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
In the docs, we describe how to configure KVM
Hi!
The `Ultimate OpenStack Grizzly Guide' has been updated a bit more!
There are two new scripts, keystone_basic.sh and
keystone_endpoints_basic.sh, with preliminary support for Swift and
Ceilometer.
Check it out! https://gist.github.com/tmartinx/d36536b7b62a48f859c2
Best!
Thiago
On 20 March
Can you provide the output of 'ifconfig' on the hosting node? Also 'ps
aux | grep dnsmasq'.
Jay S. Bryant
Linux Developer -
OpenStack Enterprise Edition
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE
Hi,
Output from the controller node:
root@aopcach:~# ifconfig
eth0 Link encap:Ethernet HWaddr 1c:c1:de:65:6f:ee
inet addr:158.109.65.21 Bcast:158.109.79.255 Mask:255.255.240.0
inet6 addr: fe80::1ec1:deff:fe65:6fee/64 Scope:Link
UP BROADCAST RUNNING
Can you show us a quantum subnet-show for the subnet your VM has an IP on?
Is it possible that you added a host_route to the subnet for 169.254/16?
Or could you try this image:
http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
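That check amounts to (the subnet ID is a placeholder):

```
# Show only the host_routes field of the subnet:
quantum subnet-show <subnet-id> -F host_routes
```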
On Wed, Apr 24, 2013 at 1:06
Hi Balu!
Listen, is your metadata service up and running?!
If yes, which guide did you use?
I'm trying everything I can to enable metadata without L3 with a Quantum
Single Flat topology for my own guide:
https://gist.github.com/tmartinx/d36536b7b62a48f859c2
I really appreciate any feedback!
Tks!
On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan riki.ars...@cloudturk.net wrote:
Hi,
We are trying to install ceilometer-2013.1~g2.tar.gz which presumably
has Folsom compatibility.
The requirement is python-keystoneclient>=0.2,<0.3 and we have
version 0.2.3.
But, still, setup quits with the
Hi Doug,
Thank you for the reply. I have previously installed Ceilometer version
0.1. Do you think that could be the reason?
Thanks.
On Wed, Apr 24, 2013 at 11:49 PM, Doug Hellmann doug.hellm...@dreamhost.com
wrote:
On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan
Community,
I am trying to install Keystone Grizzly following these instructions:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/install-keystone.html
When I try to start the service (before db sync), I get the following error
message: Starting keystone... startproc: exit status
What happens when you run keystone-all directly?
-Dolph
On Wed, Apr 24, 2013 at 4:23 PM, Viktor Viking
viktor.viking...@gmail.com wrote:
Community,
I am trying to install Keystone Grizzly following these instructions:
Hi Doug,
Your email helped me. It was actually glanceclient version 0.5.1 that was
causing the conflict. After updating it, the conflict error disappeared.
I hope this helps someone else too.
Thanks again.
On Wed, Apr 24, 2013 at 11:49 PM, Doug Hellmann doug.hellm...@dreamhost.com
On Wed, Apr 24, 2013 at 11:59 AM, Daniel P. Berrange d...@berrange.com wrote:
On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
In the docs, we describe how to configure KVM block-based live migration,
and it has the advantage of avoiding the need for shared storage of
Hi Dolph,
Now I got an exception. It seems like I am missing repoze.lru. I will
download and install it. I will let you know if it works.
Thank you,
Viktor
On Wed, Apr 24, 2013 at 11:25 PM, Dolph Mathews dolph.math...@gmail.comwrote:
What happens when you run keystone-all directly?
Hi Wangpan,
While I am able to inject files into Windows XP, CentOS 5.9 and
Ubuntu 12.04, I am unable to do it for a Windows 8 Enterprise OS. I did
search the entire drive for the file I injected but couldn't find it.
Below is the log from nova-compute.log.
2013-04-24 01:41:27.973 AUDIT
Have you opened and checked the 'system reserved partition'? See the reference
below:
http://www.techfeb.com/how-to-open-windows-7-hidden-system-reserved-partition/
2013-04-25
Wangpan
From: Balamurugan V G
Sent: 2013-04-25 12:34
Subject: Re: [Openstack] [OpenStack] Files Injection in to Windows VMs
Hi,
Following the positive feedback after the 1st OpenStack User Group Nordics
(OSUGN) in Stockholm
(http://www.meetup.com/OpenStack-User-Group-Nordics/events/95258382/),
we thought it's time to schedule the next meetup!
This is a call for speakers for the 2nd OSUGN meetup in
Hi Aaron,
I tried the image you pointed to and it worked fine out of the box. That is, it
did not get the route to 169.254.0.0/16 on boot and I am able to retrieve
info from the metadata service. The image I was using earlier is an Ubuntu 12.04
LTS desktop image. What do you think could be wrong with my
I'm not sure, but if it works fine with the Ubuntu cloud image and not with
your Ubuntu image, then there is something in your image adding that route.
On Wed, Apr 24, 2013 at 10:06 PM, Balamurugan V G
balamuruga...@gmail.com wrote:
Hi Aaron,
I tried the image you pointed and it worked fine out
at 20130424-0604
Build needed 00:01:42, 12072k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild
at 20130424
Title: precise_havana_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/32/
Project: precise_havana_keystone_trunk
Date of build: Wed, 24 Apr 2013 13:01:38 -0400
Build duration: 2 min 30 sec
Build cause: Started by an SCM change
Built
Title: precise_havana_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/33/
Project: precise_havana_keystone_trunk
Date of build: Wed, 24 Apr 2013 13:31:37 -0400
Build duration: 2 min 17 sec
Build cause: Started by an SCM change
Built
Title: precise_havana_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/34/
Project: precise_havana_keystone_trunk
Date of build: Wed, 24 Apr 2013 15:31:36 -0400
Build duration: 2 min 27 sec
Build cause: Started by an SCM change
Built
Title: precise_havana_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/55/
Project: precise_havana_quantum_trunk
Date of build: Wed, 24 Apr 2013 22:31:36 -0400
Build duration: 2 min 3 sec
Build cause: Started by an SCM change
Built
Title: precise_havana_quantum_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/56/
Project: precise_havana_quantum_trunk
Date of build: Wed, 24 Apr 2013 23:31:36 -0400
Build duration: 1 min 54 sec
Build cause: Started by an SCM change
Built
See http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/3/
Title: precise_havana_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/28/
Project: precise_havana_cinder_trunk
Date of build: Thu, 25 Apr 2013 01:31:35 -0400
Build duration: 1 min 5 sec
Build cause: Started by an SCM change
Built