On Thu, Nov 01 2012, Julien Danjou wrote:
On Thu, Nov 01 2012, Zehnder Toni (zehndton) wrote:
My goal is to offer monitoring data to the admin and customers. The
admin is interested in the utilization of the physical components and
the virtual machines, and the customer is interested in knowing
Hi Johanna,
Using security groups you can enable ping and SSH to your VM.
http://docs.openstack.org/trunk/openstack-compute/admin/content/enabling-ping-and-ssh-on-vms.html
Regards,
Veera.
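For reference, a minimal sketch of the rules Veera describes, using the nova client against the "default" security group (assumes your credentials are already sourced; adjust the CIDR to taste):

```shell
# Allow ICMP (ping) from anywhere to instances in the "default" group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
# Allow SSH (TCP port 22) from anywhere
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```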
On Mon, Nov 5, 2012 at 2:24 PM, Heinonen, Johanna (NSN - FI/Espoo)
johanna.heino...@nsn.com wrote:
Hi
Hi Veera,
I forgot to mention that I have already configured the security groups
for both SSH and ICMP, but this did not help.
Regards,
Johanna
From: ext Veera Reddy [mailto:veerare...@gmail.com]
Sent: Monday, November 05, 2012 11:02 AM
To: Heinonen, Johanna (NSN - FI/Espoo)
Cc: ext
Hi Ray,
Have you tried uploading a generalized image with sysprep to Glance? If you put
the Product Key in an unattend file, the activation process will be done during
setup when the generalized image is deployed to a concrete virtual machine.
The main drawback is that a generalized image
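For illustration only, a minimal unattend fragment carrying a Product Key might look like the following; the key value is a placeholder and the exact component set depends on your Windows version:

```xml
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <!-- Placeholder key: replace with a valid Product Key -->
      <ProductKey>XXXXX-XXXXX-XXXXX-XXXXX-XXXXX</ProductKey>
    </component>
  </settings>
</unattend>
```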
Hello Stackers!
I am seeing a weird error in my l3_agent.log file:
Stderr: ''
2012-11-05 10:22:59 ERROR [quantum.agent.l3_agent] Error running
l3_nat daemon_loop
Traceback (most recent call last):
File
Hi,
On Mon, 2012-11-05 at 10:52 +0100, Skible OpenStack wrote:
Hello Stackers!
I am seeing a weird error in my l3_agent.log file:
Stdout: 'Unauthorized command: /sbin/iptables-save -t filter\n'
Your sudoers config doesn't allow this command - you'll want to fix
that.
Cheers,
--
Stephen
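As a sketch of the kind of fix Stephen means, the quantum service user needs a sudoers entry that lets it run the rootwrap wrapper (paths follow the Folsom layout and may differ per distribution):

```
# /etc/sudoers.d/quantum -- let the quantum user escalate via rootwrap only
quantum ALL=(root) NOPASSWD: /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf *
```

With that in place, commands such as iptables-save are authorized through the rootwrap filters rather than raw sudo.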
I think we already fixed this bug.
Please see if it helps:
https://review.openstack.org/#/c/14756/
On 11/05/2012 05:52 PM, Skible OpenStack wrote:
Hello Stackers!
I am seeing a weird error in my l3_agent.log file:
Stderr: ''
2012-11-05 10:22:59 ERROR
Skible,
Looks to me like a reported bug.
https://bugs.launchpad.net/quantum/+bug/1069966
From: openstack-bounces+atul.jha=csscorp@lists.launchpad.net
[openstack-bounces+atul.jha=csscorp@lists.launchpad.net] on behalf of
Skible OpenStack
Hi,
The bug has been fixed upstream and is merged into the stable folsom
branch. Please note that this may not have been packaged by the various
linux distributions.
If you need to fix this locally then please look at
Thanks Gary, Atul, gong and Stephan.
I am using Ubuntu 12.10 and I think it is not yet packaged!
I fixed it manually and everything is working now!
Thanks!
On 05/11/2012 11:27, Gary Kotton wrote:
Hi,
The bug has been fixed upstream and is merged into the stable folsom
branch. Please
On Fri, Nov 2, 2012 at 4:42 PM, Dan Dyer dan.dye...@gmail.com wrote:
Yes, I am assuming the service controller provides a different stream of
data from the lower level VM events. So the question is how to represent
and store this additional meta data in ceilometer. Note that there doesn't
On Fri, Nov 2, 2012 at 3:07 AM, Patrick Petit
patrick.michel.pe...@gmail.com wrote:
Folks,
I'd like to add to this that physical server metering shouldn't be treated
differently in Ceilometer now that the bare metal provisioning framework is
entering Grizzly. Physical servers will just become
Hi,
My Network Configuration in nova.conf
libvirt_vif_type=ethernet
linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
libvirt_use_virtio_for_bridges=True
Regards,
Veera.
On Mon, Nov 5, 2012 at 5:12 PM, Gary Kotton
On Mon, Nov 05 2012, Doug Hellmann wrote:
If we make the current compute agent take an option telling it which
pollster namespace to use, then the same framework can load different
pollsters. However, there is a fundamental security issue with
communicating from an agent running inside a
Hi Balaji,
I am not sure I understand your questions. I think that by clients you are
referring to python-novaclient and/or python-quantumclient.
If that is correct, those are merely applications that provide users with
tools for accessing the respective endpoints. These applications are usually
not
Hello.
Something has been keeping my nova modules from running, and by looking at
the logs, I've noticed that the reason is that the modules can't reach the
rabbitmq server:
2012-11-05 10:25:44 INFO nova.openstack.common.rpc.common [-]
Reconnecting to AMQP server on my_ip:5672
2012-11-05
Thanks Salvatore.
It gave me good understanding of these python-*clients.
On Mon, Nov 5, 2012 at 6:46 PM, Salvatore Orlando sorla...@nicira.comwrote:
Hi Balaji,
I am not sure I understand your questions. I think that by clients
you are referring to python-novaclient and/or
Hi all,
I am using Folsom on my set up. I followed steps from this :
https://github.com/Amseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
I am able to login in Dashboard and also to launch VMs.
But the VMs are not getting IPs although
Using nova reboot --hard uuid has a better chance of working than nova
start. This should bring back everything but volumes. There are a couple of
bugfixes being backported to stable folsom. When those are in, it should
reconnect everything for you.
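Concretely, the suggested recovery is a single nova client call (the uuid comes from `nova list`):

```shell
# Hard-reboot the instance; unlike `nova start`, this rebuilds the
# instance's network and libvirt state on the compute node
nova reboot --hard <instance-uuid>
```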
On Nov 5, 2012 1:57 AM, gtt116 gtt...@126.com
This happens when your credentials are wrong. Make sure the rabbit_user and
rabbit_password match what is set in rabbit.
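The credentials Vish refers to live in nova.conf; a minimal fragment might look like the following (values are placeholders, and note that the nova option is spelled rabbit_userid):

```ini
[DEFAULT]
# Must match the user/password configured in the RabbitMQ server
rabbit_host = my_ip
rabbit_userid = guest
rabbit_password = guest
```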
On Nov 5, 2012 5:34 AM, Johannes Baltimore johannes.b...@gmail.com
wrote:
Hello.
Something has been keeping my nova modules from running, and by looking at
the logs, I've
Hi all -
Nothing like a Monday morning to get me to figure out what's going on!
Sorry for the lack of reports since the Summit. Here goes:
1. In review and merged:
We've merged in over 40 doc patches in the last two weeks, some highlights:
Monitoring information brought in for nova from Mirantis
On Mon, Nov 05 2012, Doug Hellmann wrote:
When an image is deployed to bare metal, there is no container, right?
Ah, I see the confusion. There are two kinds of bare metal, I think: the
ones run by the platform operator and the ones run to replace virtual
instances for any project.
I was actually
Yes, that makes sense. I was not thinking about multiple physical nics in
the provider network space.
I am trying to get a better understanding of how the vif plugins in the
br-int and the bridge providing external connectivity interact.
The quantum vif plug-in will do the work to configure the
Hello,
In my setup, I have two nodes, controller node (for running all services) and
one compute node (to host VMs). Both have two physical NICs, eth0 has an
assigned IP address for management of the host from outside, eth1 in
promiscuous mode for VM communication. Is the following
Hi Vinay,
I have sent the following email out a while ago,
---
In the following quantum command,
quantum net-create --tenant-id $TENANT_ID net1 --provider:network_type vlan
--provider:physical_network physnet1 --provider:segmentation_id 1024
provider:segmentation_id is actually a VLAN id
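For that VLAN to work end to end, the OVS plugin on each node also needs the physical network mapped to a bridge. A sketch of the Folsom-era OVS plugin settings, assuming the physnet1/br-eth1 names from the command above:

```ini
[OVS]
tenant_network_type = vlan
# Allow provider/tenant VLANs on physnet1 in this id range
network_vlan_ranges = physnet1:1000:2000
# Map the physical network to the bridge attached to the physical NIC
bridge_mappings = physnet1:br-eth1
```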
Thanks Dennis.
I don't have a switch in-between the two nodes so I don't have the default
native VLAN issue. The two nodes are connected back to back.
I managed to console into the VM's on the compute node and saw that they
don't have the IP address. The VM's on the controller node (also where
Hello,
I am following the steps outlined in OpenStack Install and Deploy – Ubuntu to
set up a two node configuration. I plan to use Cinder as the block storage
instead of nova-volume. I have a few questions regarding the sample nova.conf
file mentioned in the doc (
Hi guys,
can anyone tell me (with an example) how to use the extra_specs variable for
an instance_type?
Best Regards
Viktor
You can't run nova-volume and Cinder together, but Cinder uses all the same
settings, so you can use the same service entry. You just run cinder-api
instead of nova-api-os-volume (or disable osapi_volume if you are using
nova-api), run cinder-volume instead of nova-volume, and run cinder-scheduler.
Vish
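Vish's swap can be sketched as service changes on the controller; the service names below follow the Ubuntu packaging and may differ on other distributions:

```shell
# Stop the nova-volume pieces
service nova-volume stop
# (If using nova-api, also remove osapi_volume from enabled_apis in nova.conf)

# Start the Cinder equivalents
service cinder-api start
service cinder-volume start
service cinder-scheduler start
```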
On
Skible,
Followed your guide; everything went through fine until I started my
VM. The VM image I used is an Ubuntu cloud image
(http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img).
After boot up, the VM cannot access the metadata server. It complains about
no route
The simplest way would be to create key/value pairs for flavor types
(instance types).
This information would be associated in a separate table in nova db
(instance_type_extra_specs) and would go along with the instance type.
Once it is in the database, you can use this information to customize
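Concretely, the key/value pairs can be set with the nova client's flavor-key extension (assuming your novaclient version ships it; the key name below is just an example):

```shell
# Attach a key/value pair to flavor m1.small; it is stored in the
# instance_type_extra_specs table alongside the instance type
nova flavor-key m1.small set cpu_arch=x86_64
# Inspect the flavor to confirm the extra spec was recorded
nova flavor-show m1.small
```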
Shouldn't you be using nova APIs instead of the NOVA DB? What's your use case?
On Mon, Nov 5, 2012 at 10:46 PM, Trinath Somanchi
trinath.soman...@gmail.com wrote:
Hi-
While going through the SQLAlchemy code of Quantum, I was stuck on how to
access the NOVA DB tables from Quantum's SQLAlchemy.
The information in the Nova DB should be provided by a client API interface.
Quantum should use the client and invoke the API call.
From: openstack-bounces+zhongyue.nah=intel@lists.launchpad.net
[mailto:openstack-bounces+zhongyue.nah=intel@lists.launchpad.net] On Behalf
Of Debojyoti Dutta
For the nova client to provide the data, do we need the project id as a
mandatory parameter?
Please help me in this regard.
On Tue, Nov 6, 2012 at 1:03 PM, Nah, Zhongyue zhongyue@intel.comwrote:
The information in the Nova DB should be provided by a client API interface.
Quantum should use the
Reposting to cross post to the new openstack-qa list
-Sean
On 11/02/2012 04:34 PM, Sean Dague wrote:
Out of the nova live upgrade, and full gate in tempest sessions at
OpenStack Summit I think I've come up with the following blueprints that
we should be looking at over grizzly.
*
Title: precise_grizzly_quantum_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/9/ | Date of build: Mon, 05 Nov 2012 05:31:21 -0500 | Build duration: 1 min 40 sec | Build cause: Started by an SCM change
Title: precise_grizzly_quantum_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/10/ | Date of build: Mon, 05 Nov 2012 06:01:22 -0500 | Build duration: 1 min 37 sec | Build cause: Started by an SCM change
Title: raring_grizzly_quantum_trunk
BUILD SUCCESS | Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/9/ | Date of build: Mon, 05 Nov 2012 07:01:21 -0500 | Build duration: 6 min 42 sec | Build cause: Started by an SCM change
Title: precise_grizzly_quantum_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/12/ | Date of build: Mon, 05 Nov 2012 11:31:21 -0500 | Build duration: 1 min 28 sec | Build cause: Started by an SCM change
Built at 20121105-1140 | Build needed 00:10:15, 95876k disc space | ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'raring
Built at 20121105-1245 | Build needed 00:10:10, 95916k disc
Title: precise_grizzly_nova_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/18/ | Date of build: Mon, 05 Nov 2012 15:01:25 -0500 | Build duration: 3 min 21 sec | Build cause: Started by an SCM change
Title: precise_grizzly_nova_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/22/ | Date of build: Mon, 05 Nov 2012 19:31:25 -0500 | Build duration: 2 min 30 sec | Build cause: Started by an SCM change
Title: precise_grizzly_nova_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/23/ | Date of build: Mon, 05 Nov 2012 20:31:25 -0500 | Build duration: 2 min 44 sec | Build cause: Started by an SCM change
Title: precise_grizzly_python-novaclient_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-novaclient_trunk/4/ | Date of build: Mon, 05 Nov 2012 21:31:21 -0500 | Build duration: 3 min 2 sec
Title: precise_grizzly_nova_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/24/ | Date of build: Mon, 05 Nov 2012 21:34:24 -0500 | Build duration: 2 min 57 sec | Build cause: Started by an SCM change
Title: precise_grizzly_quantum_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/14/ | Date of build: Mon, 05 Nov 2012 22:01:22 -0500 | Build duration: 1 min 35 sec | Build cause: Started by an SCM change
Title: precise_grizzly_python-keystoneclient_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-keystoneclient_trunk/4/ | Date of build: Tue, 06 Nov 2012 01:31:21 -0500 | Build duration: 2 min 49 sec
Title: precise_grizzly_python-novaclient_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-novaclient_trunk/5/ | Date of build: Tue, 06 Nov 2012 02:01:21 -0500 | Build duration: 1 min 16 sec
Title: raring_grizzly_python-novaclient_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_python-novaclient_trunk/4/ | Date of build: Tue, 06 Nov 2012 02:01:21 -0500 | Build duration: 1 min 43 sec
Title: precise_grizzly_nova_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/25/ | Date of build: Tue, 06 Nov 2012 02:31:23 -0500 | Build duration: 2 min 31 sec | Build cause: Started by an SCM change
Title: raring_grizzly_nova_trunk
BUILD FAILURE | Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/27/ | Date of build: Tue, 06 Nov 2012 02:31:24 -0500 | Build duration: 14 min | Build cause: Started by an SCM change