Hi,
For Keystone 2.0 auth,
the request should provide a JSON body which includes the username, tenant,
and password.
In your curl test, you provide two headers to auth 2.0.
Please have a look at the official documentation to get the right API call.
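To make the expected payload concrete, here is a minimal sketch (endpoint and credentials are placeholders) of the JSON body Keystone v2.0 expects for a token request:

```python
import json

# Build the JSON body for POST /v2.0/tokens. The username, tenant,
# and password values here are placeholders.
def build_auth_body(username, tenant, password):
    return json.dumps({
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
        }
    })

body = build_auth_body("demo", "demo-tenant", "secret")
# A curl call would send this body with a single JSON header, e.g.:
#   curl -X POST http://keystone:5000/v2.0/tokens \
#        -H "Content-Type: application/json" -d "$body"
print(body)
```

The point is that the credentials travel in the JSON body of a single POST, not in extra auth headers.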
2012/11/21 Shashank Sahni shredde...@gmail.com
Hi,
Hi everyone:
I'd like to know your opinion as nova experts:
Would you recommend CephFS as shared storage in /var/lib/nova/instances?
Another option would be to use GlusterFS or MooseFS for the
/var/lib/nova/instances directory and Ceph RBD for Glance and Nova volumes,
don't you think?
Thanks for
I had the same issue at first, but Vish is right: once you start spawning an instance, everything should be brought up.
Regards,
Razique
Nuage Co - Razique Mahroua
razique.mahr...@gmail.com
On 20 Nov. 2012 at 23:54, Vishvananda Ishaya vishvana...@gmail.com wrote: The vlans and bridges are not
AFAIR it was also the case with Essex.
Cheers!
On Wed, Nov 21, 2012 at 9:46 AM, Razique Mahroua
razique.mahr...@gmail.comwrote:
I had the same issue at first, but Vish is right, once you start spawning
an instance, everything should be brought up
Regards,
Razique
Nuage Co - Razique
Hey Edwards, that is a concern many raise. Today, there is no definitive answer or implementation that would allow you to set up such a thing. One approach is to work with stateless instances, using an orchestration tool such as Puppet or Chef, which would take care by itself of the spawning and
Not being a nova expert, and not using Ceph with nova, I can tell you
that I've been testing Ceph extensively and it seems to be like 'the
thing made for nova'. It seems stable, it is reasonably fast, and the
features are unbeaten. But I wouldn't use it for hosting qemu images in
Even Diablo and Cactus before that I might say Seb :)
Nuage Co - Razique Mahroua
razique.mahr...@gmail.com
On 21 Nov. 2012 at 09:56, Sébastien Han han.sebast...@gmail.com wrote: AFAIR it was also the case with Essex. Cheers!
On Wed, Nov 21, 2012 at 9:46 AM, Razique Mahroua
Hi,
I don't think this is the best place to ask your question, since it's not
directly related to OpenStack but more about Ceph. I just put
the Ceph ML in CC. Anyway, CephFS is not ready yet for production, but I
heard that some people use it. People from Inktank (the company behind
Ceph) don't
Hi,
For the cloud controller, use 2 machines with a pacemaker setup with those
resource agents. Simple as that.
We have 2 branches, one for Essex and one for Folsom.
https://github.com/madkiss/openstack-resource-agents
Cheers!
On Wed, Nov 21, 2012 at 9:59 AM, Razique Mahroua
Ah ok, I barely started with Diablo and quickly moved to Essex so I didn't
know ;-). Thanks for the input :)
Cheers!
On Wed, Nov 21, 2012 at 10:04 AM, Razique Mahroua razique.mahr...@gmail.com
wrote:
Even Diablo and Cactus before that I might say Seb :)
Nuage Co - Razique Mahroua
At least, you can get bytes-used and count statistics by using a
GET operation on the account/container/object.
Here's an example at the account level:
Request: GET
http://localhost:8080/v1.0/AUTH_3cf0193e7e5d45e0945d0b377528faed/?format=json
Response:
Headers:
X-Account-Bytes-Used:
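As a sketch of how those response headers can be consumed: the header names below are the standard Swift account-level ones, but the values are made up for illustration.

```python
# Pull usage statistics out of the headers returned by an
# account-level GET (or HEAD) on a Swift account.
def account_stats(headers):
    return {
        "bytes_used": int(headers["X-Account-Bytes-Used"]),
        "container_count": int(headers["X-Account-Container-Count"]),
        "object_count": int(headers["X-Account-Object-Count"]),
    }

# Example header values (illustrative only).
sample = {
    "X-Account-Bytes-Used": "10240",
    "X-Account-Container-Count": "2",
    "X-Account-Object-Count": "7",
}
print(account_stats(sample))
```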
The OpenStack Technical Committee (TC) met in #openstack-meeting at
20:00 UTC yesterday.
Here is a quick summary of the outcome of this meeting:
* The TC agreed on a general vision on the question of the future of
Incubation and Core within the new OpenStack governance, based on a
separation
Ok guys!
Finally I will deploy MooseFS for nova instances.
I'll stay tuned for new Ceph releases.
Thanks!
JuanFra.
2012/11/21 Sébastien Han han.sebast...@gmail.com
Hi,
I don't think it's the best place to ask your question since it's not
directly related to OpenStack but more about Ceph.
Let me know if you intend to run some benches and the results you obtain :)
Cheers
Nuage Co - Razique Mahroua
razique.mahr...@gmail.com
On 21 Nov. 2012 at 12:21, JuanFra Rodríguez Cardoso juanfra.rodriguez.card...@gmail.com wrote: Ok guys! Finally I will deploy MooseFS for nova instances. I'll
Ok, no problem!
As soon as I have results, I will share them with the community.
Thanks.
2012/11/21 Razique Mahroua razique.mahr...@gmail.com
Let me know if you intend to run some benches and the results you obtain :)
cheers
Nuage Co - Razique Mahroua
razique.mahr...@gmail.com
Le 21
Hi-
With respect to folsom release,
What does this sqlalchemy- and rpc-based error denote?
File /usr/lib/python2.7/dist-packages/quantum/db/db_base_plugin_v2.py,
line 90, in _model_query
query = context.session.query(model)
File
Thank you again for your help, much appreciated. I'll run the needed nova-*
services on nova-compute and give it a try.
Regards,
Ahmed.
From: Vishvananda Ishaya vishvana...@gmail.com
Date: Tuesday, November 20, 2012 8:59 PM
To: Ahmed Al-Mehdi
Hi,
Can you give more of the traceback?
The quantum context has a session attribute. It seems this context is not
a quantum context.
On 11/21/2012 07:54 PM, Trinath Somanchi wrote:
Hi-
With respect to folsom release,
What does this sqlalchemy- and rpc-based error denote?
File
On 11/21/2012 04:15 AM, Ahmed Al-Mehdi wrote:
Hello,
I am getting a RPC message timeout in nova-network.
2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] Making
asynchronous call on network.sonoma ... from (pid=1375) multicall
Hello,
Is there any way we can disable security groups in nova, as I would be using
an external firewall to do that?
--
With Regards,
Ritesh Nanda
http://www.ericsson.com/
___
Mailing list: https://launchpad.net/~openstack
Post to :
Your firewall driver in your nova.conf seems to be incorrect:
2012-11-21 01:33:47 TRACE nova self.firewall_driver =
fw_class(xenapi_session=self._session)
2012-11-21 01:33:47 TRACE nova File
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py, line
227, in __init__
It's trying to
Hi Ritesh,
You will need to have enabled some rules, even if you provide rules that
give carte blanche access to your instances. This is courtesy of the
'default' security group, which by design prevents any access and, by
design, is the default if you don't specify any security groups when
launching.
I've never used it, but I believe you can just set the firewall_driver
config var to nova.virt.firewall.NoopFirewallDriver,
e.g. in nova.conf add:
--firewall_driver=nova.virt.firewall.NoopFirewallDriver
Thanks,
Kiall
On Wed, Nov 21, 2012 at 2:14 PM, Kevin Jackson
+1, you can't use the libvirt-specific firewall driver with XenAPI.
There are some example nova.conf files here that may help:
http://docs.openstack.org/folsom/openstack-compute/admin/content/xenapi-flat-dhcp-networking.html
If you use DevStack, it should have “chosen” the correct firewall driver
Hi
Could someone please explain how to get traffic flowing correctly with
quantum? We are losing traffic from the quantum-server host back to the guest
network. Guest pings work towards the host, but the reply doesn't get sent.
Guests can also send traffic out of the cloud.
This page tells how to set up
On Tue, Nov 20, 2012 at 03:03:37PM -0500, Lars Kellogg-Stedman wrote:
automatically assigned ip address for several minutes (possibly more
than 10 or 15) after the system boots.
In fact, 30 minutes. I spent some time staring at the clock
yesterday.
I'm assuming that the calls to
Hi,
I've found auto_assign_floating_ip very useful, especially from the
dashboard. But in some cases, and from the command line, I don't want to
have an IP automatically associated with an instance.
So I'm looking for a way to disable auto_assign_floating_ip from the nova
boot command. Is it
@Kevin I am using the nova VLAN manager; adding a rule for every VLAN would
then be one more task to do.
This is the first scenario.
In my case I am using nova-network with the VLAN manager, so I would like to
use my own router instead of the bridge that OpenStack creates; I have even
implemented inter-vlan
This filter was created to allow you to access OpenStack without the need
to provide either a username/password or any token. Under the covers, this
filter simply authenticates with keystone, gets the auth token, and sticks
it back into the request. If your work is mostly dealing with the API (not using
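Conceptually, the filter's job can be sketched like this (get_token below is a stand-in for the real Keystone call, and the header-dict shape is illustrative, not the filter's actual interface):

```python
# Sketch: authenticate once, then inject the token into each
# outgoing request's headers.
def get_token():
    # Placeholder; a real filter would POST credentials to Keystone
    # and extract the token from the response.
    return "example-token"

def inject_auth(headers):
    headers = dict(headers)          # don't mutate the caller's dict
    headers["X-Auth-Token"] = get_token()
    return headers

print(inject_auth({"Accept": "application/json"}))
```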
JuanFra,
I do use cephfs in production, but not for the /var/lib/instances directory. I
do host the openstack database and the openstack configuration files on it for
an HA cloud controller cluster, but I am probably crazier than most people, and
I have a very small deployment. I currently
Hi Anne and all.
Anne, thanks for your reply and suggestions.
I made some more investigation on this subject, found more information
(maybe too many pages, wikies, blogs, emails, etc...) and these are my
findings and comments:
0) This is the page where I found most of the states (
compute.api.associate_floating_ip. Do automatically assigned
addresses follow the same process as manually assigned ones?
The answer is NO!
- compute.manager._allocate_network calls:
network_info = self.network_api.allocate_for_instance(
context,
Hey all,
I am having a rather serious issue with the central (OpenStack Essex)
nova-network gateway we have set up.
We have quite a few floating IPs assigned to a few virtual machines, and it
just works.
But since a few days (or weeks) ago, I have noticed that some VMs do not get
inbound traffic from
external
On Nov 21, 2012, at 7:40 AM, Lars Kellogg-Stedman l...@seas.harvard.edu wrote:
compute.api.associate_floating_ip. Do automatically assigned
addresses follow the same process as manually assigned ones?
The answer is NO!
- compute.manager._allocate_network calls:
There is currently no way to do this. If auto_assign is true, it will be true
for all vms.
Vish
On Nov 21, 2012, at 6:59 AM, Olivier Archer olivier.arc...@ifremer.fr wrote:
Hi,
I've found the auto_assign_floating_ip very useful, especially from the
dashboard. But in some cases and
On Nov 21, 2012, at 8:10 AM, Christian Parpart tra...@gmail.com wrote:
Hey all,
I am having a rather serious issue with the central (OpenStack Essex) nova-network
gateway we have set up.
We have quite a few floating IPs assigned to a few virtual machines, and it
just works.
But since a few
Hello,
I have to shut down some running instances from worker nodes, as the
physical machine suffers disk errors.
After I remove them from the worker nodes, nova still thinks the instances
are in the shutoff state, although they
have already completely disappeared from the worker nodes. I tried nova
delete
You will have to manually clean them up from the database. Folsom can actually
handle this delete path, but Essex cannot.
Vish
On Nov 21, 2012, at 11:32 AM, Xin Zhao xz...@bnl.gov wrote:
Hello,
I have to shut down some running instances from worker nodes, as the physical
machine suffer
Hi all
I am following this tutorial
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
When I start the instance, it starts well but it does not get the IP;
however, I can see the IP assigned in Horizon. I am seeing this on
That's, I think, a clever approach: to set a data cluster as a backend for the configuration files, which are de facto as important as the instances themselves.
Regarding the performance, it should not be a problem, as the only data that gets frequently updated is the database.
Regards,
Razique
As far as I'm concerned, I will never put config files on shared storage
(especially on a non-production-ready one); these are too critical. I will only
do it if the application specifically requires it, like shared web
applications that need auto vhost sync (or stuff like that).
If you want to keep them
For critical services (i.e. database, message queue, conf files), I'd
rather use HA architectures like these examples:
http://www.mirantis.com/blog/intro-to-openstack-in-production/
JuanFra.
2012/11/21 Sébastien Han han.sebast...@gmail.com
As far I'm concerned, I will never put config files
Hi All,
I am trying to get trusted compute pools working in my installation of
OpenStack Folsom, but so far I am unable to get it to work. Currently, when I spawn a
new instance, I don't see any interaction with the attestation server, and the
instance spawns just fine on an untrusted host. I
Hi,
I was wondering if someone could have a look at
https://bugs.launchpad.net/nova/+bug/1055069
It's been marked as low, but in my opinion it should be critical; we can't
launch large instances because of it.
Cheers,
Sam
On 11/20/2012 01:50 PM, Mark McLoughlin wrote:
Hey,
We're hoping to publish Nova, Glance, Keystone, Quantum, Cinder and
Horizon 2012.2.1 next week (Nov 29).
The list of issues fixed so far can be seen here:
https://launchpad.net/nova/+milestone/2012.2.1
Hello,
I have a question about setting up Quantum, following the steps described by
Bilel Msekni (
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
), which uses 3 NICs.
(Similar document/setup is also described by
On 11/22/2012 08:23 AM, Ahmed Al-Mehdi wrote:
Hello,
I have a question about setting up Quantum, following the steps
described by Bilel Msekni (
https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
), which uses 3 NICs.
Using the Python API, what's the best way of getting a list of floating
IPs assigned to an instance? The Server.addresses dictionary contains
*both* fixed and floating ips, and doesn't appear to differentiate
between them. E.g:
srvr = client.servers.find(name='myinstance')
print
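Not an authoritative answer, but one hedged approach is to cross-check against the floating IP list rather than parsing Server.addresses. The FloatingIP class below is a stand-in for novaclient's object, which in the versions I've seen exposes ip and instance_id attributes; verify against your client version.

```python
# Stand-in for novaclient's FloatingIP resource.
class FloatingIP:
    def __init__(self, ip, instance_id):
        self.ip = ip
        self.instance_id = instance_id

def floating_ips_for(server_id, floating_ips):
    """Return the floating addresses assigned to one server."""
    return [f.ip for f in floating_ips if f.instance_id == server_id]

# With the real client you would pass client.floating_ips.list() here.
pool = [FloatingIP("10.0.0.5", "abc"), FloatingIP("10.0.0.9", "xyz")]
print(floating_ips_for("abc", pool))
```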
On Wed, Nov 21, 2012 at 09:12:36AM -0800, Vishvananda Ishaya wrote:
This appears to be essex.
That's correct.
be called on the network_api side before returning from
allocate_for_instance.
I agree.
If you look at folsom, you'll see there is a
decorator for this purpose called
I think you can just set 'deleted' to 1 in the instances table, and clean the
associations in the fixed_ips and floating_ips tables.
- Ray
Yours faithfully, Kind regards.
CIeNET Technologies (Beijing) Co., Ltd
Email: qsun01...@cienet.com.cn
Office Phone: +86-01081470088-7079
Mobile Phone: +86-13581988291
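As a sketch of the cleanup Ray describes, run here against a throwaway SQLite copy of loosely nova-like tables. The column names are simplified stand-ins, not the exact nova schema; check your real schema (and back up the database) before running anything like this in production.

```python
import sqlite3

# Throwaway in-memory tables that mimic the relevant nova tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (uuid TEXT, deleted INTEGER DEFAULT 0);
    CREATE TABLE fixed_ips (address TEXT, instance_id TEXT);
    INSERT INTO instances VALUES ('dead-vm', 0);
    INSERT INTO fixed_ips VALUES ('10.0.0.3', 'dead-vm');
""")

# Mark the vanished instance as deleted and clear its fixed_ip association.
conn.execute("UPDATE instances SET deleted = 1 WHERE uuid = 'dead-vm'")
conn.execute("UPDATE fixed_ips SET instance_id = NULL "
             "WHERE instance_id = 'dead-vm'")

print(conn.execute(
    "SELECT deleted FROM instances WHERE uuid = 'dead-vm'").fetchone()[0])
```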
Hi,
I'm following the steps mentioned in the official object storage
documentation.
http://docs.openstack.org/folsom/openstack-object-storage/admin/content/verify-swift-installation.html
I followed the steps as they are, and all the services are up and running with
no traces of any error in
Hi-
Thanks for the reply.
Which module populates this session info into the context?
Can you guide me in this regard?
On Wed, Nov 21, 2012 at 6:37 PM, gong yong sheng gong...@linux.vnet.ibm.com
wrote:
Hi,
can u give out more traceback?
quantum context has a session attribute. It seems
2012/11/22 Lars Kellogg-Stedman l...@seas.harvard.edu:
Any chance we can get it fixed in Essex, too? Or has this release
been abandoned? I'm not clear on what the maintenance schedule looks
like as the steamroller of progress moves forward.
Current stable branch policy is documented in
Title: precise_folsom_nova_stable
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_nova_stable/658/
Date of build: Wed, 21 Nov 2012 04:01:23 -0500. Build duration: 2 min 31 sec. Build cause: Started by an SCM change.
Title: raring_grizzly_python-keystoneclient_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_python-keystoneclient_trunk/18/
Date of build: Wed, 21 Nov 2012 05:24:56 -0500. Build duration: 2 min 55.
Title: precise_folsom_nova_stable
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_nova_stable/659/
Date of build: Wed, 21 Nov 2012 09:01:23 -0500. Build duration: 2 min 51 sec. Build cause: Started by an SCM change.
Title: precise_grizzly_nova_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/156/
Date of build: Wed, 21 Nov 2012 10:31:25 -0500. Build duration: 5 min 14 sec. Build cause: Started by an SCM change.
Title: raring_grizzly_keystone_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_keystone_trunk/36/
Date of build: Wed, 21 Nov 2012 11:10:23 -0500. Build duration: 3 min 44 sec. Build cause: Started by user James Page.
Title: precise_grizzly_python-cinderclient_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-cinderclient_trunk/10/
Date of build: Wed, 21 Nov 2012 11:27:35 -0500. Build duration: 4 min 7 sec.
Title: precise_grizzly_python-keystoneclient_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-keystoneclient_trunk/20/
Date of build: Wed, 21 Nov 2012 11:29:55 -0500. Build duration: 2 min 25.
Title: precise_grizzly_python-quantumclient_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-quantumclient_trunk/8/
Date of build: Wed, 21 Nov 2012 11:31:43 -0500. Build duration: 2 min 25 sec.
Title: precise_grizzly_nova_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/157/
Date of build: Wed, 21 Nov 2012 11:26:34 -0500. Build duration: 9 min 2 sec. Build cause: Started by user James Page.
Title: precise_grizzly_quantum_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/63/
Date of build: Wed, 21 Nov 2012 11:32:21 -0500. Build duration: 8 min 17 sec. Build cause: Started by user James Page.
Title: precise_grizzly_swift_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_swift_trunk/35/
Date of build: Wed, 21 Nov 2012 11:45:33 -0500. Build duration: 2 min 58 sec. Build cause: Started by user James Page.
Title: raring_grizzly_python-glanceclient_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_python-glanceclient_trunk/11/
Date of build: Wed, 21 Nov 2012 13:10:45 -0500. Build duration: 1 min 53 sec.
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/3/
--
Started by timer
Building on master in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[workspace] $ /bin/bash -xe /tmp/hudson5121655004598995735.sh
+
Title: precise_grizzly_horizon_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_horizon_trunk/18/
Date of build: Wed, 21 Nov 2012 15:31:22 -0500. Build duration: 3 min 40 sec. Build cause: Started by an SCM change.
Title: raring_grizzly_horizon_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_horizon_trunk/18/
Date of build: Wed, 21 Nov 2012 15:31:22 -0500. Build duration: 4 min 23 sec. Build cause: Started by an SCM change.
Title: raring_grizzly_python-glanceclient_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_python-glanceclient_trunk/12/
Date of build: Wed, 21 Nov 2012 16:01:21 -0500. Build duration: 2 min 29 sec.
Title: precise_folsom_deploy
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_deploy/361/
Date of build: Wed, 21 Nov 2012 22:02:49 -0500. Build duration: 14 min. Build cause: Started by user Adam Gandelman. Built on: master.