[Openstack] nova-api start failed in multi_host compute nodes.

2012-04-03 Thread 한승진
Hi all

I am trying to start nova-api on my compute node so I can use the metadata service.

I haven't succeeded yet. I found this in the nova-api log:

2012-04-03 15:18:43,908 CRITICAL nova [-] Could not load paste app
'metadata' from /etc/nova/api-paste.ini
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE:   File "/usr/local/bin/nova-api", line 51, in <module>
(nova): TRACE:     servers.append(service.WSGIService(api))
(nova): TRACE:   File "/usr/local/lib/python2.7/dist-packages/nova/service.py", line 299, in __init__
(nova): TRACE:     self.app = self.loader.load_app(name)
(nova): TRACE:   File "/usr/local/lib/python2.7/dist-packages/nova/wsgi.py", line 414, in load_app
(nova): TRACE:     raise exception.PasteAppNotFound(name=name, path=self.config_path)
(nova): TRACE: PasteAppNotFound: Could not load paste app 'metadata' from /etc/nova/api-paste.ini
(nova): TRACE:
2012-04-03 15:20:43,786 ERROR nova.wsgi [-] No section 'metadata'
(prefixed by 'app' or 'application' or 'composite' or 'composit' or
'pipeline' or 'filter-app') found in config /etc/nova/api-paste.ini
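The loader is failing because no section named 'metadata' (as an app, pipeline, or composite) exists in this api-paste.ini. As a sketch only (an assumption to verify against the api-paste.ini shipped with your Nova version), a minimal section reusing the ec2metadata pipeline already defined later in this file could look like:

```ini
# Hypothetical addition so the 'metadata' name resolves; the ec2metadata
# pipeline below already routes to the MetadataRequestHandler app.
[composite:metadata]
use = egg:Paste#urlmap
/: ec2metadata
/latest: ec2metadata
```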

I added this flag to my nova.conf:

--enabled_apis=metadata

Here is my api-paste.ini

#######
# EC2 #
#######

[composite:ec2]
use = egg:Paste#urlmap
/: ec2versions
/services/Cloud: ec2cloud
/services/Admin: ec2admin
/latest: ec2metadata
/2007-01-19: ec2metadata
/2007-03-01: ec2metadata
/2007-08-29: ec2metadata
/2007-10-10: ec2metadata
/2007-12-15: ec2metadata
/2008-02-01: ec2metadata
/2008-09-01: ec2metadata
/2009-04-04: ec2metadata

[pipeline:ec2cloud]
pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
#pipeline = logrequest authenticate cloudrequest authorizer ec2executor

[pipeline:ec2admin]
pipeline = logrequest ec2noauth adminrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
#pipeline = logrequest authenticate adminrequest authorizer ec2executor

[pipeline:ec2metadata]
pipeline = logrequest ec2md

[pipeline:ec2versions]
pipeline = logrequest ec2ver

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:authenticate]
paste.filter_factory = nova.api.ec2:Authenticate.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:adminrequest]
controller = nova.api.ec2.admin.AdminController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

[app:ec2ver]
paste.app_factory = nova.api.ec2:Versions.factory

[app:ec2md]
paste.app_factory = nova.api.ec2.metadatarequesthandler:MetadataRequestHandler.factory

#############
# Openstack #
#############

[composite:osapi]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: osversions
/v1.1: openstackapi11

[pipeline:openstackapi11]
pipeline = faultwrap noauth ratelimit serialize extensions osapiapp11
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = faultwrap auth ratelimit serialize extensions osapiapp11

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:auth]
paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.limits:RateLimitingMiddleware.factory

[filter:serialize]
paste.filter_factory = nova.api.openstack.wsgi:LazySerializationMiddleware.factory

[filter:extensions]
paste.filter_factory = nova.api.openstack.extensions:ExtensionMiddleware.factory

[app:osapiapp11]
paste.app_factory = nova.api.openstack:APIRouter.factory

[pipeline:osversions]
pipeline = faultwrap osversionapp

[app:osversionapp]
paste.app_factory = nova.api.openstack.versions:Versions.factory


How can I solve this error?
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Reminder: OpenStack Project meeting - 21:00 UTC

2012-04-03 Thread Thierry Carrez
Hello everyone,

Our weekly project & release status meeting will take place at 21:00
UTC this Tuesday in #openstack-meeting on IRC. PTLs, if you can't make
it, please name a substitute on [2].

Two days before Essex final release, we will concentrate on the pending
RC2 publications for Horizon and Keystone, as well as discuss any
regression that could warrant a respin of Glance and Nova.

You can doublecheck what 21:00 UTC means for your timezone at [1]:
[1] http://www.timeanddate.com/worldclock/fixedtime.html?iso=20120327T21

See the meeting agenda, edit the wiki to add new topics for discussion:
[2] http://wiki.openstack.org/Meetings/ProjectMeeting

Cheers,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] Limit flavors to specific hosts

2012-04-03 Thread Day, Phil
Yes, it's more generic than hypervisor capabilities. My main problem with
Host Aggregates is that it limits the mapping to specific 1:1 groupings based
on hypervisor functionality.

Use cases I want to be able to cover include:

- Rolling new hardware through an existing cluster, and limiting some new
flavors (which might, for example, provide higher network bandwidth) to just
those servers.

- Providing a range of flavors that are dependent on specific hardware
features (GPU).

- There may be a further group that couples flavors and/or images to host
groups. For example, it's possible to imagine a scenario where an image is
only licensed to some specific subset of servers, or where a subset of nodes
are running LXC (in which case the image is in effect pre-defined). In this
case the image metadata could, for example, specify the flavors that it can
be used with, and those flavors are in turn limited to specific hosts. I
don't really like this model of linking Glance objects (images) to Nova
objects (flavors), but I'm not sure what an alternative would be.

On the config file vs. REST API for configuration debate (maybe this needs to
be a Design Summit subject in its own right), I agree that we should make a
distinction between items that are deploy-time configuration (which hypervisor
to use, network driver, etc.) and items that could change whilst the system is
running (rate limits is a good example). I don't, however, see this as a REST
API vs. config file issue - more a configuration repository issue. I'd also
add that anything which is going to be configured via a REST API needs to
provide a command line tool to drive that interface, so that out of the box
the system can be installed and configured via the tools and scripts shipped
with it.

Phil



From: openstack-bounces+philip.day=hp@lists.launchpad.net 
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of 
Jan Drake
Sent: 03 April 2012 02:23
To: Lorin Hochstein
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Limit flavors to specific hosts

If I understand this correctly, the motivation is to be able to provide a hint
to schedulers on host-level appropriateness based on information external to
that found in the hypervisor.

Right/Wrong/Close?

It would help to have a real-world example of where basic host resource
evaluation for scheduling would cause a situation requiring the host-level
hard-coding of what is essentially a flavor constraint.

I'll hold further thoughts for downstream.


Jan

On Apr 2, 2012, at 6:06 PM, Lorin Hochstein lo...@nimbisservices.com wrote:
Just created a blueprint for this:

https://blueprints.launchpad.net/nova/+spec/host-capabilities-api


Take care,

Lorin
--
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
https://www.nimbisservices.com/




On Apr 2, 2012, at 3:29 PM, Jay Pipes wrote:


Can I add a feature request to the below thoughtstream? Can we make it so that 
the management of these things can be done outside of config files? i.e. via a 
REST API with some simple middleware exposing the particular scheduler nodes' 
understanding of which capabilities/filters it is using to apply its scheduling 
algorithm?

Making changes to configuration files works OK for simple uses and testing, not 
so much for on-demand operations :) I say this after grumbling about similar 
configuration obstacles with ratelimits.

Best,
-jay

On 04/02/2012 02:37 PM, Chris Behrens wrote:

I have some plans for being able to set arbitrary capabilities for
hosts via nova.conf that you can use to build scheduler filters.

Right now, there are capabilities, but I believe we're only creating
these from hypervisor stats. You can filter on those today. What I'm
planning on adding is a way to specify additional keyname/value pairs in
nova.conf to supplement the capabilities we build from hypervisor stats.
You could set things like this in your nova.conf:

--host_capabilities=instance_type_ids=1,2,3;keyX;keyY=something

etc. Since capabilities are already passed to scheduler rules, you could
add some basic filters that do:

if 'instance_type_ids' in capabilities and \
        instance_type.id not in capabilities['instance_type_ids']:
    return False

Since host_capabilities are just arbitrary keyname/value pairs, you can
pretty much add anything you want to --host_capabilities and then write
some matching scheduler filter rules.
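As a sketch of how such a filter might work end to end (hypothetical helper names; nova's actual capabilities plumbing may differ), the proposed flag syntax can be parsed into a dict and checked per host:

```python
def parse_host_capabilities(raw):
    """Parse a --host_capabilities style string ("k=v1,v2;flag;k2=v")
    into a dict: bare keys become True, values become lists."""
    caps = {}
    for item in raw.split(';'):
        if '=' in item:
            key, _, value = item.partition('=')
            caps[key] = value.split(',')
        else:
            caps[item] = True
    return caps


def host_passes(capabilities, instance_type_id):
    """Reject a host whose instance_type_ids list excludes this flavor;
    hosts that declare no such list accept any flavor."""
    allowed = capabilities.get('instance_type_ids')
    if allowed is not None and str(instance_type_id) not in allowed:
        return False
    return True
```

For example, parsing "instance_type_ids=1,2,3;keyX;keyY=something" yields a dict where keyX is True and the other keys map to lists of strings.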

That's the basic idea, anyway. The exact same behavior will apply to
'cells' and the cells scheduler as well (except you'll have
cells_capabilities= somewhere, probably nova.conf for the cells service).

- Chris


On Apr 2, 2012, at 10:36 AM, Day, Phil wrote:

Hi Folks,
I'm looking for a capability to limit some flavours to some hosts. I
want the mapping to be as flexible as possible, and work within a
zone/cell (I don't want to add zones just to get 

Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread John Garbutt
+1 for the move to openstack.common

I remember discussion about versioning these messages to aid rolling /
zero-downtime upgrades.

It might be worth considering that while doing the decoupling.
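A minimal sketch of what such message versioning could look like (purely illustrative, not nova's actual rpc format): each message carries a version, and a receiver accepts it only when the major version matches and the minor version is no newer than what it supports.

```python
def make_message(method, args, version='1.0'):
    # Tag every rpc payload with the version of the message format,
    # so mixed old/new services can negotiate during a rolling upgrade.
    return {'method': method, 'args': args, 'version': version}


def can_handle(message, supported='1.1'):
    # Same major version required; the sender's minor version must not
    # be newer than what this (possibly older) service understands.
    msg_major, msg_minor = (int(p) for p in message['version'].split('.'))
    sup_major, sup_minor = (int(p) for p in supported.split('.'))
    return msg_major == sup_major and msg_minor <= sup_minor
```

Under this scheme a minor bump means a backward-compatible change, while a major bump signals that old services must be upgraded first.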

Cheers,
John

 -Original Message-
 From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net
 [mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net]
 On Behalf Of Chris Behrens
 Sent: 02 April 2012 23:53
 To: Russell Bryant
 Cc: Duncan McGreggor; openstack@lists.launchpad.net; Chris Behrens
 Subject: Re: [Openstack] Moving nova.rpc into openstack.common
 
 Awesome, thanks :)
 
 On Apr 2, 2012, at 3:46 PM, Russell Bryant wrote:
 
  I just threw up a patch a little while ago:
 
  https://review.openstack.org/6119
 
  --
  Russell Bryant
 
  On 04/02/2012 06:37 PM, Chris Behrens wrote:
  Seems like a sensible plan.  Carrot can go now.  I marked it deprecated so
 we can remove it in Folsom.  I can take care of this today, even.
 
  On Apr 2, 2012, at 3:02 PM, Duncan McGreggor dun...@dreamhost.com
 wrote:
 
  +1
 
  Thanks for exploring this, Russell!
 
  Next step: getting a common REST API abstraction in place that all
  the projects can use... ;-)
 
  d
 
  On Mon, Apr 2, 2012 at 4:26 PM, Russell Bryant rbry...@redhat.com
 wrote:
  Greetings,
 
  There was a thread on this list a little while ago about moving the
  notification drivers that are in nova and glance into
  openstack.common since they provide very similar functionality, but
  have implementations that have diverged over time.  Is anyone
  actively working on this?  If so, please let me know.
 
  For the message queue notifiers, nova uses nova.rpc to do the heavy
  lifting.  Glance has notifiers written against the messaging libs
  directly.  I think it makes sense to take the nova approach.  This
  would mean moving nova.rpc into openstack.common before the
  notifiers can get moved.
 
  I have started looking at moving nova.rpc to openstack.common.rpc.
  My plan is:
 
  1) Write a series of patches that reduces coupling in nova.rpc on
  the rest of nova.
 
  2) Submit changes needed to support this decoupling to
 openstack.common.
 
  3) Once nova.rpc is sufficiently decoupled, move it to
 openstack.common.
 
  While doing the above, I want to aim for the least amount of
  disruption to the rpc interface as is practical.
 
  While we're at it, is it time to drop nova.rpc.impl_carrot?  It is
  marked as deprecated in Essex.  Is there any reason anyone would
  still use impl_carrot over impl_kombu?
 
  Thoughts?
 
  Thanks,
 
  --
  Russell Bryant
 


Re: [Openstack] [OpenStack] Xen Hypervisor

2012-04-03 Thread Alexandre Leites

Dom0: 192.168.100.251
DomU: 192.168.100.238

nova.conf: http://pastebin.com/B0PVVWiv
ifconfig (dom0 and domU): http://pastebin.com/iCLX91RS
nova network table: http://pastebin.com/k5XcXHee

Please tell me how to gather any other information you need; I think I have
collected everything.

Thank you.
From: john.garb...@citrix.com
To: alex_...@live.com; openstack@lists.launchpad.net
Date: Mon, 2 Apr 2012 16:05:15 +0100
Subject: RE: [Openstack] [OpenStack] Xen Hypervisor

Just double checking, but about the other machine you have put on the same
network as your VMs, is the interface configured in the same subnet as the
VMs?

Also, just to be clear, I am not sure anyone has ever tried the sort of setup
you are wanting (single interface, with no VLANs). I have seen many setups
using a single interface and VLANs, and many setups using different physical
interfaces, and some combinations of the two, but I haven't seen anyone try
collapsing everything onto a single physical interface. It might not work due
to the way nova network in FlatDHCP works (i.e. it adds a DHCP server onto
your network).

Just checking, but you are using FlatDHCP networking, I presume?

If you haven't assigned a floating IP and you are using some kind of flat
networking, the public network configuration should be largely unimportant.

I think you want these settings:
flat_network_bridge=xenbr0
flat_interface=eth0

What is the state of your VM, does it seem to have the correct IP address
from the nova network DHCP?

I think you are close with your flags now, but I can't be specific with the
help without more information:
* a list of your flags in nova.conf
* networking info (ifconfig or otherwise) from both Dom0 and DomU (with
compute running) and the VM
* networking config from XenServer (networks, DomU VIFs and VM VIFs)
* a copy (in text form) of the network related tables in your DB (or all the
values from your "nova-manage network create" and related calls)

More general advice: to make sure all the new network settings get applied, I
would recommend:
* stop all nova services
* reset your nova DB
    * delete the old DB
    * create the DB, do DB migration again, etc.
    * add in your network again (please tell us what values you use for that)
* start all nova services

It might be worth adding a second VIF on the same network, calling that eth1
in the Nova domU and then using that as the flat_interface. Normally it is
not recommended that you configure an IP address on the interface that
nova-network uses for the guest networking. Not sure what you are trying will
work.

Thanks,
John

From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net
[mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net] On
Behalf Of Alexandre Leites
Sent: 02 April 2012 14:53
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] [OpenStack] Xen Hypervisor

I tried the last thing about changing the public interface to eth0, but it
still doesn't work. I still get the error that I can't ping anything outside
of the VM (created by nova on Xen).

From: todd.desh...@xen.org
 Date: Mon, 2 Apr 2012 09:15:52 -0400
 Subject: Re: [Openstack] [OpenStack] Xen Hypervisor
 To: alex_...@live.com
 CC: openstack@lists.launchpad.net
 
 On Mon, Apr 2, 2012 at 8:49 AM, Alexandre Leites alex_...@live.com wrote:
   OK, anyway I tested it and it didn't work. Any other solution?
 
 
 You should be more specific.
 
 You should explain the specific flags you tried and then post the
 relevant logs files.
 
 Thanks,
 Todd
 
 
 -- 
 Todd Deshane
 http://www.linkedin.com/in/deshantm
 http://blog.xen.org/
  http://wiki.xen.org/


Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Thierry Carrez
Russell Bryant wrote:
 I proposed a session to discuss this a bit at the summit:
 
 http://summit.openstack.org/sessions/view/95
 
 There are a lot of ways this could be approached.  I'm going to try to
 write up a proposal at some point to get a discussion moving.

Note that there is also the following discussion:
http://summit.openstack.org/sessions/view/39

which was folded into a more generic openstack-common plans session:
http://summit.openstack.org/sessions/view/28

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] [OpenStack] Xen Hypervisor

2012-04-03 Thread John Garbutt
Thanks for the info. The one thing I am missing is the ifconfig info from 
inside your VM instance (I would personally use XenCenter to access the console 
and see what is going on). I am assuming that it is not getting the correct IP 
address from the DHCP server in nova-network. And I am assuming there is no 
other DHCP server on that network.

OK, so it looks like you have this:
--flat_network_bridge=xenbr0
--flat_interface=eth0
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0

Good news is that the DB looks to correctly have the flat_interface.

I can't see how your DomU can be listening on 192.168.100.228, given your
current config; it looks like nova network has attached and reconfigured eth0
for you. Are you launching instances using Horizon or the nova CLI?

One idea that might make this work (others please correct me if I am wrong):

* Add two extra VIFs (virtual network interface) on your DomU VM

* Attach those VIFs to xenbr0, just like the other VIF that is eth0 on 
your DomU

Now try the following configuration:

* Configure eth0 to have the IP address you want (presumably it is a 
static address, as you can't have a DHCP on that network and use flatDHCP?): 
192.168.100.238

* Do not configure eth1 to have any address

* Configure eth2 to have another IP address that is public facing: 
10.42.0.42 / 255.255.255.0, something in the floating ip network maybe

* Change the flags:

--flat_network_bridge=xenbr0
--flat_interface=eth1
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth2

That might leave things a little less broken than trying with a single 
interface in DomU?

However, I am no expert on how nova-network works, hopefully others can confirm 
the best way forward.

Cheers,
John

From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net 
[mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net] On 
Behalf Of Alexandre Leites
Sent: 03 April 2012 13:35
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] [OpenStack] Xen Hypervisor

Dom0: 192.168.100.251
DomU: 192.168.100.238

nova.conf http://pastebin.com/B0PVVWiv
ifconfig (dom0 and domU): http://pastebin.com/iCLX91RS
nova network table: http://pastebin.com/k5XcXHee

Please tell me how to gather any other information you need; I think I have
collected everything.

Thank you.


From: john.garb...@citrix.com
To: alex_...@live.com; openstack@lists.launchpad.net
Date: Mon, 2 Apr 2012 16:05:15 +0100
Subject: RE: [Openstack] [OpenStack] Xen Hypervisor


Just double checking, but about the other machine you have put on the same 
network as your VMs, is the interface configured in the same subnet as the VMs?

Also, just to be clear, I am not sure anyone has ever tried the sort of setup you 
are wanting (Single interface, with no VLANs). I have seen many setups using a 
single interface and VLANs, and many setups using different physical 
interfaces, and some combinations of the two, but I haven't seen anyone try 
collapsing everything onto a single physical interface. It might not work due 
to the way nova network in flatDHCP works (i.e. adds a DHCP server onto your 
network).

Just checking, but you are using FlatDHCP networking, I presume?

If you haven't assigned a floating ip and you are using some kind of flat 
networking, the public network configuration should be largely unimportant.

I think you want these settings:
flat_network_bridge=xenbr0
flat_interface=eth0

What is the state of your VM, does it seem to have the correct IP address from 
the nova network DHCP?

I think you are close with your flags now, but I can't be specific with the 
help without more information:
* a list of your flags in nova.conf
* networking info (ifconfig or otherwise) from both Dom0 and DomU (with 
compute running) and the VM
* networking config from XenServer (networks, DomU VIFs and VM VIFs)
* a copy (in text form) of the network related tables in your DB (or 
all the values from your nova-manage network create and related calls)

More general advice is:

To make sure all the new network settings get applied, I would recommend:

 *   stop all nova services
 *   reset your nova DB
     *   delete the old DB
     *   create the DB, do DB migration again, etc.
     *   add in your network again (please tell us what values you use for that)
 *   start all nova services

It might be worth adding a second VIF on the same network, calling that eth1 in 
the Nova domU and then using that as the flat_interface. Normally it is not 
recommended that you configure an IP address on the interface that nova-network 
uses for the guest networking. Not sure what you are trying will work.

Thanks,
John

From: openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net 
[mailto:openstack-bounces+john.garbutt=eu.citrix@lists.launchpad.net] On 
Behalf Of 

Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Swaminathan Venkataraman
Hi,
I'm actively working on the notification part. I did some analysis on the
code and dependencies and was planning to submit a blueprint by end of the
week. We can use that to finalize the interface for the notification. The
rpc implementation is rich (compared to just what we need for
notifications) because nova uses it for all rpc related communications. The
idea that I was working with was to just move what we need for
notifications. In that scenario we do not really need all of rpc in
openstack-common. If we do want a common implementation that all OpenStack
components can use to communicate with the middleware, it might make sense to
move the whole of rpc to openstack-common.

Thoughts?


Anyways, here is the analysis and some of the comments I got...

Cheers,
Venkat

   -- Forwarded message --
   From: Swaminathan Venkataraman venkat...@gmail.com
   Date: Mon, Mar 19, 2012 at 8:31 PM
   Subject: Re: [Openstack] Notifiers
   To: Monsyne Dragon mdra...@rackspace.com
   Cc: Mark McLoughlin mar...@redhat.com, Jason Kölker 
   jason.koel...@rackspace.com, Jay Pipes jaypi...@gmail.com


   I did some analysis on notifier and rpc in nova and there are a bunch of
   dependencies that have to be sorted out before we can move them to
   openstack-common. Here are some of the details.

   - notifier and rpc use flags, utils, logging, context, db, and exception
     from nova.

   - The modules in notifier and rpc use FLAGS from flags.py, which is an
     instance of NovaConfigOpts. They mainly use it to register the config
     options and access them. Given that, it seems like we could use
     CommonConfigOpts directly to register the options. This will eliminate
     the dependency on flags and flagfile.

   - There are three functions that are used from utils: utcnow,
     import_object, and to_primitive. There is a utils in openstack-common
     which already contains utcnow and import_object. The code also matches
     line for line with the implementation in nova. The to_primitive function
     is missing in openstack-common. One option could be to move this
     function alone to openstack-common, which should eliminate the
     dependency on the nova-based utils.

   - notifier and api use log from nova. In fact they work with an instance
     of NovaContextAdapter, which in turn is an instance of LoggerAdapter.
     NovaContextAdapter is used to pass the context, the instance uuid, and
     the nova version to the logger. The modules in openstack-common are
     using the python logging module directly. So, if we need notifier to be
     able to print contextual information we will have to add this
     functionality to openstack-common.

   - Both nova and openstack-common have an implementation of
     RequestContext. The one in nova is richer, and both notifier and rpc
     use functionality from RequestContext in nova. The other difference is
     that the RequestContext in nova uses a weak reference store to save the
     context information. I did see a couple of instances where the context
     information was deleted from the store, but I'm not sure whether it is
     being accessed. So, should the context in openstack-common be enhanced?

   - db from nova is used only by capacity_notifier. It looks like it sends
     events that are only related to compute manager events. So, should this
     be part of openstack-common?

   I've not looked at exception. I'll also have to look at rpc in more
   detail. Please do let me know if this is the right direction.

   thanks,
   Venkat
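For reference, the to_primitive helper mentioned above turns arbitrary objects into JSON-serializable primitives. A simplified sketch (an assumption: nova's real version also handles depth limits and datetimes, though its exact rules may differ):

```python
import datetime


def to_primitive(value, max_depth=3, _depth=0):
    """Recursively convert a value into JSON-serializable primitives."""
    if _depth > max_depth:
        # Give up on deeply nested structures rather than recurse forever.
        return '?'
    if value is None or isinstance(value, (str, int, float, bool)):
        return value
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if isinstance(value, dict):
        return dict((k, to_primitive(v, max_depth, _depth + 1))
                    for k, v in value.items())
    if isinstance(value, (list, tuple, set)):
        return [to_primitive(v, max_depth, _depth + 1) for v in value]
    if hasattr(value, '__dict__'):
        # Fall back to the object's attribute dict.
        return to_primitive(vars(value), max_depth, _depth + 1)
    return repr(value)
```

Moving a small standalone helper like this avoids dragging the rest of nova's utils into openstack-common.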

-- Forwarded message --
From: Mark McLoughlin mar...@redhat.com
Date: Tue, Mar 20, 2012 at 8:05 PM
Subject: Re: [Openstack] Notifiers
To: Swaminathan Venkataraman venkat...@gmail.com
Cc: Monsyne Dragon mdra...@rackspace.com, Jason Kölker 
jason.koel...@rackspace.com, Jay Pipes jaypi...@gmail.com

-- Forwarded message --
From: Mark McLoughlin mar...@redhat.com
Date: Tue, Mar 20, 2012 at 1:25 PM
Subject: Re: [Openstack] Notifiers
To: Swaminathan Venkataraman venkat...@gmail.com
Cc: Monsyne Dragon mdra...@rackspace.com, Jason Kölker 
jason.koel...@rackspace.com, Jay Pipes jaypi...@gmail.com


Hi Venkat,

Could you file a bug or blueprint against openstack-common with all this
great info?

Cheers,
Mark.

On Tue, 2012-03-20 at 19:37 +0530, Swaminathan Venkataraman wrote:
 Sure Mark, but here is a bit more of analysis that I did. I'll file a
 blueprint because I'm not sure if this is a bug.


 There is an exception module defined in openstack-common. This has a
 class named openstackException which is similar to NovaException and I
 guess is to be subclassed to define exceptions that go in
 openstack-common. openstack-common also defines a decorator for
 wrapping  methods to catch exceptions, but it does not try to send the
 exception to the notification system like the one in nova.exception
 does. Based on 

Re: [Openstack] question about keystone and dashboard on RHEL6

2012-04-03 Thread Russell Bryant
On 04/02/2012 08:44 PM, Xin Zhao wrote:
 On 4/2/2012 6:35 PM, Russell Bryant wrote:
 On 04/02/2012 03:09 PM, Xin Zhao wrote:
 Hello,

 I am new to OpenStack and trying to install the diablo release on a
 RHEL6 cluster. I follow instructions here:
 http://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova

 The instruction doesn't mention how to install and configure Keystone
 and dashboard services, I wonder:

 1) are these two services available for RHEL6, in diablo release?
 2) do I need to go to the latest Essex release, and where the
 instructions is?
 The dashboard, horizon, is not included with the Diablo packages that
 you find in EPEL6 right now.  When we update EPEL6 to Essex, which
 should be within the next few weeks, horizon will be included as well.

 
 How about keystone? The instructions here
 (http://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova) don't
 mention how to install and configure keystone, although it tells how to
 clean up keystone, which makes me think there is something
 missing in the earlier sections of this instruction.

Keystone is there.  It has already been updated to one of the Essex RCs,
actually.  As EPEL6 gets updated to Essex, these instructions will
become the ones you want to follow, and they include Keystone:

https://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17

-- 
Russell Bryant



Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Swaminathan Venkataraman
Hi,
I'm actively working on the notification part. I did some analysis on the
code and dependencies and was planning to submit a blueprint by end of the
week. We can use that to finalize the interface for the notification. The
rpc implementation is rich (compared to just what we need for
notifications) because nova uses it for all rpc related communications. The
idea that I was working with was to just move what we need for
notifications. In that scenario we do not really need all of rpc in
openstack-common. If we do want a common implementation that all openstack
components can use to communicate the middleware, it might make to sense to
move the whole of rpc to openstack-common.

Thoughts?


Anyways, here is the analysis and some of the comments I got...

Cheers,
Venkat

   -
   -- Forwarded message --
   From: Swaminathan Venkataraman venkat...@gmail.com
   Date: Mon, Mar 19, 2012 at 8:31 PM
   Subject: Re: [Openstack] Notifiers
   To: Monsyne Dragon mdra...@rackspace.com
   Cc: Mark McLoughlin mar...@redhat.com, Jason Kölker 
   jason.koel...@rackspace.com, Jay Pipes jaypi...@gmail.com


   I did some analysis on notifier and rpc in nova and there are a bunch of
   dependencies that have to be sorted out before we can move them to
   openstack-common. Here are some of the details.

   - notifier and rpc use flags, utils, logging, context, db, and exception
   from nova.

   - The modules in notifier and rpc use FLAGS from flags.py, which is an
   instance of NovaConfigOpts. They mainly use it to register the config
   options and access them. Given that, it seems like we could use
   CommonConfigOpts directly to register the options. This would eliminate
   the dependency on flags and flagfile.

   - Three functions are used from utils: utcnow, import_object, and
   to_primitive. There is a utils in openstack-common which already contains
   utcnow and import_object, and the code matches the nova implementation
   line for line. The to_primitive function is missing in openstack-common.
   One option could be to move this function alone to openstack-common,
   which should eliminate the dependency on the nova utils.

   - notifier and api use log from nova. In fact they work with an instance
   of NovaContextAdapter, which in turn is an instance of LoggerAdapter.
   NovaContextAdapter is used to pass the context, the instance uuid, and
   the nova version to the logger. The modules in openstack-common use the
   python logging module directly, so if we need notifier to be able to
   print contextual information we will have to add this functionality to
   openstack-common.

   - Both nova and openstack-common have an implementation of
   RequestContext. The one in nova is richer, and both notifier and rpc use
   functionality from nova's RequestContext. The other difference is that
   the RequestContext in nova uses a weak reference store to save the
   context information. I did see a couple of instances where the context
   information was deleted from the store, but I'm not sure whether it is
   being accessed. So, should the context in openstack-common be enhanced?

   - db from nova is used only by capacity_notifier, which appears to send
   only events related to the compute manager. So, should this be part of
   openstack-common?

   I've not looked at exception. I'll also have to look at rpc in more
   detail. Please do let me know if this is the right direction.

   thanks,
   Venkat
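The to_primitive helper mentioned in the analysis above could look roughly
like this. This is a simplified sketch for illustration; the real nova
implementation handles more cases (iterators, instance state, depth limits
tuned differently):

```python
from datetime import datetime


def to_primitive(value, depth=0, max_depth=3):
    """Recursively convert a value into JSON-serializable primitives.

    Simplified sketch of the helper discussed above, not nova's actual code.
    """
    # Plain primitives pass through unchanged.
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value
    # Timestamps become ISO-8601 strings.
    if isinstance(value, datetime):
        return value.isoformat()
    # Guard against unbounded recursion on cyclic structures.
    if depth >= max_depth:
        return "?"
    if isinstance(value, dict):
        return {k: to_primitive(v, depth + 1) for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [to_primitive(v, depth + 1) for v in value]
    # Fall back to an object's attribute dict, then to str().
    if hasattr(value, "__dict__"):
        return to_primitive(vars(value), depth + 1)
    return str(value)
```

Because it has no nova imports at all, moving a function like this into
openstack-common is straightforward.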

-- Forwarded message --
From: Mark McLoughlin mar...@redhat.com
Date: Tue, Mar 20, 2012 at 8:05 PM
Subject: Re: [Openstack] Notifiers
To: Swaminathan Venkataraman venkat...@gmail.com
Cc: Monsyne Dragon mdra...@rackspace.com, Jason Kölker 
jason.koel...@rackspace.com, Jay Pipes jaypi...@gmail.com

-- Forwarded message --
From: Mark McLoughlin mar...@redhat.com
Date: Tue, Mar 20, 2012 at 1:25 PM
Subject: Re: [Openstack] Notifiers
To: Swaminathan Venkataraman venkat...@gmail.com
Cc: Monsyne Dragon mdra...@rackspace.com, Jason Kölker 
jason.koel...@rackspace.com, Jay Pipes jaypi...@gmail.com


Hi Venkat,

Could you file a bug or blueprint against openstack-common with all this
great info?

Cheers,
Mark.

On Tue, 2012-03-20 at 19:37 +0530, Swaminathan Venkataraman wrote:
 Sure Mark, but here is a bit more analysis that I did. I'll file a
 blueprint because I'm not sure if this is a bug.


 There is an exception module defined in openstack-common. This has a
 class named openstackException which is similar to NovaException and I
 guess is to be subclassed to define exceptions that go in
 openstack-common. openstack-common also defines a decorator for
 wrapping  methods to catch exceptions, but it does not try to send the
 exception to the notification system like the one in nova.exception
 does. Based on 

Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Russell Bryant
Thanks for sharing this information.  For the future, I think this type
of analysis and discussion is something that is great to have on the
mailing list instead of just a private group.  I wish I had seen it sooner.

The code in nova.rpc seems useful enough that it very well may be used
elsewhere.  I know it's more than what is needed for notifications, but
it does support what is needed for notifications (and more).  I like the
idea of moving the whole thing instead of having separate messaging code
for just notifications.

-- 
Russell Bryant

Re: [Openstack] (Chef) knife openstack with Essex RC1?

2012-04-03 Thread Matt Ray
I haven't had a chance to test with Essex yet, but I did get a session
accepted at the Developer Summit to get everyone on the same page as
far as Fog goes. Hopefully I'll get some cycles in on Essex before the
Developer Summit and get it fixed and working before then. Feel free
to email me if you have any questions or patches.
http://summit.openstack.org/sessions/view/44

Thanks,
Matt Ray
Senior Technical Evangelist | Opscode Inc.
m...@opscode.com | (512) 731-2218
Twitter, IRC, GitHub: mattray



On Mon, Apr 2, 2012 at 8:36 PM, Philipp Wollermann
wollermann_phil...@cyberagent.co.jp wrote:
 Hi,

 I wrote a mail to Matt @ Opscode, who wrote that guide, some days ago 
 inquiring about the state of the guide and his cookbooks repository.
 He told me that it's out of date and shouldn't be used for newer versions of 
 OpenStack - however there is much work going on for OpenStack Essex + Chef at 
 https://github.com/rcbops/chef-cookbooks/tree/ubuntu-precise :) According to 
 him, this repository will probably become the base for his own guide again, 
 once he has time to work on it.

 Best regards,
 Philipp Wollermann

 Infrastructure Engineer
 CyberAgent, Inc. (Tokyo)
 https://github.com/philwo


 On 2012/04/03, at 1:31, Brian Parker wrote:

 Hi!

 Has anyone tried following the instructions for knife openstack with 
 Essex?  We are trying these steps:

 http://wiki.opscode.com/display/chef/OpenStack+Bootstrap+Fast+Start+Guide

 When attempting knife openstack flavor list, we are getting an ERROR: 
 RuntimeError: Unable to parse service catalog. error because the service 
 catalog is empty in the response.

 Thanks!

 Brian





Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Mark McLoughlin
Hey Russell,

On Mon, 2012-04-02 at 16:26 -0400, Russell Bryant wrote:
 Greetings,
 
 There was a thread on this list a little while ago about moving the
 notification drivers that are in nova and glance into openstack.common
 since they provide very similar functionality, but have implementations
 that have diverged over time.  Is anyone actively working on this?  If
 so, please let me know.
 
 For the message queue notifiers, nova uses nova.rpc to do the heavy
 lifting.  Glance has notifiers written against the messaging libs
 directly.  I think it makes sense to take the nova approach.  This would
 mean moving nova.rpc into openstack.common before the notifiers can get
 moved.
 
 I have started looking at moving nova.rpc to openstack.common.rpc.  My
 plan is:
 
 1) Write a series of patches that reduces coupling in nova.rpc on the
 rest of nova.
 
 2) Submit changes needed to support this decoupling to openstack.common.
 
 3) Once nova.rpc is sufficiently decoupled, move it to openstack.common.
 
 While doing the above, I want to aim for the least amount of disruption
 to the rpc interface as is practical.

That looks like a good plan. Have you got a rough idea already what
needs to happen for (2) in openstack-common?

Cheers,
Mark.
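The decoupling Russell describes in steps (1) and (2) amounts to having the
rpc code register its options against whatever config registry it is
handed, rather than importing nova.flags directly. A toy illustration of
the pattern follows; the class and option names here are hypothetical stand-ins,
not the actual openstack-common cfg API:

```python
class ConfigOpts:
    """Toy option registry standing in for a shared config object."""

    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        # First registration wins, matching register-then-access usage.
        self._opts.setdefault(name, default)

    def __getattr__(self, name):
        # Expose registered options as attributes (conf.control_exchange).
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError(name)


def register_rpc_opts(conf):
    """rpc code depends only on the registry passed in, not on nova.flags."""
    conf.register_opt("control_exchange", default="nova")
    conf.register_opt("rpc_thread_pool_size", default=64)
```

With this shape, nova can pass in its own registry while other projects
pass in theirs, which is exactly the coupling reduction step (1) targets.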




[Openstack] Best approach for deploying Essex?

2012-04-03 Thread Lillie Ross-CDSR11
With the pending release of Essex, I'm making plans to upgrade our internal 
cloud infrastructure.  My question is what will be the best approach?

Our cloud is being used to support internal research activities and thus needs 
to be 'relatively' stable, however as new features become available (major 
features), it would be nice to be able to roll them into our operational cloud 
relatively painlessly (wishful thinking perhaps).

My question is, should I base our new installation directly off the Essex 
branch in the git repository, or use the packages that will be deployed as part 
of the associated Ubuntu 12.04LTS release?  With Diablo, I was forced to use 
packages from the ManagedIT PPA with additional Keystone patches to get a 
consistent, stable platform up and running.  Obviously, some of these problems 
were due to confusion caused by various documents describing different 
incarnations of Openstack, and not really knowing what was current and stable.  
Especially the packages shipped with Ubuntu made assumptions about how 
Openstack was to be deployed that wasn't really apparent.

Just wondering.  Any thoughts appreciated.

Regards,
Ross




Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Jay Pipes

On 04/03/2012 09:36 AM, Russell Bryant wrote:

Thanks for sharing this information.  For the future, I think this type
of analysis and discussion is something that is great to have on the
mailing list instead of just a private group.  I wish I had seen it sooner.


In Venkat's defense, I believe he did reach out originally to the 
mailing list on this topic:


https://lists.launchpad.net/openstack/msg08539.html

Agree that it's always good to have more analysis in a public forum, but 
Venkat did just start contributing, so we can give him a break I 
think! :)


Best,

-jay



Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Jay Pipes

On 04/03/2012 11:16 AM, Mark McLoughlin wrote:

4) nova.exception

nova.rpc defines two exceptions based on NovaException.  They could be
based on OpenstackException from openstack-common, instead.  There's
also an RPC exception defined in nova.exception, but that can be moved
into nova.rpc with the others.


Is there any great value to having a base Exception class?

e.g. a cfg exception and an rpc exception are completely unrelated, so
I'd just have those modules define unrelated exceptions


The code also uses wrap_exception.  The one in openstack-common seems
sufficient for how it's used.

However, I'm not sure how people would feel about having both
openstack.common.exception and nova.exception in the tree since they
overlap quite a bit.  I like being able to do work in pieces, but having
them both in the tree leaves the code in an odd state, so we need some
end goal in mind.


I'm not a huge fan of openstack.common.exception


Yeah, I don't see too much value in having everything inherit from a 
common exception base class. I think the different projects will 
typically want to customize the error message text...



5) nova.context

I haven't looked at this one in detail, yet.  We'll have to sort out
what to do with RequestContext.  I see in the message from Swaminathan
Venkataraman that both openstack-common and nova have RequestContext,
but there's more code in the nova version.  I suppose we should look at
making the openstack-common version sufficient for nova and then switch
nova to it.


Glance also has a different RequestContext:

https://github.com/openstack/glance/blob/master/glance/common/context.py

I wouldn't mind if the Glance-specific RequestContext inherited from a 
base openstack-common one, though, and just added its stuff related to 
Glance's owner_is_tenant config option
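That layering might look like the following. Class names and attributes
here are hypothetical, chosen only to illustrate a shared base with a
project-specific subclass:

```python
class RequestContext:
    """Minimal shared base context (a sketch; real ones carry more state)."""

    def __init__(self, user=None, tenant=None, is_admin=False, **kwargs):
        self.user = user
        self.tenant = tenant
        self.is_admin = is_admin

    def to_dict(self):
        return {"user": self.user, "tenant": self.tenant,
                "is_admin": self.is_admin}


class GlanceRequestContext(RequestContext):
    """Hypothetical subclass adding Glance's owner_is_tenant behavior."""

    def __init__(self, owner_is_tenant=True, **kwargs):
        super().__init__(**kwargs)
        self.owner_is_tenant = owner_is_tenant

    @property
    def owner(self):
        # Who owns resources created in this request.
        return self.tenant if self.owner_is_tenant else self.user
```

Nova's richer context (weak reference store, extra fields) could subclass
the common base the same way.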


Best,
-jay



Re: [Openstack] Swift and keystone: asking for an auth token.

2012-04-03 Thread Dolph Mathews
Included one answer for you below :)

-Dolph

On Tue, Apr 3, 2012 at 9:53 AM, Pierre Amadio
pierre.ama...@canonical.comwrote:


  The ubuntu user is associated to the admin role (I know I did it with
  keystone user-role-add, although I'm not sure how to list the roles of a
  given user to double-check; if you know how to do that, please let me
  know).


See `keystone help role-list`. I had a review to change this to
user-role-list to complement user-role-add and user-role-remove, but the
code review just expired. I may re-propose once folsom development picks up.






Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Russell Bryant
On 04/03/2012 12:23 PM, Jay Pipes wrote:
 On 04/03/2012 09:36 AM, Russell Bryant wrote:
 Thanks for sharing this information.  For the future, I think this type
 of analysis and discussion is something that is great to have on the
 mailing list instead of just a private group.  I wish I had seen it
 sooner.
 
 In Venkat's defense, I believe he did reach out originally to the
 mailing list on this topic:
 
 https://lists.launchpad.net/openstack/msg08539.html
 
 Agree that it's always good to have more analysis in a public forum, but
 Venkat did just start contributing, so we can give him a break I
 think! :)

Yep.  I didn't mean for that to come off in a negative way, so sorry
about that.

I did see the thread.  It wasn't clear that anyone was actively working
on it, though.

-- 
Russell Bryant



Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread John Garbutt
  However, I'm not sure how people would feel about having both
  openstack.common.exception and nova.exception in the tree since they
  overlap quite a bit.  I like being able to do work in pieces, but
  having them both in the tree leaves the code in an odd state, so we
  need some end goal in mind.
 
  I'm not a huge fan of openstack.common.exception
 
 Yeah, I don't see too much value in having everything inherit from a common
 exception base class. I think the different projects will typically want to
 customize the error message text...

My vote would be to have some handy mix-ins in openstack.common, if that works 
for everyone.
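Such a mix-in might look like the following. This is a hypothetical sketch,
not an existing openstack.common class; each project would keep its own
exception hierarchy and message text while sharing the formatting behavior:

```python
class FormatMessageMixin:
    """Shared message-formatting behavior for project exceptions (sketch)."""

    # Subclasses override this with their own message text.
    msg_fmt = "An unknown error occurred"

    def __init__(self, **kwargs):
        # Interpolate keyword arguments into the class's message template.
        message = self.msg_fmt % kwargs if kwargs else self.msg_fmt
        super().__init__(message)


class InstanceNotFound(FormatMessageMixin, Exception):
    """Example project-specific exception using the mix-in."""
    msg_fmt = "Instance %(instance_id)s could not be found"
```

This keeps the per-project base classes (NovaException and friends)
unrelated while still avoiding duplicated formatting code.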



Re: [Openstack] Moving nova.rpc into openstack.common

2012-04-03 Thread Jay Pipes

On 04/03/2012 01:40 PM, John Garbutt wrote:

However, I'm not sure how people would feel about having both
openstack.common.exception and nova.exception in the tree since they
overlap quite a bit.  I like being able to do work in pieces, but
having them both in the tree leaves the code in an odd state, so we
need some end goal in mind.


I'm not a huge fan of openstack.common.exception


Yeah, I don't see too much value in having everything inherit from a common
exception base class. I think the different projects will typically want to
customize the error message text...


My vote would be to have some handy mix-ins in openstack.common, if that works 
for everyone


Sure, that would work...

-jay



Re: [Openstack] nova-api start failed in multi_host compute nodes.

2012-04-03 Thread Vishvananda Ishaya
Your api-paste.ini is very out of date.  Here is the section from the current 
version:


# Metadata #

[composite:metadata]
use = egg:Paste#urlmap
/: metaversions
/latest: meta
/1.0: meta
/2007-01-19: meta
/2007-03-01: meta
/2007-08-29: meta
/2007-10-10: meta
/2007-12-15: meta
/2008-02-01: meta
/2008-09-01: meta
/2009-04-04: meta

[pipeline:metaversions]
pipeline = ec2faultwrap logrequest metaverapp

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaverapp]
paste.app_factory = nova.api.metadata.handler:Versions.factory

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

-

Try updating to the paste config included in etc/nova/api-paste.ini

FYI, you can also run just the metadata server by using the nova-api-metadata
binary instead of using nova-api and changing enabled_apis.
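Note also that the flag shown in the original post is misspelled
(`--enbled_apis`); with that spelling the metadata app is never enabled even
after the paste file is fixed. The corrected nova.conf line would be:

```ini
# nova.conf -- note the spelling: enabled_apis, not enbled_apis
--enabled_apis=metadata
```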

Vish

On Apr 3, 2012, at 1:19 AM, 한승진 wrote:

 Hi all
 
 I am trying to start nova-api in my compute node to use metadata.
 
 I couldn't success yet. I found this log in nova-api log
 
 2012-04-03 15:18:43,908 CRITICAL nova [-] Could not load paste app 'metadata' 
 from /etc/nova/api-paste.ini
  36 (nova): TRACE: Traceback (most recent call last):
  37 (nova): TRACE:   File /usr/local/bin/nova-api, line 51, in module
  38 (nova): TRACE: servers.append(service.WSGIService(api))
  39 (nova): TRACE:   File 
 /usr/local/lib/python2.7/dist-packages/nova/service.py, line 299, in 
 __init__
  40 (nova): TRACE: self.app = self.loader.load_app(name)
  41 (nova): TRACE:   File 
 /usr/local/lib/python2.7/dist-packages/nova/wsgi.py, line 414, in load_app
  42 (nova): TRACE: raise exception.PasteAppNotFound(name=name, 
 path=self.config_path)
  43 (nova): TRACE: PasteAppNotFound: Could not load paste app 'metadata' from 
 /etc/nova/api-paste.ini
  44 (nova): TRACE:
  45 2012-04-03 15:20:43,786 ERROR nova.wsgi [-] No section 'metadata' 
 (prefixed by 'app' or 'application' or 'composite' or 'composit' or 
 'pipeline' or 'filter-app') found in config /etc/nova/api-paste.ini
 
 I added the flag in my nova.conf
 
 --enbled_apis=metadata
 
 Here is my api-paste.ini
 
 ###
 # EC2 #
 ###
 
 [composite:ec2]
 use = egg:Paste#urlmap
 /: ec2versions
 /services/Cloud: ec2cloud
 /services/Admin: ec2admin
 /latest: ec2metadata
 /2007-01-19: ec2metadata
 /2007-03-01: ec2metadata
 /2007-08-29: ec2metadata
 /2007-10-10: ec2metadata
 /2007-12-15: ec2metadata
 /2008-02-01: ec2metadata
 /2008-09-01: ec2metadata
 /2009-04-04: ec2metadata
 
 [pipeline:ec2cloud]
 pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor
 # NOTE(vish): use the following pipeline for deprecated auth
 #pipeline = logrequest authenticate cloudrequest authorizer ec2executor
 
 [pipeline:ec2admin]
 pipeline = logrequest ec2noauth adminrequest authorizer ec2executor
 # NOTE(vish): use the following pipeline for deprecated auth
 #pipeline = logrequest authenticate adminrequest authorizer ec2executor
 
 [pipeline:ec2metadata]
 pipeline = logrequest ec2md
 
 [pipeline:ec2versions]
 pipeline = logrequest ec2ver
 
 [filter:logrequest]
 paste.filter_factory = nova.api.ec2:RequestLogging.factory
 
 [filter:ec2lockout]
 paste.filter_factory = nova.api.ec2:Lockout.factory
 
 [filter:ec2noauth]
 paste.filter_factory = nova.api.ec2:NoAuth.factory
 
 [filter:authenticate]
 paste.filter_factory = nova.api.ec2:Authenticate.factory
 
 [filter:cloudrequest]
 controller = nova.api.ec2.cloud.CloudController
 paste.filter_factory = nova.api.ec2:Requestify.factory
 
 [filter:adminrequest]
 controller = nova.api.ec2.admin.AdminController
 paste.filter_factory = nova.api.ec2:Requestify.factory
 
 [filter:authorizer]
 paste.filter_factory = nova.api.ec2:Authorizer.factory
 
 [app:ec2executor]
 paste.app_factory = nova.api.ec2:Executor.factory
 
 [app:ec2ver]
 paste.app_factory = nova.api.ec2:Versions.factory
 
 [app:ec2md]
 paste.app_factory = 
 nova.api.ec2.metadatarequesthandler:MetadataRequestHandler.factory
 
 #
 # Openstack #
 #
 
 [composite:osapi]
 use = call:nova.api.openstack.urlmap:urlmap_factory
 /: osversions
 /v1.1: openstackapi11
 
 [pipeline:openstackapi11]
 pipeline = faultwrap noauth ratelimit serialize extensions osapiapp11
 # NOTE(vish): use the following pipeline for deprecated auth
 # pipeline = faultwrap auth ratelimit serialize extensions osapiapp11
 
 [filter:faultwrap]
 paste.filter_factory = nova.api.openstack:FaultWrapper.factory
 
 [filter:auth]
 paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory
 
 [filter:noauth]
 paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory
 
 [filter:ratelimit]
 paste.filter_factory = 
 nova.api.openstack.limits:RateLimitingMiddleware.factory
 
 [filter:serialize]
 paste.filter_factory = 
 nova.api.openstack.wsgi:LazySerializationMiddleware.factory
 
 [filter:extensions]
 paste.filter_factory = 
 nova.api.openstack.extensions:ExtensionMiddleware.factory
 
 [app:osapiapp11]
 

Re: [Openstack] Instance fails to spawn when instance_path is nfs mounted

2012-04-03 Thread Diego Parrilla Santamaría
We use NFS-backed instances a lot, and this problem normally has to do with
incorrect permission management on your filer and/or client.

Check that users other than root can write to the NFS share (especially the
libvirt user).

Diego
-- 
Diego Parrilla
CEO | www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29
skype: diegoparrilla
http://www.stackops.com/




On Tue, Apr 3, 2012 at 6:43 PM, Mandar Vaze / मंदार वझे 
mandarv...@gmail.com wrote:

 I saw an old question posted here :
 https://answers.launchpad.net/nova/+question/164689

 But I am not trying live migration.

 I have nfs mounted instances_path - so when I try to spawn an instance I
 run into the above errors. Especially following :

 File /usr/lib/python2.7/dist-packages/libvirt.py, line 372, in
 createWithFlags
 40842 2012-04-03 05:42:27 TRACE nova.rpc.amqp if ret == -1: raise
 libvirtError ('virDomainCreateWithFlags() failed', dom=self)
 40843 2012-04-03 05:42:27 TRACE nova.rpc.amqp libvirtError: internal error
 Process exited while reading console log output: chardev: opening
 backend file failed

 But as you can see below, several files are created in this folder, so I
 am not sure if mine if permissions issue (Else none of the files would
 get created) The problem is reported when libvirt tries to write to
 console.log (File itself is created with correct permissions - just that
 this is zero byte file)

 mandar@ubuntu-dev-mandar:/nfs_shared_instances_path/instance-0005$ ll
 total 10944
 drwxrwxr-x 2 mandar libvirtd4096 2012-04-03 05:42 ./
 drwxrwxrwx 4 root   root4096 2012-04-03 05:42 ../
 -rw-rw 1 mandar libvirtd   0 2012-04-03 05:42 console.log
 -rw-r--r-- 1 mandar libvirtd 6291968 2012-04-03 05:42 disk
 -rw-rw-r-- 1 mandar libvirtd 4731440 2012-04-03 05:42 kernel
 -rw-rw-r-- 1 mandar libvirtd1067 2012-04-03 05:42 libvirt.xml
 -rw-rw-r-- 1 mandar libvirtd 2254249 2012-04-03 05:42 ramdisk

 I'm suspecting :
 https://bugs.launchpad.net/ubuntu/maverick/+source/libvirt/+bug/632696
  But the above doesn't show itself in a non-NFS setup

 Please suggest !!!

 -Mandar






Re: [Openstack] Limit flavors to specific hosts

2012-04-03 Thread Vishvananda Ishaya

On Apr 3, 2012, at 6:45 AM, Day, Phil wrote:

 Hi John,
  
 Maybe the problem with host aggregates is that it too quickly became 
 something that was linked to hypervisor capability, rather than being the 
 more general mechanism of which one form of aggregate could be linked to 
 hypervisor capabilities ?
  
 Should we have a “host aggregates 2.0” session at the Design Summit ?

+ 1

I think the primary use case is associating metadata with groups of hosts that 
can be interpreted by the scheduler.  Obviously, this same metadata can be used 
to create pools/etc. in the hypervisor, but we can't forget about the 
scheduler.  Modifying flags on the hosts for capabilities is ugly.
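As a sketch of that use case, a scheduler-side filter over aggregate
metadata could be as simple as the following. The data layout and the
`allowed_flavors` key are hypothetical, purely to illustrate metadata-driven
host selection:

```python
def hosts_for_flavor(aggregates, flavor_name):
    """Return hosts whose aggregate metadata allows the given flavor.

    `aggregates` maps aggregate name -> (metadata dict, list of hosts);
    the layout and metadata key are illustrative, not nova's schema.
    """
    hosts = set()
    for metadata, members in aggregates.values():
        if flavor_name in metadata.get("allowed_flavors", ()):
            hosts.update(members)
    return sorted(hosts)
```

The same metadata could then be reused by the hypervisor layer for pool
creation without the scheduler needing per-host capability flags.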

Vish



Re: [Openstack] Dashboard VNC Console failed to connect to server

2012-04-03 Thread Vishvananda Ishaya
It is working!

You are in the bios screen, so you probably just need to wait (software mode 
booting can take a while)

If the vm doesn't ever actually boot, you may be attempting to boot a 
non-bootable image.


Re: [Openstack] Best approach for deploying Essex?

2012-04-03 Thread Adam Gandelman

On 04/03/2012 08:20 AM, Lillie Ross-CDSR11 wrote:

My question is, should I base our new installation directly off the Essex 
branch in the git repository, or use the packages that will be deployed as part 
of the associated Ubuntu 12.04LTS release?  With Diablo, I was forced to use 
packages from the ManagedIT PPA with additional Keystone patches to get a 
consistent, stable platform up and running.  Obviously, some of these problems 
were due to confusion caused by various documents describing different 
incarnations of Openstack, and not really knowing what was current and stable.  
Especially the packages shipped with Ubuntu made assumptions about how 
Openstack was to be deployed that wasn't really apparent.


Hey Ross-

I can say that the Ubuntu precise packages have been kept relatively 
in-sync with each components' trunk git repository this cycle.  We've 
made a concerted effort to do weekly snapshot uploads of all Openstack 
components into the Precise archive starting from the beginning of the 
Essex+Precise dev cycles.  We've also maintained our own trunk PPA 
(updated continuously) around which we center our testing efforts.  Now 
that we're nearing the release of Essex, we've been ensuring the release 
candidates hit our archive as soon as they are released.  As soon as 
Essex final hits, it'll be uploaded into Ubuntu and give any users who 
care the remainder of the Ubuntu dev cycle (~1 month) to test and 
identify issues before LTS ships.


Re: deployment assumptions.  Last cycle, we were caught off-guard by 
Keystone's last-minute inclusion into Openstack core and the 
dependencies this introduced (dashboard especially).  It's not that we 
were making assumptions about how Openstack Diablo should be deployed, 
just that there was no way we could shoe-horn a new component into the 
release so late in the game.   This time around,  a similar curve ball 
was thrown our way with the Keystone Lite rewrite, but we were able to 
get this sorted on our end relatively quickly to ensure pending security 
reviews and main inclusion processes for Keystone weren't blocked.   
We're making very few assumptions going into LTS and hope to provide 
as-close-to-pure Essex experience as any.  I can only think of a few 
patches we're carrying, and there are only two default configuration 
files we ship that differ from those you'd find in the git repos [1]. 
Perhaps when we release Essex into Precise this/next week, we'll put 
some notes somewhere outlining any Ubuntu-specific changes for those who 
are interested.


Hope that helps, and of course we welcome your testing and bug reports!

-Adam


[1]  We ship a default nova.conf that configures some Ubuntu defaults: 
defaults to libvirt compute, uses nova-rootwrap for sudo shell execution 
(requested by our security team), uses tgt iscsi initiator instead of 
ietd (tgt is supported in our main archive, ietd is not).  Our default 
Keystone config defaults to the SQL catalog backend instead of the 
default templated file, though I think SQL catalog is the new default in 
folsom.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Limit flavors to specific hosts

2012-04-03 Thread John Garbutt
+1

It is certainly worth a session to decide how to modify the scheduler.

I suspect all you would need to do on the compute side is add stub 
implementations for the add/remove host operations in the compute manager base 
class (rather than throwing NotImplemented exceptions), and maybe an extra 
XenServer-specific flag to suppress the pool creation. I think it has all the 
DB and metadata functionality required.
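Sketched out, the stub idea might look like this (an illustrative skeleton only — not Nova's actual class names or method signatures):

```python
class ComputeDriverBase:
    """Illustrative stand-in for the compute manager/driver base class;
    the real Nova interface and names differ."""

    def add_aggregate_host(self, aggregate_id, host):
        raise NotImplementedError()

    def remove_aggregate_host(self, aggregate_id, host):
        raise NotImplementedError()


class GenericComputeDriver(ComputeDriverBase):
    """The suggestion above: hypervisors with no pool concept simply
    accept the add/remove calls as no-ops instead of raising."""

    def add_aggregate_host(self, aggregate_id, host):
        pass  # nothing to do at the hypervisor layer

    def remove_aggregate_host(self, aggregate_id, host):
        pass
```

With stubs like these, aggregate membership stays a purely DB/metadata operation for drivers that don't need hypervisor-side pooling.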

One thing worth raising is that currently all hosts in an aggregate must be in 
the same availability zone. That seems reasonable to me.

From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
Sent: 03 April 2012 19:11
To: Day, Phil
Cc: John Garbutt; Jan Drake; Lorin Hochstein; openstack@lists.launchpad.net
Subject: Re: [Openstack] Limit flavors to specific hosts


On Apr 3, 2012, at 6:45 AM, Day, Phil wrote:


Hi John,

Maybe the problem with host aggregates is that it too quickly became something 
that was linked to hypervisor capability, rather than being the more general 
mechanism of which one form of aggregate could be linked to hypervisor 
capabilities ?

Should we have a host aggregates 2.0 session at the Design Summit ?

+ 1

I think the primary use case is associating metadata with groups of hosts that 
can be interpreted by the scheduler.  Obviously, this same metadata can be used 
to create pools/etc. in the hypervisor, but we can't forget about the 
scheduler.  Modifying flags on the hosts for capabilities is ugly.

Vish



Re: [Openstack] Best approach for deploying Essex?

2012-04-03 Thread Lillie Ross-CDSR11
Hi Adam,

Thanks for the update.  Actually, I'm in the process of reading about your 
testing and integration framework for Openstack 
(http://ubuntuserver.wordpress.com/2012/02/08/704/) as I write this.

Yes, Keystone integration seemed to be the big bugaboo in the Ubuntu/Diablo 
release.  I've successfully got everything authenticating with keystone in our 
current deployment, but as you well know, this precludes using S3 bindings with 
Swift, and you must use the Openstack glance client to bundle/upload images.  
This had me pulling my hair out in the early stages.

Today I've spun up 3 generic 11.10 servers on which I'm planning to test the 
next Ubuntu release and Openstack packages.  Should I start my testing with the 
beta1 release of 12.04LTS?  In particular I'm interested in seeing and 
understanding the process of migrating my existing installation and configs to 
the new release.  Once I'm satisfied that I understand everything (not 
possible) in the new release, I can migrate our operational cloud in a couple 
of days.

Also, what's the best place to keep abreast of the Ubuntu/Canonical integration 
of openstack?  The Ubuntu Wiki? Mailing list?

Thanks again,
Ross

On Apr 3, 2012, at 1:21 PM, Adam Gandelman wrote:

 On 04/03/2012 08:20 AM, Lillie Ross-CDSR11 wrote:
 My question is, should I base our new installation directly off the Essex 
 branch in the git repository, or use the packages that will be deployed as 
 part of the associated Ubuntu 12.04LTS release?  With Diablo, I was forced 
 to use packages from the ManagedIT PPA with additional Keystone patches to 
 get a consistent, stable platform up and running.  Obviously, some of these 
 problems were due to confusion caused by various documents describing 
 different incarnations of Openstack, and not really knowing what was current 
 and stable.  Especially the packages shipped with Ubuntu made assumptions 
 about how Openstack was to be deployed that weren't really apparent.
 
 Hey Ross-
 
 I can say that the Ubuntu precise packages have been kept relatively in-sync 
 with each components' trunk git repository this cycle.  We've made a 
 concerted effort to do weekly snapshot uploads of all Openstack components 
 into the Precise archive starting from the beginning of the Essex+Precise dev 
 cycles.  We've also maintained our own trunk PPA (updated continuously) 
 around which we center our testing efforts.  Now that we're nearing the 
 release of Essex, we've been ensuring the release candidates hit our archive 
 as soon as they are released.  As soon as Essex final hits, it'll be uploaded 
 into Ubuntu and give any users who care the remainder of the Ubuntu dev cycle 
 (~1 month) to test and identify issues before LTS ships.
 
 Re: deployment assumptions.  Last cycle, we were caught off-guard by 
 Keystone's last-minute inclusion into Openstack core and the dependencies 
 this introduced (dashboard especially).  It's not that we were making 
 assumptions about how Openstack Diablo should be deployed, just that there 
 was no way we could shoe-horn a new component into the release so late in the 
 game.   This time around,  a similar curve ball was thrown our way with the 
 Keystone Lite rewrite, but we were able to get this sorted on our end 
 relatively quickly to ensure pending security reviews and main inclusion 
 processes for Keystone weren't blocked.   We're making very few assumptions 
 going into LTS and hope to provide an as-close-to-pure-Essex experience as any.  
 I can only think of a few patches we're carrying, and there are only two 
 default configuration files we ship that differ from those you'd find in the 
 git repos [1]. Perhaps when we release Essex into Precise this/next week, 
 we'll put some notes somewhere outlining any Ubuntu-specific changes for 
 those who are interested.
 
 Hope that helps, and of course we welcome your testing and bug reports!
 
 -Adam
 
 
 [1]  We ship a default nova.conf that configures some Ubuntu defaults: 
 defaults to libvirt compute, uses nova-rootwrap for sudo shell execution 
 (requested by our security team), uses tgt iscsi initiator instead of ietd 
 (tgt is supported in our main archive, ietd is not).  Our default Keystone 
 config defaults to the SQL catalog backend instead of the default templated 
 file, though I think SQL catalog is the new default in folsom.
 
 
 
 






[Openstack] Hostname selection

2012-04-03 Thread Joshua Harlow
HI all,

I was looking into how hostnames are selected for an instance in openstack and 
there seem to be a couple (or more?) ways. I've listed the ones I discovered 
and just wanted to check whether I am correct.


 1.  Use the metadata api, and send in a user-data section that specifies the 
hostname, then use say the 'ami-launch-index' to figure out which hostname is 
yours (ie the vm's) in the list.
 2.  There also seems to be some way to use the openstack API (not the ec2 
one?) to specify a display name which will eventually become the hostname; this 
is then stored in the DB (is this correct?)
 3.  Is there also a way to set the hostname in the metadata api directly? I 
am not sure about this one, since cloud-init will only run once (?) and thus a 
later metadata update won't be reflected.

A lot of this happens around 
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L551

Is it correct to say that the hostname, if a display name is not given, will 
become something like server_$uuid, where the $uuid is generated? Is this 
hostname then returned in the metadata (I think yes?) If only the ec2 apis are 
used, how is the hostname set (since it seems like only the openstack apis can 
take in a display name)? And how does either of these work when multiple 
instances are launched (especially the openstack api case: will multiple vm's 
then have the same display name?)
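For what it's worth, the fallback-plus-sanitize step being asked about might look something like this sketch. This is purely illustrative — it is not Nova's actual code, and the exact naming and sanitization rules are assumptions:

```python
import re
import uuid


def display_name_to_hostname(display_name=None):
    """Hypothetical sketch of the hostname derivation being discussed:
    fall back to a generated server-<uuid> style name when no display
    name is given, otherwise sanitize the display name for DNS use."""
    if not display_name:
        return "server-%s" % uuid.uuid4()
    name = display_name.strip().lower()
    # collapse anything outside [a-z0-9-] into single hyphens
    name = re.sub(r"[^a-z0-9-]+", "-", name).strip("-")
    return name or "server-%s" % uuid.uuid4()


print(display_name_to_hostname("My Web Server"))  # my-web-server
```

Note that under a scheme like this, nothing prevents two instances from ending up with the same sanitized hostname, which is exactly the collision question raised above.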

Any input welcome. Thx!

-Josh


Re: [Openstack] Limit flavors to specific hosts

2012-04-03 Thread John Purrier
+1.

 

Interesting scenarios open up if we can have the scheduler intelligently
direct workloads based on config/metadata.

 

johnpur

 

From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
Of Vishvananda Ishaya
Sent: Tuesday, April 03, 2012 1:11 PM
To: Day, Phil
Cc: openstack@lists.launchpad.net; John Garbutt
Subject: Re: [Openstack] Limit flavors to specific hosts

 

 

On Apr 3, 2012, at 6:45 AM, Day, Phil wrote:





Hi John,

 

Maybe the problem with host aggregates is that it too quickly became
something that was linked to hypervisor capability, rather than being the
more general mechanism of which one form of aggregate could be linked to
hypervisor capabilities ?

 

Should we have a host aggregates 2.0 session at the Design Summit ?

 

+ 1

 

I think the primary use case is associating metadata with groups of hosts
that can be interpreted by the scheduler.  Obviously, this same metadata can
be used to create pools/etc. in the hypervisor, but we can't forget about
the scheduler.  Modifying flags on the hosts for capabilities is ugly.

 

Vish

 



Re: [Openstack] Issue with keystone-2012.1~rc1.tar.gz -- unable to authorize user

2012-04-03 Thread Joshua Tobin
Try setting admin_token = 012345SECRET99TOKEN012345 in 
/etc/keystone/keystone.conf
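As a config fragment, that suggestion would look like the following (the token value here is just the example from Vijay's command; a real deployment should use its own secret):

```ini
# /etc/keystone/keystone.conf
[DEFAULT]
# must match the --token value passed to the keystone CLI
admin_token = 012345SECRET99TOKEN012345
```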

-Joshua


On Apr 2, 2012, at 6:26 PM, Vijay wrote:

 Hello,
 Installed keystone-2012.1~rc1.tar.gz.
 Following this url to configure:
 http://docs.openstack.org/trunk/openstack-compute/install/content/setting-up-tenants-users-and-roles.html
  
 When I tried to
 keystone --token 012345SECRET99TOKEN012345 --endpoint 
 http://192.168.198.85:35357/v2.0 tenant-create --name openstackDemo 
 --description "Default Tenant" --enabled true
  
 I am getting
 No handlers could be found for logger keystoneclient.client
 Unable to authorize user
 In the console log,
 it says
  
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi]  
 REQUEST BODY 
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi] {"tenant": {"enabled": 
 true, "name": "openstackDemo", "description": "Default Tenant"}}
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi]
 2012-04-02 15:23:38    DEBUG [routes.middleware] Matched POST /tenants
 2012-04-02 15:23:38    DEBUG [routes.middleware] Route path: 
 '{path_info:.*}', defaults: {'controller': 
 <keystone.contrib.admin_crud.core.CrudExtension object at 0x2c74a10>}
 2012-04-02 15:23:38    DEBUG [routes.middleware] Match dict: {'controller': 
 <keystone.contrib.admin_crud.core.CrudExtension object at 0x2c74a10>, 
 'path_info': '/tenants'}
 2012-04-02 15:23:38    DEBUG [routes.middleware] Matched POST /tenants
 2012-04-02 15:23:38    DEBUG [routes.middleware] Route path: '/tenants', 
 defaults: {'action': u'create_tenant', 'controller': 
 <keystone.identity.core.TenantController object at 0x2c741d0>}
 2012-04-02 15:23:38    DEBUG [routes.middleware] Match dict: {'action': 
 u'create_tenant', 'controller': <keystone.identity.core.TenantController 
 object at 0x2c741d0>}
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi] arg_dict: {}
 2012-04-02 15:23:38  WARNING [keystone.common.wsgi] The request you have made 
 requires authentication.
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi]  
 RESPONSE HEADERS 
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi] Content-Type = 
 application/json
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi] Vary = X-Auth-Token
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi] Content-Length = 116
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi]
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi]  
 RESPONSE BODY 
 2012-04-02 15:23:38    DEBUG [keystone.common.wsgi] {"error": {"message": 
 "The request you have made requires authentication.", "code": 401, "title": 
 "Not Authorized"}}
 2012-04-02 15:23:38    DEBUG [eventlet.wsgi.server] 192.168.198.85 - - 
 [02/Apr/2012 15:23:38] "POST /v2.0/tenants HTTP/1.1" 401 257 0.008317
  
 Is this a bug or is something in my keystone.conf not set properly?
  
 Any lead is appreciated.
 Thanks,
 -Vijay



Re: [Openstack] Instance fails to spawn when instance_path is nfs mounted

2012-04-03 Thread Nathanael Burton
I had a problem like this when the umask was locked down.  Setting the
umask to 022 in the init script for nova-compute solved my problem.
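The effect described here can be demonstrated directly: a file created with mode 0666 (as libvirt does for console.log) ends up unreadable by the group under a restrictive umask. A small Linux-specific sketch:

```python
import os
import stat
import tempfile


def mode_under_umask(mask):
    """Resulting mode of a file created with requested mode 0666
    under the given umask -- the knob tweaked for nova-compute above."""
    old = os.umask(mask)
    try:
        path = os.path.join(tempfile.mkdtemp(), "console.log")
        os.close(os.open(path, os.O_CREAT | os.O_WRONLY, 0o666))
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.umask(old)


print(oct(mode_under_umask(0o022)))  # 0o644 -- group/world can read
print(oct(mode_under_umask(0o077)))  # 0o600 -- a locked-down umask
```

In practice the fix is a `umask 022` line early in the nova-compute init script, before the daemon starts (assuming your packaging uses a shell-based init script).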

On Tue, Apr 3, 2012 at 1:56 PM, Diego Parrilla Santamaría
diego.parrilla.santama...@gmail.com wrote:
 We use nfs backed instances a lot, and this problem normally has to do with
 wrong permission management in your filer and/or client.

 Check that not only root can write on the nfs share (especially the libvirt user).

 Diego
 --
 Diego Parrilla
 CEO
 www.stackops.com |  diego.parri...@stackops.com | +34 649 94 43 29
 | skype:diegoparrilla




 On Tue, Apr 3, 2012 at 6:43 PM, Mandar Vaze / मंदार वझे
 mandarv...@gmail.com wrote:

 I saw an old question posted here :
 https://answers.launchpad.net/nova/+question/164689

 But I am not trying live migration.

 I have nfs mounted instances_path - so when I try to spawn an instance I
 run into the above errors. Especially following :

 File /usr/lib/python2.7/dist-packages/libvirt.py, line 372, in
 createWithFlags
 40842 2012-04-03 05:42:27 TRACE nova.rpc.amqp     if ret == -1: raise
 libvirtError ('virDomainCreateWithFlags() failed', dom=self)
 40843 2012-04-03 05:42:27 TRACE nova.rpc.amqp libvirtError: internal error
 Process exited while reading console log output: chardev: opening
 backend file failed

 But as you can see below, several files are created in this folder, so I
 am not sure mine is a permissions issue (else none of the files would 
 get created).  The problem is reported when libvirt tries to write to 
 console.log (the file itself is created with correct permissions; it is 
 just a zero-byte file).

 mandar@ubuntu-dev-mandar:/nfs_shared_instances_path/instance-0005$ ll
 total 10944
 drwxrwxr-x 2 mandar libvirtd    4096 2012-04-03 05:42 ./
 drwxrwxrwx 4 root   root        4096 2012-04-03 05:42 ../
 -rw-rw 1 mandar libvirtd       0 2012-04-03 05:42 console.log
 -rw-r--r-- 1 mandar libvirtd 6291968 2012-04-03 05:42 disk
 -rw-rw-r-- 1 mandar libvirtd 4731440 2012-04-03 05:42 kernel
 -rw-rw-r-- 1 mandar libvirtd    1067 2012-04-03 05:42 libvirt.xml
 -rw-rw-r-- 1 mandar libvirtd 2254249 2012-04-03 05:42 ramdisk

 I'm suspecting :
 https://bugs.launchpad.net/ubuntu/maverick/+source/libvirt/+bug/632696
 But the above doesn't show itself in a non-NFS setup.

 Please suggest !!!

 -Mandar









[Openstack] ESXi documentation..

2012-04-03 Thread Michael March
I accidentally posted this on openstack-operat...@lists.openstack.org..

-- Forwarded message --

Everyone,

After googlin' around I can not find any docs on how to setup OpenStack
with ESXi as a hypervisor.

This official link is dead: http://nova.openstack.org/vmwareapi_readme.html

Does anyone have any links that might help in this endeavor?

thanks!


-- 
Mike March


Re: [Openstack] [Nova-orchestration] Thoughts on Orchestration (was Re: Documentation on Caching)

2012-04-03 Thread Ziad Sawalha
Just confirming what Sandy said; I am playing around with SpiffWorkflow.
I'll post my findings when I'm done on the wiki under the Nova
Orchestration page.

So far I've found some of the documentation lacking and concepts
confusing, which has resulted in a steep learning curve and made it
difficult to integrate into something like RabbitMQ (for long-running
tasks). But the thinking behind it (http://www.workflowpatterns.com/)
seems sound and I will continue to investigate it.

Z

On 3/29/12 5:56 PM, Sriram Subramanian sri...@computenext.com wrote:

Guys,

Sorry for missing the meeting today. Thanks for the detailed summary/
logs. I am cool with the action item : #action sriram to update the
Orchestration session proposal. This is my understanding, from the logs, of 
the things to be updated in the blueprint:

1) orchestration service provides state management with client side APIs
2) add API design and state storage as topics for the orchestration
session at the Summit
3) add implementation plan as session topic

Please correct me if I missed anything.

Just to bring everyone to same page, here are the new links

Folsom BluePrint: 
https://blueprints.launchpad.net/nova/+spec/nova-orchestration
Folsom Session proposal:
https://blueprints.launchpad.net/nova/+spec/nova-orchestration
Wiki: http://wiki.openstack.org/NovaOrchestration (I will clean this up
tonight)

Maoy: Sandy's pointers are in this email thread (which n0ano meant to fwd
you)
Mikeyp: Moving the conversation to the main mailing list per your
suggestion

Thanks,
_Sriram

-Original Message-
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
Sent: Thursday, March 29, 2012 12:52 PM
To: Sriram Subramanian; Sandy Walsh
Cc: Michael Pittaro (mik...@lahondaresearch.org)
Subject: RE: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

NP, I'll be on the IRC for whoever wants to talk.  Maybe we can try and
do the sync you want via email, that's always been my favorite way to
communicate (it allows you to focus thoughts and deals with timezones
nicely).

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


-Original Message-
From: Sriram Subramanian [mailto:sri...@computenext.com]
Sent: Thursday, March 29, 2012 1:45 PM
To: Sriram Subramanian; Sandy Walsh
Cc: Dugger, Donald D; Michael Pittaro (mik...@lahondaresearch.org)
Subject: RE: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

I will most likely be running a little late from my 12 - 1 meeting which
doesn't seem to be ending anytime now :(

I haven't gotten a chance to submit a branch yet. Hopefully by this week
end (at least a bare bones)

If you are available for offline sync later this week - I would
appreciate that. Apologies for possibly missing the sync.

Thanks,
-Sriram
 
-Original Message-
From: 
nova-orchestration-bounces+sriram=computenext@lists.launchpad.net
[mailto:nova-orchestration-bounces+sriram=computenext.com@lists.launchpad.
net] On Behalf Of Sriram Subramanian
Sent: Wednesday, March 28, 2012 2:44 PM
To: Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net
Subject: Re: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

Thanks for the pointers Sandy. I will try to spend some cycles on the
branch per your suggestion; we will also discuss more tomorrow.

Yes, BP is not far off from last summit, and I would like to flesh out more
for this summit. 

Thanks,
-Sriram

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
Sent: Wednesday, March 28, 2012 11:31 AM
To: Sriram Subramanian
Cc: Michael Pittaro; Dugger, Donald D (donald.d.dug...@intel.com);
nova-orchestrat...@lists.launchpad.net
Subject: Thoughts on Orchestration (was Re: Documentation on Caching)

Ah, gotcha.

I don't think the caching stuff will really affect the Orchestration
layer all that much. Certainly the Cells stuff that comstud is working on
should be considered.

The BP isn't really too far off from what we discussed last summit.
Although I would give more consideration to the stuff Redhat is thinking
about and some of the efforts by HP and IBM with respect to scheduling
(mostly HPC stuff). Unifying and/or understanding those efforts would be
important.

That said, as with all things OpenStack, code speaks louder than words.
The best way to solicit input on an idea is to submit a branch. That's
the approach I'd take now if I had the cycles to put back into Orch. I'd
likely build something on top of Amazon Workflow services (in such a way
as it could be ripped out later) http://aws.amazon.com/swf/ The
replacement could be a new OS Service with SWF as the api template.

What I've been thinking about lately has been how to make a proof of
concept operate with trunk side-by-side without busting the existing
stuff. Tricky. Orchestration touches a lot of stuff. The error handling
in OS could be an issue and unifying the 3 Enum State Machine on Instance
could 

[Openstack] Glance 2012.1 RC3 available

2012-04-03 Thread Thierry Carrez
Hello everyone,

The tarball for the last (this time we mean it) release candidate for
OpenStack Image Service (Glance) 2012.1 is now available at:

https://launchpad.net/glance/essex/essex-rc3

This RC3 will be formally released as the 2012.1 (Essex) final version
next week, unless a critical regression is found that warrants a
last-minute respin.

If you find an issue in Glance that could be considered
release-critical, please file it at:

https://bugs.launchpad.net/glance/+filebug

and tag it essex-rc-potential to bring it to Jay's attention.

Regards,

-- 
Thierry Carrez (ttx)
Release Manager, OpenStack



Re: [Openstack] [Nova-orchestration] Thoughts on Orchestration (was Re: Documentation on Caching)

2012-04-03 Thread Sandy Walsh
Can't wait to hear about it Ziad!

Very cool!

-S

From: Ziad Sawalha
Sent: Tuesday, April 03, 2012 6:56 PM
To: Sriram Subramanian; Dugger, Donald D; Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net; openstack@lists.launchpad.net
Subject: Re: [Openstack] [Nova-orchestration] Thoughts on Orchestration (was 
Re: Documentation on Caching)

Just confirming what Sandy said; I am playing around with SpiffWorkflow.
I'll post my findings when I'm done on the wiki under the Nova
Orchestration page.

So far I've found some of the documentation lacking and concepts
confusing, which has resulted in a steep learning curve and made it
difficult to integrate into something like RabbitMQ (for long-running
tasks). But the thinking behind it (http://www.workflowpatterns.com/)
seems sound and I will continue to investigate it.

Z



Re: [Openstack] Limit flavors to specific hosts

2012-04-03 Thread Armando Migliaccio
On Tue, Apr 3, 2012 at 7:10 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:

 On Apr 3, 2012, at 6:45 AM, Day, Phil wrote:

 Hi John,

 Maybe the problem with host aggregates is that it too quickly became
 something that was linked to hypervisor capability, rather than being the
 more general mechanism of which one form of aggregate could be linked to
 hypervisor capabilities ?

The host aggregate model is not tied to any hypervisor-dependent
detail; it's a group of hosts with metadata linked to it. As far as
the functionality around it, that can evolve to make it work as
you expected it to work. If I understand it correctly, you'd not be
interested in xenapi (whose implementation is wrapped around the
concept of pool), and as far as KVM is concerned, this is still a
blank canvas.

As for the scheduler, as most things in Nova, you can write a new one
(that's host-aggregate aware), or you can extend an existing one to
make sure it understands the metadata related to a host belonging to
an aggregate. The latter, in particular, was not done in the Essex
timeframe for lack of time ;(
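A host-aggregate-aware filter of the kind described here could be as simple as the following sketch. The data shapes are hypothetical, and this is not the Essex scheduler API — just an illustration of matching flavor extra_specs against aggregate metadata:

```python
def hosts_for_flavor(aggregates, extra_specs):
    """Keep hosts belonging to at least one aggregate whose metadata
    satisfies every key/value in the flavor's extra_specs.
    Data shapes are assumptions for this sketch."""
    matches = set()
    for agg in aggregates:
        meta = agg["metadata"]
        if all(meta.get(k) == v for k, v in extra_specs.items()):
            matches.update(agg["hosts"])
    return matches


aggregates = [
    {"hosts": {"host1", "host2"}, "metadata": {"flavor_class": "highmem"}},
    {"hosts": {"host3"}, "metadata": {"flavor_class": "standard"}},
]
print(sorted(hosts_for_flavor(aggregates, {"flavor_class": "highmem"})))  # ['host1', 'host2']
```

A real implementation would plug this predicate into the scheduler's host-filtering pass rather than run standalone.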


 Should we have a “host aggregates 2.0” session at the Design Summit ?


I am all in for a host aggregate 2.0 so to speak, or just the
continuation of the work that was started; at first I got the
impression that we were going to write new abstraction/functionality
from scratch just because the one that already existed did not fit the
bill...I am glad I was wrong :)


 + 1

 I think the primary use case is associating metadata with groups of hosts
 that can be interpreted by the scheduler.  Obviously, this same metadata can
 be used to create pools/etc. in the hypervisor, but we can't forget about
 the scheduler.  Modifying flags on the hosts for capabilities is ugly.

 Vish






[Openstack] python-glanceclient

2012-04-03 Thread Brian Waldon
In an effort to further align OpenStack API clients, Jay Pipes, Monty Taylor 
and I have set up the python-glanceclient project. It is not intended to 
be a drop-in replacement for the existing client that lives in Glance, but a 
complete rewrite with a shiny new interface that maintains feature-parity.

As for integrating this new client with the necessary OpenStack projects, 
here's a little roadmap I came up with:

X 1) Basic functionality
X 2) Integrate with Gerrit
3) Verify feature parity
4) Integrate with DevStack
5) Integrate with Nova
6) Drop old client from Glance

If anybody is interested in playing with the new client, here it is: 
https://github.com/openstack/python-glanceclient. You can either yell at me or 
use Gerrit to fix anything I may have overlooked. Thanks!

Brian Waldon




Re: [Openstack] [Nova-orchestration] Thoughts on Orchestration (was Re: Documentation on Caching)

2012-04-03 Thread Yun Mao
Hi Ziad,

Thanks for taking the effort. Do you know which of the 43 workflow
patterns are relevant to us? I'm slightly concerned that SpiffWorkflow
might be overkill and bring unnecessary complexity into the game. There
was a discussion a while ago suggesting a relatively simple sequential
execution pattern:
https://lists.launchpad.net/nova-orchestration/msg00043.html
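The sequential pattern referenced there can be stated in a few lines. This is a toy sketch, not a proposal for the actual orchestration API:

```python
def run_sequential(tasks, state=None):
    """Minimal sequential-execution sketch: run named tasks in order,
    recording completed names so a crashed run can be resumed by
    passing the saved state back in."""
    state = state if state is not None else {"done": []}
    for name, task in tasks:
        if name in state["done"]:
            continue  # already completed in a previous attempt
        task()
        state["done"].append(name)
    return state


log = []
tasks = [
    ("allocate_network", lambda: log.append("net")),
    ("create_volume", lambda: log.append("vol")),
    ("spawn_instance", lambda: log.append("vm")),
]
print(run_sequential(tasks)["done"])  # ['allocate_network', 'create_volume', 'spawn_instance']
```

Rollback would be the obvious extension: keep a matching list of undo callables and walk `state["done"]` in reverse on failure.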

Thanks,

Yun

On Tue, Apr 3, 2012 at 5:56 PM, Ziad Sawalha ziad.sawa...@rackspace.com wrote:
 Just confirming what Sandy said; I am playing around with SpiffWorkflow.
 I'll post my findings when I'm done on the wiki under the Nova
 Orchestration page.

 So far I've found some of the documentation lacking and concepts
 confusing, which has resulted in a steep learning curve and made it
 difficult to integrate into something like RabbitMQ (for long-running
 tasks). But the thinking behind it (http://www.workflowpatterns.com/)
 seems sound and I will continue to investigate it.

 Z

 On 3/29/12 5:56 PM, Sriram Subramanian sri...@computenext.com wrote:

Guys,

Sorry for missing the meeting today. Thanks for the detailed summary/
logs. I am cool with the action item : #action sriram to update the
Orchestration session proposal. This is my understanding the logs of
things to be updated in the blueprint:

1) orchestration service provides state management with client side APIs
2) add API design and state storage as topics for the orchestration
session at the Summit
3) add implementation plan as session topic

Please correct me if I missed anything.

Just to bring everyone to same page, here are the new links

Folsom BluePrint:
https://blueprints.launchpad.net/nova/+spec/nova-orchestration
Folsom Session proposal:
https://blueprints.launchpad.net/nova/+spec/nova-orchestration
Wiki: http://wiki.openstack.org/NovaOrchestration (I will clean this up
tonight)

Maoy: Sandy's pointers are in this email thread (which n0ano meant to fwd
you)
Mikeyp: Moving the conversation to the main mailing list per your
suggestion

Thanks,
_Sriram

-Original Message-
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
Sent: Thursday, March 29, 2012 12:52 PM
To: Sriram Subramanian; Sandy Walsh
Cc: Michael Pittaro (mik...@lahondaresearch.org)
Subject: RE: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

NP, I'll be on the IRC for whoever wants to talk.  Maybe we can try and
do the sync you want via email, that's always been my favorite way to
communicate (it allows you to focus thoughts and deals with timezones
nicely).

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786


-Original Message-
From: Sriram Subramanian [mailto:sri...@computenext.com]
Sent: Thursday, March 29, 2012 1:45 PM
To: Sriram Subramanian; Sandy Walsh
Cc: Dugger, Donald D; Michael Pittaro (mik...@lahondaresearch.org)
Subject: RE: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

I will most likely be running a little late from my 12 - 1 meeting which
doesn't seem to be ending anytime now :(

I haven't gotten a chance to submit a branch yet. Hopefully by this week
end (at least a bare bones)

If you are available for offline sync later this week - I would
appreciate that. Apologies for possibly missing the sync.

Thanks,
-Sriram

-Original Message-
From:
nova-orchestration-bounces+sriram=computenext@lists.launchpad.net
[mailto:nova-orchestration-bounces+sriram=computenext.com@lists.launchpad.
net] On Behalf Of Sriram Subramanian
Sent: Wednesday, March 28, 2012 2:44 PM
To: Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net
Subject: Re: [Nova-orchestration] Thoughts on Orchestration (was Re:
Documentation on Caching)

Thanks for the pointers Sandy. I will try to spend some cycles on the
branch per your suggestion; we will also discuss more tomorrow.

Yes, BP is not far off from last summit, and I would like to flesh out more
for this summit.

Thanks,
-Sriram

-Original Message-
From: Sandy Walsh [mailto:sandy.wa...@rackspace.com]
Sent: Wednesday, March 28, 2012 11:31 AM
To: Sriram Subramanian
Cc: Michael Pittaro; Dugger, Donald D (donald.d.dug...@intel.com);
nova-orchestrat...@lists.launchpad.net
Subject: Thoughts on Orchestration (was Re: Documentation on Caching)

Ah, gotcha.

I don't think the caching stuff will really affect the Orchestration
layer all that much. Certainly the Cells stuff that comstud is working on
should be considered.

The BP isn't really too far off from what we discussed last summit,
although I would give more consideration to what Red Hat is thinking
about, and to some of the efforts by HP and IBM with respect to scheduling
(mostly HPC stuff). Unifying and/or understanding those efforts would be
important.

That said, as with all things OpenStack, code speaks louder than words.
The best way to solicit input on an idea is to submit a branch. That's
the approach I'd take now if I had the cycles to put back into Orch. I'd
likely 

Re: [Openstack] Swift and keystone: asking for an auth token.

2012-04-03 Thread Pete Zaitcev
On Tue, 03 Apr 2012 16:53:05 +0200
Pierre Amadio pierre.ama...@canonical.com wrote:

 [filter:tokenauth]
 paste.filter_factory = keystone.middleware.auth_token:filter_factory
 service_port = 5000
 service_host = 192.168.122.102
 auth_port = 35357
 auth_host = 192.168.122.102
 auth_protocol = http

I'm surprised this works at all, because if service_protocol is
not set, it defaults to https:. However, your -A argument points
to http:, like so
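One way to rule that out is to set service_protocol explicitly in the
tokenauth filter so it matches the http: URL used with -A. This is a sketch
assuming the keystone service really does listen on plain HTTP; whether
service_protocol is honored depends on the auth_token middleware version:

```ini
[filter:tokenauth]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
; match the scheme of the -A URL; the default is https
service_protocol = http
service_host = 192.168.122.102
service_port = 5000
auth_protocol = http
auth_host = 192.168.122.102
auth_port = 35357
```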

 ubuntu@swift-a:~$ swift -V 2 -U ubuntu:ubuntu -K openstack -A
 http://192.168.122.102:5000/v2.0 list

Strange that nothing shows up in logs.

 Account GET failed:
 https://192.168.122.105:8080/v2/AUTH_ed0a2ebb66054096869605fb53c049d7?format=json
 403 Forbidden

 Just in case, I tried changing the keystone objectstore endpoint to use
 v1 instead of v2 in the 3 catalog URLs, but I had the same result.

This is the most obvious problem. Please try /v1 once again and
look at swift.log. If that fails, we'll need to start probing with
curl step by step.

-- Pete

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift and keystone: asking for an auth token.

2012-04-03 Thread Chmouel Boudjnah
On Tue, Apr 3, 2012 at 4:53 PM, Pierre Amadio
pierre.ama...@canonical.com wrote:
 I am trying to use swift and keystone together (on ubuntu precise), and
 fail to do so.
                  "roles": [{"id": "60a1783c2f05437d91f2e1f369320c49", "name": 
 "Admin"}],
[...]
 [filter:keystone]
 paste.filter_factory = keystone.middleware.swift_auth:filter_factory
 operator_roles = admin

You are in the group Admin and your operator_roles is admin (lowercase).
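A minimal fix, assuming the role assigned in keystone really is named
"Admin", is to make operator_roles match that case exactly (listing both
spellings is a common defensive choice):

```ini
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
; role matching is against the names keystone returns, so the case must line up
operator_roles = Admin, admin
```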

Cheers,
Chmouel.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-poc] [Bug 972859] Re: Fix utils.import_object() when trying to get an instance of a class

2012-04-03 Thread OpenStack Hudson
Fix proposed to branch: master
Review: https://review.openstack.org/6191

** Changed in: openstack-common
   Status: New => In Progress

-- 
You received this bug notification because you are a member of OpenStack
Common Drivers, which is the registrant for openstack-common.
https://bugs.launchpad.net/bugs/972859

Title:
  Fix utils.import_object() when trying to get an instance of a class

Status in openstack-common:
  In Progress

Bug description:
  Right now, utils.import_object('foo') is the same as
  utils.import_class('foo') when 'foo' is the path to a class.  This
  should be fixed so that import_object() returns an instance of that
  class (like how this works in nova.utils).
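
  The intended behavior can be sketched as follows. This is an
  illustration of the nova.utils-style semantics the bug asks for, not the
  actual openstack-common patch; import_class and import_object here are
  minimal stand-ins:

```python
import importlib


def import_class(import_str):
    """Resolve a dotted path like 'collections.OrderedDict' to the class itself."""
    module_name, _, class_name = import_str.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


def import_object(import_str, *args, **kwargs):
    """Like import_class, but return an *instance* when the path names a class."""
    obj = import_class(import_str)
    if isinstance(obj, type):
        # The path named a class: instantiate it (the behavior the bug requests)
        return obj(*args, **kwargs)
    # Plain functions or module-level objects are returned unchanged
    return obj


# import_class gives the class; import_object gives an instance of it
d = import_object('collections.OrderedDict')
```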

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-common/+bug/972859/+subscriptions

___
Mailing list: https://launchpad.net/~openstack-poc
Post to : openstack-poc@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-poc
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise-openstack-essex-python-quantumclient-trunk #28

2012-04-03 Thread openstack-testing-bot
Title: precise-openstack-essex-python-quantumclient-trunk
General Information
Build: FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise-openstack-essex-python-quantumclient-trunk/28/
Project: precise-openstack-essex-python-quantumclient-trunk
Date of build: Tue, 03 Apr 2012 02:01:00 -0400
Build duration: 3 min 59 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 2 out of the last 5 builds failed (score: 60)

Changes
bug 963155: add some missing test files to the sdist tarball. (by dan)
  edit: MANIFEST.in

Console Output
[...truncated 565 lines...]
[... "Setting up ..." output for build dependencies (git, debhelper, python-launchpadlib, equivs, etc.) elided ...]
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
INFO:root:Branching lp:~openstack-ubuntu-testing/python-quantumclient/precise-essex-proposed to determine build-deps
DEBUG:root:['bzr', 'branch', 'lp:~openstack-ubuntu-testing/python-quantumclient/precise-essex-proposed', '/tmp/tmpPhGu8f/python-quantumclient']
ssh: connect to host bazaar.launchpad.net port 22: Connection timed out
bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
ERROR:root:Error occurred during package creation/build
ERROR:root:Command '['bzr', 'branch', 'lp:~openstack-ubuntu-testing/python-quantumclient/precise-essex-proposed', '/tmp/tmpPhGu8f/python-quantumclient']' returned non-zero exit status 3
INFO:root:Destroying schroot session: precise-amd64-7a0890b1-6cdf-46bf-bc8f-1b6bed06b173
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 72, in <module>
    raise e
subprocess.CalledProcessError: Command '['bzr', 'branch', 'lp:~openstack-ubuntu-testing/python-quantumclient/precise-essex-proposed', '/tmp/tmpPhGu8f/python-quantumclient']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise-openstack-essex-glance-trunk #161

2012-04-03 Thread openstack-testing-bot
Title: precise-openstack-essex-glance-trunk
General Information
Build: FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise-openstack-essex-glance-trunk/161/
Project: precise-openstack-essex-glance-trunk
Date of build: Tue, 03 Apr 2012 13:01:00 -0400
Build duration: 4 min 9 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed (score: 40)

Changes
Run version_control after auto-creating the DB (by bcwaldon)
  edit: glance/registry/db/api.py

Console Output
[...truncated 565 lines...]
[... "Setting up ..." output for build dependencies (git, debhelper, python-launchpadlib, equivs, etc.) elided ...]
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
INFO:root:Branching lp:~openstack-ubuntu-testing/glance/precise-essex-proposed to determine build-deps
DEBUG:root:['bzr', 'branch', 'lp:~openstack-ubuntu-testing/glance/precise-essex-proposed', '/tmp/tmpr3tIO9/glance']
ssh: connect to host bazaar.launchpad.net port 22: Connection timed out
bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
ERROR:root:Error occurred during package creation/build
ERROR:root:Command '['bzr', 'branch', 'lp:~openstack-ubuntu-testing/glance/precise-essex-proposed', '/tmp/tmpr3tIO9/glance']' returned non-zero exit status 3
INFO:root:Destroying schroot session: precise-amd64-61b971ba-052f-41fe-ba20-2625fd1fe913
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 72, in <module>
    raise e
subprocess.CalledProcessError: Command '['bzr', 'branch', 'lp:~openstack-ubuntu-testing/glance/precise-essex-proposed', '/tmp/tmpr3tIO9/glance']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp