Re: [Openstack] [Keystone] Policy settings not working correctly

2013-06-07 Thread Yee, Guang
I think keystone client is still V2 by default, which is enforcing
admin_required. 

 

Try this

 

"admin_required": [["role:KeystoneAdmin"], ["role:admin"], ["is_admin:1"]],

 

 

Guang

 

 

From: Openstack
[mailto:openstack-bounces+guang.yee=hp@lists.launchpad.net] On Behalf Of
Adam Young
Sent: Thursday, June 06, 2013 7:28 PM
To: Heiko Krämer; openstack
Subject: Re: [Openstack] [Keystone] Policy settings not working correctly

 

What is the actual question here?  Is it "why is this failing" or "why was
it done that way"?


On 06/04/2013 07:47 AM, Heiko Krämer wrote:

Heyho guys :)

I have a little problem with policy settings in Keystone. I created a new
rule in my policy file and restarted Keystone, but I still don't have
privileges.


What is the rule?




Example:


keystone user-create --name kadmin --pw lala 
keystone user-role-add --

keystone role-list --user kadmin --role KeystoneAdmin --tenant admin

+----------------------------------+----------------------+
|                id                |         name         |
+----------------------------------+----------------------+
| 3f5c0af585db46aeaec49da28900de28 |    KeystoneAdmin     |
| dccfed0bd790420bbf1982686cbf7e31 | KeystoneServiceAdmin |
+----------------------------------+----------------------+


cat /etc/keystone/policy.json

{
"admin_required": [["role:admin"], ["is_admin:1"]],
"owner": [["user_id:%(user_id)s"]],
"admin_or_owner": [["rule:admin_required"], ["rule:owner"]],
"admin_or_kadmin": [["rule:admin_required"], ["role:KeystoneAdmin"]],

"default": [["rule:admin_required"]],
[...]
"identity:list_users": [["rule:admin_or_kadmin"]],
[...]

loading kadmin creds

keystone user-list
Unable to communicate with identity service: {"error": {"message": "You are
not authorized to perform the requested action: admin_required", "code":
403, "title": "Not Authorized"}}. (HTTP 403)


In the log file I see:
DEBUG [keystone.policy.backends.rules] enforce admin_required: {'tenant_id':
u'b33bf3927d4e449a98cec4a883148110', 'user_id':
u'46a6a9e429db483f8346f0259e99d6a5', u'roles': [u'KeystoneAdmin']}




Why does Keystone enforce the admin_required rule instead of the defined
rule (admin_or_kadmin)?


Historical reasons.  We are trying to clean this up.  







Keystone conf:
[...]

# Path to your policy definition containing identity actions
policy_file = policy.json
[..]
[policy]
driver = keystone.policy.backends.rules.Policy




Does anyone have an idea?

Thx and greetings
Heiko






___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

 





Re: [Openstack] [cinder] Issue with OpenStack installation on Ubuntu 12.04 LTS

2013-06-07 Thread Sreejith N
I tried the following,
1) Rebooted server and restarted all services: Keystone, Glance, Cinder
2) Disabled SSL in keystone config and restarted service

But still I get the 401 error. I am able to run Keystone and Glance
commands successfully.
It would be really helpful if anyone could give a hint to debug this further.

Thanks
Sreejith

cinder --debug list

REQ: curl -i http://localhost:5000/v2.0/tokens -X POST -H "Content-Type:
application/json" -H "Accept: application/json" -H "User-Agent:
python-cinderclient" -d '{"auth": {"tenantName": "demo",
"passwordCredentials": {"username": "admin", "password": "secrete"}}}'

RESP: [200] {'date': 'Fri, 07 Jun 2013 06:20:00 GMT', 'content-type':
'application/json', 'content-length': '2467', 'vary': 'X-Auth-Token'}
RESP BODY: {"access": {"token": {"issued_at": "2013-06-07T06:20:00.106371",
"expires": "2013-06-08T06:20:00Z", "id": "ee3a054d267e4470a6b539acb9226604",
"tenant": {"description": "Default Tenant", "enabled": true, "id":
"38574792821d43d58bcecc7760149a90", "name": "demo"}}, "serviceCatalog":
[{"endpoints": [{"adminURL":
"http://localhost:8774/v2/38574792821d43d58bcecc7760149a90", "region":
"RegionOne", "internalURL":
"http://localhost:8774/v2/38574792821d43d58bcecc7760149a90", "id":
"3958d10ac48643f7812265cc48dabc2b", "publicURL":
"http://localhost:8774/v2/38574792821d43d58bcecc7760149a90"}],
"endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints":
[{"adminURL": "http://localhost:9292", "region": "RegionOne",
"internalURL": "http://localhost:9292", "id":
"3ca52129990d4cc39e9733cdcb3cf9b1", "publicURL": "http://localhost:9292"}],
"endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints":
[{"adminURL": "http://localhost:8776/v1/38574792821d43d58bcecc7760149a90",
"region": "RegionOne", "internalURL":
"http://localhost:8776/v1/38574792821d43d58bcecc7760149a90", "id":
"58885f13d2794787b73e174386e1557b", "publicURL":
"http://localhost:8776/v1/38574792821d43d58bcecc7760149a90"}],
"endpoints_links": [], "type": "volume", "name": "volume"}, {"endpoints":
[{"adminURL": "http://localhost:8773/services/Admin", "region":
"RegionOne", "internalURL": "http://localhost:8773/services/Cloud", "id":
"9aa14a6266ca44ad9091edfd1726ffe3", "publicURL":
"http://localhost:8773/services/Cloud"}], "endpoints_links": [], "type":
"ec2", "name": "ec2"}, {"endpoints": [{"adminURL": "http://localhost:/v1",
"region": "RegionOne", "internalURL":
"http://localhost:/v1/AUTH_38574792821d43d58bcecc7760149a90", "id":
"a832b7b4ee5c417e8657aa5e35b8c6f5", "publicURL":
"http://localhost:/v1/AUTH_38574792821d43d58bcecc7760149a90"}],
"endpoints_links": [], "type": "object-store", "name": "swift"},
{"endpoints": [{"adminURL": "http://localhost:35357/v2.0", "region":
"RegionOne", "internalURL": "http://localhost:5000/v2.0", "id":
"00996f2061524df898642b463b43a69e", "publicURL":
"http://localhost:5000/v2.0"}],
"endpoints_links": [], "type": "identity", "name": "keystone"}], "user":
{"username": "admin", "roles_links": [], "id":
"bcc02d66f571431daff8fc1c89ebb869", "roles": [{"name": "_member_"},
{"name": "admin"}], "name": "admin"}, "metadata": {"is_admin": 0, "roles":
["9fe2ff9ee4384b1894a90878d3e92bab", "e880a3ec7a464240b5c6b9718f201a34"]}}}


REQ: curl -i
http://localhost:8776/v1/38574792821d43d58bcecc7760149a90/volumes/detail -X
GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-cinderclient" -H
"Accept: application/json" -H "X-Auth-Token:
ee3a054d267e4470a6b539acb9226604"

RESP: [401] {'date': 'Fri, 07 Jun 2013 06:20:00 GMT', 'content-length':
'276', 'content-type': 'text/plain; charset=UTF-8', 'www-authenticate':
"Keystone uri='http://127.0.0.1:35357'"}
RESP BODY: 401 Unauthorized

Thanks
Sreejith

On Thu, Jun 6, 2013 at 6:36 PM, Sreejith N
sreejith.naarakat...@gmail.com wrote:

 Hi,
I am getting 'Unauthorized' response when trying to create a
 test volume with cinder. Looking at the traces, token request is authorized
 and successful.

 In the second request the user_id is null. Is it normal?

 $keystone user-list
 +----------------------------------+--------+---------+-------+
 |                id                |  name  | enabled | email |
 +----------------------------------+--------+---------+-------+
 | bcc02d66f571431daff8fc1c89ebb869 | admin  |   True  |       |
 | 614ff08b4f604dadbfb0f2bb66a5f0f4 |  ec2   |   True  |       |
 | 0abdaa883e744cfcb6bc6d011cdc0b82 | glance |   True  |       |
 | 73277295da634a13b21387bf60bfa634 |  nova  |   True  |       |
 | bd376af8d1a84a58ae98a378e32db29d | swift  |   True  |       |
 +----------------------------------+--------+---------+-------+
 Here should I also have a 'cinder' admin role user?

 I have verified cinder db connection with it's credentials. Any help or
 hint is greatly appreciated.

 Thanks
 Sreejith

 $cinder --debug create --display_name test 1

 REQ: curl -i http://localhost:5000/v2.0/tokens -X POST -H "Content-Type:
 application/json" -H "Accept: application/json" -H "User-Agent:
 python-cinderclient" -d '{"auth": {"tenantName": "demo",
 "passwordCredentials": {"username": "admin", "password": "secrete"}}}'

 RESP: [200] {'date': 'Thu, 06 Jun 2013 12:31:47 GMT', 'content-type':
 'application/json', 'content-length': '6123', 'vary': 'X-Auth-Token'}
 RESP BODY: {access: 

Re: [Openstack] [Keystone] Policy settings not working correctly

2013-06-07 Thread Heiko Krämer
Hi Guang,

Thanks for your hint, but that's not the solution: in your example all
users with the KeystoneAdmin role get the same rights as admin, which is
useless for my case.

@Adam: so I have no way to get policy management working? I can't specify
that the KeystoneAdmin role is only allowed to create and delete users and
nothing more?
I saw there is a MySQL-based policy backend instead of the file, but there
are no CLI commands available for it, right?
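
For reference, the kind of per-action restriction described above would
look roughly like this in policy.json, reusing the rule names from earlier
in this thread (a sketch: the exact action list is an assumption, and
whether the V2 admin API honours these rules is exactly the open question
here):

```json
{
    "admin_required": [["role:admin"], ["is_admin:1"]],
    "admin_or_kadmin": [["rule:admin_required"], ["role:KeystoneAdmin"]],
    "identity:create_user": [["rule:admin_or_kadmin"]],
    "identity:delete_user": [["rule:admin_or_kadmin"]],
    "identity:update_user": [["rule:admin_required"]]
}
```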


Thx and Greetings
Heiko

On 07.06.2013 07:59, Yee, Guang wrote:

 I think keystone client is still V2 by default, which is enforcing
 admin_required.

  

 Try this

  

 "admin_required": [["role:KeystoneAdmin"], ["role:admin"],
 ["is_admin:1"]],

  

  

 Guang

  

  

 *From:*Openstack
 [mailto:openstack-bounces+guang.yee=hp@lists.launchpad.net] *On
 Behalf Of *Adam Young
 *Sent:* Thursday, June 06, 2013 7:28 PM
 *To:* Heiko Krämer; openstack
 *Subject:* Re: [Openstack] [Keystone] Policy settings not working
 correctly

  

 What is the actual question here?  Is it "why is this failing" or
 "why was it done that way"?


 On 06/04/2013 07:47 AM, Heiko Krämer wrote:

 Heyho guys :)

 I have a little problem with policy settings in Keystone. I
 created a new rule in my policy file and restarted Keystone, but I
 still don't have privileges.


 What is the rule?


 Example:


 keystone user-create --name kadmin --pw lala
 keystone user-role-add --

 keystone role-list --user kadmin --role KeystoneAdmin --tenant admin

 +----------------------------------+----------------------+
 |                id                |         name         |
 +----------------------------------+----------------------+
 | 3f5c0af585db46aeaec49da28900de28 |    KeystoneAdmin     |
 | dccfed0bd790420bbf1982686cbf7e31 | KeystoneServiceAdmin |
 +----------------------------------+----------------------+


 cat /etc/keystone/policy.json

 {
 "admin_required": [["role:admin"], ["is_admin:1"]],
 "owner": [["user_id:%(user_id)s"]],
 "admin_or_owner": [["rule:admin_required"], ["rule:owner"]],
 "admin_or_kadmin": [["rule:admin_required"], ["role:KeystoneAdmin"]],

 "default": [["rule:admin_required"]],
 [...]
 "identity:list_users": [["rule:admin_or_kadmin"]],
 [...]

 loading kadmin creds

 keystone user-list
 Unable to communicate with identity service: {"error": {"message":
 "You are not authorized to perform the requested action:
 admin_required", "code": 403, "title": "Not Authorized"}}. (HTTP 403)


 In log file i see:
 DEBUG [keystone.policy.backends.rules] enforce admin_required:
 {'tenant_id': u'b33bf3927d4e449a98cec4a883148110', 'user_id':
 u'46a6a9e429db483f8346f0259e99d6a5', u'roles': [u'KeystoneAdmin']}




 Why does Keystone enforce the admin_required rule instead of the defined
 rule (admin_or_kadmin)?


 Historical reasons.  We are trying to clean this up. 





 Keystone conf:
 [...]

 # Path to your policy definition containing identity actions
 policy_file = policy.json
 [..]
 [policy]
 driver = keystone.policy.backends.rules.Policy




 Any have an idea ?

 Thx and greetings
 Heiko





  




Re: [Openstack] [Swift] Swift load balancing

2013-06-07 Thread Heiko Krämer
Hey Kotwani,

we are using a software load balancer, but at L3 (Keepalived).
DNS round robin is not a load balancer :) If one node is down, some
connections will still arrive at the down host, so that's not the right
way, I think.

An HTTP proxy is an option, but it turns your WAN connection into a
bottleneck, because all traffic has to pass through your proxy server.

You can use Keepalived as a Layer 3 load balancer, so all incoming
requests are distributed to the Swift proxy servers and answered by them
directly. You don't have a bottleneck, because you use the WAN connection
of each Swift proxy server, and you get automatic failover of the load
balancer itself with another hot-standby LB (Keepalived works out of the
box with Pacemaker + Corosync for LB failover).
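
A minimal sketch of such a Keepalived L3 setup (all IPs, the VIP, and the
interface name are invented placeholders, not from the thread; this sketch
relies on Keepalived's own VRRP + LVS health checks rather than
Pacemaker/Corosync):

```
# Hedged keepalived.conf sketch for the setup described above.
# 10.0.0.100 (VIP), eth0, and the proxy IPs are placeholders.
vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the hot-standby LB
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100
    }
}

virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo rr                   # round-robin across the Swift proxies
    lb_kind DR                   # direct routing: replies leave via each
                                 # proxy's own uplink, avoiding a bottleneck
    protocol TCP
    real_server 10.0.0.11 80 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 10.0.0.12 80 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```

The DR (direct routing) mode is what matches the point above about using
each proxy's own WAN connection for responses.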


Greetings
Heiko

On 07.06.2013 06:40, Chu Duc Minh wrote:
 If you choose to use DNS round robin, you can set a small TTL and use a
 script/tool to continuously check proxy nodes and reconfigure the DNS
 record if a proxy node goes down, and vice-versa.
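
That check-and-reconfigure loop can be sketched as follows (a sketch, not
production code: /healthcheck is Swift's standard proxy healthcheck
endpoint, while pushing the surviving list into the short-TTL DNS record is
deployment-specific and left out):

```python
# Hedged sketch of the "continuously check proxies, shrink the DNS record"
# idea. is_proxy_healthy probes a Swift proxy's /healthcheck endpoint; the
# DNS-update step depends on your DNS server and is not shown.
import urllib.request


def is_proxy_healthy(base_url, timeout=2):
    """Return True if the Swift proxy answers its healthcheck endpoint."""
    try:
        with urllib.request.urlopen(base_url + "/healthcheck",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def live_proxies(proxies, probe=is_proxy_healthy):
    """Filter the proxy list down to the nodes that pass the probe."""
    return [p for p in proxies if probe(p)]
```

The surviving list would then be written into the round-robin A record on
each pass of the loop.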

 If you choose to use SW load-balancer, I suggest HAProxy for
 performance (many high-traffic websites use it) and NGinx for features
 (if you really need features provided by Nginx).
 IMHO, I like Nginx more than Haproxy. It's stable, modern, high
 performance, and full-featured.


 On Fri, Jun 7, 2013 at 6:28 AM, Kotwani, Mukul mukul.g.kotw...@hp.com wrote:

 Hello folks,

 I wanted to check and see what others are using in the case of a
 Swift installation with multiple proxy servers for load
 balancing/distribution. Based on my reading, the approaches used
 are DNS round robin, or SW load balancers such as Pound, or HW
 load balancers. I am really interested in finding out what others
 have been using in their installations. Also, if there are issues
 that you have seen related to the approach you are using, and any
 other information you think would help would be greatly appreciated.

  

 As I understand it, DNS round robin does not check the state of
 the service behind it, so if a service goes down, DNS will still
 send the record and the record requires manual removal(?). Also, I
 am not sure how well it scales or if there are any other issues.
 About Pound, I am not sure what kind of resources it expects and
 what kind of scalability it has, and yet again, what other issues
 have been seen.

  

 Real world examples and problems seen by you guys would definitely
 help in understanding the options better.

  

 Thanks!

 Mukul

  









Re: [Openstack] [HyperV][Ceilometer] Performance statistics from Hyper-V with Ceilometer and libvirt

2013-06-07 Thread Julien Danjou
On Thu, Jun 06 2013, Peter Pouliot wrote:

 The hyper-v driver uses WMI.
 Libvirt is not used. There is currently no support for celometer, however we
 should have havana blueprints meaning it is one of the things we are trying
 to deliver.

We'd be glad to have this support indeed. I don't think I saw any
blueprint about this on Ceilometer, did you already create them
somewhere?

-- 
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info




[Openstack] Unable to connect to MYSQL DB using inbuilt session provided by openstack

2013-06-07 Thread swapnil khanapurkar
Hi All,

I want to use the existing engine created by OpenStack to connect to
the MySQL database; however, somehow I am not able to connect.

The existing engine connects to the SQLite database (which is the
default db) and not MySQL.

I got the Openstack session and engine from :
from nova.openstack.common.db.sqlalchemy import session as db_session

get_session = db_session.get_session
session = get_session()

get_engine = db_session.get_engine
engine=get_engine()
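
For what it's worth, that helper builds its engine from the database URL in
the service's configuration; if it is unset (or the code runs without the
right config file loaded), it falls back to the bundled SQLite file, which
would match the session dump below. A hedged nova.conf fragment
(Folsom/Grizzly-era option name; credentials and host are placeholders):

```ini
[DEFAULT]
# Hedged sketch: point the OpenStack-managed engine at MySQL instead of
# the default SQLite file.
sql_connection = mysql://username:password@hostip/nova
```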



This is inbuilt session info of Openstack :

{'autocommit': True, 'autoflush': True, 'transaction': None,
'hash_key': 1L, 'expire_on_commit': False, '_new': {}, 'bind':
Engine(sqlite:opt/stack/nova/nova/openstack/common/db/nova.sqlite),
'_deleted': {}, '_flushing': False, 'identity_map': {},
'_enable_transaction_accounting': True, '_identity_cls': <class
'sqlalchemy.orm.identity.WeakInstanceDict'>, 'twophase': False,
'_Session__binds': {}, '_query_cls': <class
'nova.openstack.common.db.sqlalchemy.session.Query'>}



Then I manually created a session and engine and was able to connect to
the db and query it. The manually created session info is:

{'autocommit': False, 'autoflush': True, 'transaction':
<sqlalchemy.orm.session.SessionTransaction object at 0xa2fe34c>,
'hash_key': 1L, 'expire_on_commit': True, '_new': {}, 'bind':
Engine(mysql://username:passworsd@hostip/nova), '_deleted': {},
'_flushing': False, 'identity_map': {},
'_enable_transaction_accounting': True, '_identity_cls': <class
'sqlalchemy.orm.identity.WeakInstanceDict'>, 'twophase': False,
'_Session__binds': {}, '_query_cls': <class
'sqlalchemy.orm.query.Query'>}


Please let me know how to proceed further. Any help is appreciated.

Thanks,
Swapnil



[Openstack] novncproxy and keymaps

2013-06-07 Thread Dennis Jacobfeuerborn

Hi,
I am now at a point in my deployment where I can start instances and 
access the console through Horizon. The problem is that I get a weird 
keyboard layout, and when I add vnc_keymap=de in nova.conf it changes, 
but to an en-us layout rather than the de (German) layout I requested.


The 'de' keymap is properly reflected in the instance's XML config:
...
<graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0' 
keymap='de'/>
...

Any ideas on how to get the proper layout working in the Horizon console?

Regards,
  Dennis



Re: [Openstack] [HyperV][Ceilometer] Performance statistics from Hyper-V with Ceilometer and libvirt

2013-06-07 Thread Peter Pouliot


https://blueprints.launchpad.net/ceilometer/+spec/hyper-v-agent

Sent from my Verizon Wireless 4G LTE Smartphone



 Original message 
From: Julien Danjou jul...@danjou.info
Date: 06/07/2013 4:37 AM (GMT-05:00)
To: Peter Pouliot ppoul...@microsoft.com
Cc: Bruno Oliveira brunnop.olive...@gmail.com, OpenStack 
openstack@lists.launchpad.net
Subject: Re: [Openstack] [HyperV][Ceilometer] Performance statistics from 
Hyper-V with Ceilometer and libvirt


On Thu, Jun 06 2013, Peter Pouliot wrote:

 The hyper-v driver uses WMI.
 Libvirt is not used. There is currently no support for celometer, however we
 should have havana blueprints meaning it is one of the things we are trying
 to deliver.

We'd be glad to have this support indeed. I don't think I saw any
blueprint about this on Ceilometer, did you already create them
somewhere?

--
Julien Danjou
;; Free Software hacker ; freelance consultant
;; http://julien.danjou.info


[Openstack] NetApp + Openstack folsom

2013-06-07 Thread Alexandre De Carvalho
Hi !


I have 1 controller, 1 compute node, and 1 block storage node, and all of
this works well.  (Ubuntu 12.04 LTS + OpenStack Folsom)

I would like to add a NetApp iSCSI FAS2020 to this setup, but I don't
know how, and I can't find any documentation on it.


If you can help me, I'm interested!


Thanks for your help !

-- 
regards,
Alexandre


Re: [Openstack] NetApp + Openstack folsom

2013-06-07 Thread Diego Parrilla Santamaría
We have several deployments of customers with StackOps running on NetApp
like a breeze.

Check this document: https://communities.netapp.com/docs/DOC-24892

Cheers
Diego

--
Diego Parrilla
CEO, StackOps
www.stackops.com | diego.parri...@stackops.com | +34 649 94 43 29 |
skype:diegoparrilla
http://www.stackops.com/


On Fri, Jun 7, 2013 at 3:18 PM, Alexandre De Carvalho 
alexandre7.decarva...@gmail.com wrote:

 Hi !


 I have : 1 controller, 1 compute, 1 block storage and all that this works
 well.  (Ubuntu 12.04 LTS + OpenStack Folsom)

 And i would like to add a NetApp iSCSI FAS2020 for this structure. But i
 don't know how and I don't find any document to do it.


 If you can help me, i'm interested !


 Thanks for your help !

 --
 regards,
 Alexandre







Re: [Openstack] [Keystone] Policy settings not working correctly

2013-06-07 Thread Brant Knudson
Heiko --

Guang's response provides the hint that could get you where you want to go
-- try using the V3 Identity API rather than the V2 admin API. The V2 admin
API essentially ignores policy and only allows the admin role. Here are the
docs on the V3 API:
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/src/markdown/identity-api-v3.md
The openstack client may provide a CLI for the commands you want to run.
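
To illustrate the difference (a sketch, not from the thread): per the
linked spec, a V3 token request POSTs a JSON body like the one built below
to /v3/auth/tokens. The user name and password are just the example values
from earlier in this thread, and the domain name is an assumption.

```python
import json


# Hedged sketch: build the V3 "password" auth body described in the
# linked identity-api-v3 spec. User/password are the thread's example
# values; the "Default" domain is an assumption.
def v3_auth_body(username, password, domain="Default"):
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            }
        }
    }


# POST this as JSON to http://<keystone-host>:5000/v3/auth/tokens
print(json.dumps(v3_auth_body("kadmin", "lala")))
```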

-- Brant



On Fri, Jun 7, 2013 at 3:07 AM, Heiko Krämer i...@honeybutcher.de wrote:

  Hi Guang,

 thx for your hint but that's not the reason because in your example all
 users with the KeystoneAdmin role have the same rights as the admin and
 thats useless.

 @Adam so i've no chance to get the policy management working ? I can't say
 the KeystoneAdmin role is only allowed to create and delete users and
 nothing more ?
 I saw instead of the file a mysql base policy management but thers no cli
 commands available right ?


 Thx and Greetings
 Heiko


 On 07.06.2013 07:59, Yee, Guang wrote:

  I think keystone client is still V2 by default, which is enforcing
 admin_required.

 Try this

 "admin_required": [["role:KeystoneAdmin"], ["role:admin"], ["is_admin:1"]],

 Guang

 *From:* Openstack
 [mailto:openstack-bounces+guang.yee=hp@lists.launchpad.net]
 *On Behalf Of *Adam Young
 *Sent:* Thursday, June 06, 2013 7:28 PM
 *To:* Heiko Krämer; openstack
 *Subject:* Re: [Openstack] [Keystone] Policy settings not working
 correctly

 What is the actual question here?  Is it "why is this failing" or "why
 was it done that way"?


 On 06/04/2013 07:47 AM, Heiko Krämer wrote:

 Heyho guys :)

 I have a little problem with policy settings in Keystone. I created a new
 rule in my policy file and restarted Keystone, but I still don't have
 privileges.


 What is the rule?

 


 Example:


 keystone user-create --name kadmin --pw lala
 keystone user-role-add --

 keystone role-list --user kadmin --role KeystoneAdmin --tenant admin

 +----------------------------------+----------------------+
 |                id                |         name         |
 +----------------------------------+----------------------+
 | 3f5c0af585db46aeaec49da28900de28 |    KeystoneAdmin     |
 | dccfed0bd790420bbf1982686cbf7e31 | KeystoneServiceAdmin |
 +----------------------------------+----------------------+


 cat /etc/keystone/policy.json

 {
 "admin_required": [["role:admin"], ["is_admin:1"]],
 "owner": [["user_id:%(user_id)s"]],
 "admin_or_owner": [["rule:admin_required"], ["rule:owner"]],
 "admin_or_kadmin": [["rule:admin_required"], ["role:KeystoneAdmin"]],

 "default": [["rule:admin_required"]],
 [...]
 "identity:list_users": [["rule:admin_or_kadmin"]],
 [...]

 loading kadmin creds

 keystone user-list
 Unable to communicate with identity service: {"error": {"message": "You
 are not authorized to perform the requested action: admin_required",
 "code": 403, "title": "Not Authorized"}}. (HTTP 403)


 In log file i see:
 DEBUG [keystone.policy.backends.rules] enforce admin_required:
 {'tenant_id': u'b33bf3927d4e449a98cec4a883148110', 'user_id':
 u'46a6a9e429db483f8346f0259e99d6a5', u'roles': [u'KeystoneAdmin']}




 Why does Keystone enforce the *admin_required* rule instead of the defined
 rule (*admin_or_kadmin*)?


 Historical reasons.  We are trying to clean this up.


 




 Keystone conf:
 [...]

 # Path to your policy definition containing identity actions
 policy_file = policy.json
 [..]
 [policy]
 driver = keystone.policy.backends.rules.Policy




 Any have an idea ?

 Thx and greetings
 Heiko




 








[Openstack] [OPENSTACK] Grizzly (three node setup) Error Agent with agent_type=DHCP agent and host=network could not be found

2013-06-07 Thread Nikhil Mittal
Hello

I set up a three-node Grizzly deployment using Ubuntu 12.04. In
/var/log/quantum/server.log (on the controller node) I get the below error
whenever I run the command quantum agent-list. This command actually
returns nothing (just a blank line) on either the controller or the network
node that I run it on.

NOTE: the network node's host name is network.




2013-06-08 00:56:19DEBUG [quantum.openstack.common.rpc.amqp] received
{u'_context_roles': [u'admin'], u'_msg_id':
u'9987f73b2db44097ae472bd281210f2a', u'_context_read_deleted': u'no',
u'_context_tenant_id': None, u'args': {u'host': u'network'}, u'_unique_id':
u'a89b6f2b2dd04c97a2fb0ce694cac1b3', u'_context_is_admin': True,
u'version': u'1.0', u'_context_project_id': None, u'_context_timestamp':
u'2013-06-07 06:24:13.416063', u'_context_user_id': None, u'method':
u'get_active_networks'}

2013-06-08 00:56:19DEBUG [quantum.openstack.common.rpc.amqp] unpacked
context: {'user_id': None, 'roles': [u'admin'], 'tenant_id': None,
'is_admin': True, 'timestamp': u'2013-06-07 06:24:13.416063', 'project_id':
None, 'read_deleted': u'no'}

2013-06-08 00:56:19DEBUG [quantum.db.dhcp_rpc_base] Network list
requested from network

2013-06-08 00:56:19  WARNING [quantum.scheduler.dhcp_agent_scheduler] No
enabled DHCP agent on host network

2013-06-08 00:56:19ERROR [quantum.openstack.common.rpc.amqp] Exception
during message handling

Traceback (most recent call last):

  File
"/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py",
line 430, in _process_data

rval = self.proxy.dispatch(ctxt, version, method, **args)

  File "/usr/lib/python2.7/dist-packages/quantum/common/rpc.py", line 43,
in dispatch

quantum_ctxt, version, method, **kwargs)

  File
"/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/dispatcher.py",
line 133, in dispatch

return getattr(proxyobj, method)(ctxt, **kwargs)

  File "/usr/lib/python2.7/dist-packages/quantum/db/dhcp_rpc_base.py", line
42, in get_active_networks

context, host)

  File "/usr/lib/python2.7/dist-packages/quantum/db/agentschedulers_db.py",
line 137, in list_active_networks_on_active_dhcp_agent

context, constants.AGENT_TYPE_DHCP, host)

  File "/usr/lib/python2.7/dist-packages/quantum/db/agents_db.py", line
125, in _get_agent_by_type_and_host

host=host)

AgentNotFoundByTypeHost: Agent with agent_type=DHCP agent and host=network
could not be found

2013-06-08 00:56:19ERROR [quantum.openstack.common.rpc.common]
Returning exception Agent with agent_type=DHCP agent and host=network could
not be found to caller

2013-06-08 00:56:19ERROR [quantum.openstack.common.rpc.common]
['Traceback (most recent call last):\n', '  File
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/amqp.py,
line 430, in _process_data\nrval = self.proxy.dispatch(ctxt, version,
method, **args)\n', '  File
/usr/lib/python2.7/dist-packages/quantum/common/rpc.py, line 43, in
dispatch\nquantum_ctxt, version, method, **kwargs)\n', '  File
/usr/lib/python2.7/dist-packages/quantum/openstack/common/rpc/dispatcher.py,
line 133, in dispatch\nreturn getattr(proxyobj, method)(ctxt,
**kwargs)\n', '  File
/usr/lib/python2.7/dist-packages/quantum/db/dhcp_rpc_base.py, line 42, in
get_active_networks\ncontext, host)\n', '  File
/usr/lib/python2.7/dist-packages/quantum/db/agentschedulers_db.py, line
137, in list_active_networks_on_active_dhcp_agent\ncontext,
constants.AGENT_TYPE_DHCP, host)\n', '  File
/usr/lib/python2.7/dist-packages/quantum/db/agents_db.py, line 125, in
_get_agent_by_type_and_host\nhost=host)\n', 'AgentNotFoundByTypeHost:
Agent with agent_type=DHCP agent and host=network could not be found\n']



Thanks,

-Nikhil


Re: [Openstack] Ceilometer-api Auth Error

2013-06-07 Thread Bruno Oliveira
The auth token you got in out.txt seems fine to me...

Judging by the first output and the 401 Unauthorized, it sounds more
like a misconfiguration of the ceilometer user in keystone...

The same way you got an admin tenant, you should probably have an admin
user in keystone. Could you possibly try to curl the auth token using it?

And then use that token to list the ceilometer /meters or /resources.

Let us know. Thanks
--

Bruno Oliveira
Developer, Software Engineer




On Fri, Jun 7, 2013 at 10:35 AM, Claudio Marques clau...@onesource.pt wrote:
 Hi guys
 (Sorry about the previous e-mail - I have sent it by mistake)

 I've changed all the configuration from localhost to the correct ip_addr -
 as Bruno guided me, and started all over again.

 Here's the output of all the tenants I have in OpenStack:

 keystone tenant-list
 +--+-+-+
 |id| name| enabled |
 +--+-+-+
 | 68c5e7308a234d889d9591b51891a30a |admin|   True  |
 | 0b0318f87f384247ae8b658f844ed9a4 | project_one |   True  |
 | 0300e74768a8445aa268f20a9846a7c1 |   service   |   True  |
 +--+-+-+

 I have created the ceilometer user in the keystone with the following
 command:

 keystone user-create --name=ceilometer --pass=ceilometer_pass --tenant-id
 68c5e7308a234d889d9591b51891a30a --email=ceilome...@domain.com

 Just to check if everything was ok:

 keystone user-get ceilometer
 +--+--+
 | Property |  Value   |
 +--+--+
 |  email   |  ceilome...@domain.com   |
 | enabled  |   True   |
 |id| a47c062e52f4407baf19db1a8613f5bf |
 |   name   |ceilometer|
 | tenantId | 68c5e7308a234d889d9591b51891a30a |
 +--+--+

 Then I created a service for ceilometer:

 keystone service-create --name=ceilometer --type=metering
 --description="Ceilometer Service"

 And then I created an endpoint in Keystone for ceilometer using the
 following command:

 keystone endpoint-create --region RegionOne --service_id
 22881e9089b342a58bde91712f090c6b --publicurl "http://10.0.1.167:8777/"
 --adminurl "http://10.10.10.53:8777/" --internalurl
 "http://10.10.10.53:8777/"

 Checking the endpoint list I get:

 keystone endpoint-list

 id: 4375fcf13fb843f497ae01a186e95098   region: RegionOne
   publicurl:   http://10.0.1.167:8776/v1/$(tenant_id)s
   internalurl: http://10.10.10.51:8776/v1/$(tenant_id)s
   adminurl:    http://10.10.10.51:8776/v1/$(tenant_id)s
   service_id:  a2a9c0733d124d2389c58cec06e24eae

 id: 5a37d2960f094677b3068f7b112addef   region: RegionOne
   publicurl:   http://10.0.1.167:9696/
   internalurl: http://10.10.10.51:9696/
   adminurl:    http://10.10.10.51:9696/
   service_id:  9fe761c9d83647f2953b5fbe037aa548

 id: 5cf12f7972de48e2bf342a3c961334d3   region: RegionOne
   publicurl:   http://10.0.1.167:5000/v2.0
   internalurl: http://10.10.10.51:5000/v2.0
   adminurl:    http://10.10.10.51:35357/v2.0
   service_id:  e50dff43e6184d15a3764fc220a7272a

 id: 9a8b00e0065643d4b100de944d7a30b0   region: RegionOne
   publicurl:   http://10.0.1.167:8773/services/Cloud
   internalurl: http://10.10.10.51:8773/services/Cloud
   adminurl:    http://10.10.10.51:8773/services/Admin
   service_id:  0908f8a92c2e406b9f99839d9d8076c2

 id: c85f6c95b5804d88a728f69cb1e125c5   region: RegionOne
   publicurl:   http://10.0.1.167:9292/v2
   internalurl: http://10.10.10.51:9292/v2
   adminurl:    http://10.10.10.51:9292/v2
   service_id:  fc70a5946d2c4fadb36ce14461c2a7a0

 id: ea7d0c2d4d8d4f37b6f505994a30a7ea   region: RegionOne
   publicurl:   http://10.0.1.167:8777/
   internalurl: http://10.10.10.51:8777/
   adminurl:    http://10.10.10.51:8777/
   service_id:  22881e9089b342a58bde91712f090c6b

 id: f4543edef18d4a42a22a2d566bca72d2   region: RegionOne
   publicurl:   http://10.0.1.167:8774/v2/$(tenant_id)s
   internalurl: http://10.10.10.51:8774/v2/$(tenant_id)s
   adminurl:    http://10.10.10.51:8774/v2/$(tenant_id)s
   service_id:  0d780e90409e45ceaa870f5c0b16d6a6



 My credentials in OpenStack are

 user: ceilometer
 password: ceilometer_pass
 tenantid: 68c5e7308a234d889d9591b51891a30a
 tenantName: admin

 I have attached my ceilometer.conf file in case any of you doesn't mind to
 

[Openstack] CloudFoundry on Openstack Grizzly Part 1

2013-06-07 Thread Heiko Krämer
Heyho guys,

I've written the first part of a guide on how to deploy CloudFoundry on
OpenStack. The second part is coming soon.


http://honeybutcher.de/2013/06/cloudfoundry-micro-bosh-openstack-grizzly/


Greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Swift load balancing

2013-06-07 Thread John Dickinson
The given options (DNS, SW load balancer, and HW load balancer) are all things 
I've seen people use in production Swift clusters.

As mentioned in another reply, DNSRR isn't really load balancing, but it can be 
used if nothing else is available.

One thing to consider when choosing a load balancer is if you want it to also 
terminate your SSL connections. You shouldn't ever terminate SSL within the 
Swift proxy itself, so you either need something local (like stunnel or stud) 
or you can combine the functionality with something like Pound or HAProxy. Both 
Pound and HAProxy can do load balancing and SSL termination, but for SSL they 
both use OpenSSL, so you won't see a big difference in SSL performance. Another 
free option (for smaller clusters) is using LVS.
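A minimal HAProxy sketch combining the two roles John describes, balancing and SSL termination (addresses, cert path, and server names are examples; native `ssl` on `bind` requires HAProxy 1.5+, so older installs would front this with stunnel/stud or Pound instead):

```
frontend swift_https
    bind 192.0.2.10:443 ssl crt /etc/haproxy/swift.pem
    default_backend swift_proxies

backend swift_proxies
    balance roundrobin
    option httpchk GET /healthcheck
    server proxy1 10.0.0.11:8080 check
    server proxy2 10.0.0.12:8080 check
```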

You could also use commercial load balancers with varying degrees of success.

Swift supports being able to tell the healthcheck middleware to send an error 
or not 
(https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L185),
 so when configuring your load balancer, you can more simply manage the 
interaction with the proxy servers by taking advantage of this feature.
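The disable knob John refers to lives in the proxy pipeline's healthcheck filter; a sketch for proxy-server.conf (the disable_path value is an example):

```ini
[filter:healthcheck]
use = egg:swift#healthcheck
# While this file exists, GET /healthcheck returns "503 DISABLED BY FILE",
# so a load balancer doing httpchk will drain the node before maintenance.
disable_path = /etc/swift/proxy_disabled
```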

I would strongly recommend against using nginx as a front-end to a Swift 
cluster. nginx spools request bodies locally, so it is not a good option in 
front of a storage system when the request bodies could be rather large.

--John





On Jun 7, 2013, at 1:24 AM, Heiko Krämer i...@honeybutcher.de wrote:

 Hey Kotwani,
 
 we are using a SW load balancer, but at L3 (keepalived).
 DNS round robin is not a load balancer :) if one node is down, some 
 connections will still arrive at the down host, and I don't think that's the 
 right approach.
 
 An HTTP proxy is an option, but it makes your WAN connection a bottleneck 
 because all traffic passes through the proxy server.
 
 You can use keepalived as a Layer 3 load balancer: incoming requests are 
 distributed to the Swift proxy servers and the responses are delivered by 
 them directly. There is no bottleneck, because each Swift proxy uses its own 
 WAN connection, and you get automatic failover to another hot-standby load 
 balancer (keepalived works out of the box with pacemaker + corosync for LB 
 failover).
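A sketch of the LVS setup Heiko describes, as a keepalived.conf fragment (VIP and real-server addresses are examples); `lb_kind DR` is what lets each proxy reply over its own WAN link:

```
virtual_server 192.0.2.10 443 {
    delay_loop 6
    lb_algo rr
    lb_kind DR        # direct return: responses bypass the load balancer
    protocol TCP

    real_server 10.0.0.11 443 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 10.0.0.12 443 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```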
 
 
 Greetings
 Heiko
 
 On 07.06.2013 06:40, Chu Duc Minh wrote:
 If you choose to use DNS round robin, you can set a small TTL and use a 
 script/tool to continuously check the proxy nodes, reconfiguring the DNS 
 record if one proxy node goes down (and vice versa).
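A minimal sketch of such a checker: probe each proxy and publish only the healthy ones in the round-robin record. Node names and the probe are illustrative; a real version would hit each proxy's /healthcheck URL and push the record through your DNS server's API.

```python
def healthy_nodes(nodes, probe):
    """Keep only the nodes whose health probe succeeds."""
    return [n for n in nodes if probe(n)]

def rebuild_record(nodes, probe, fallback):
    """Build the record; never publish an empty one (DNS must still resolve)."""
    up = healthy_nodes(nodes, probe)
    return up if up else [fallback]

if __name__ == "__main__":
    status = {"proxy1": True, "proxy2": False, "proxy3": True}
    record = rebuild_record(sorted(status), status.get, "proxy1")
    print(record)  # proxy2 is down, so it is left out of the record
```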
 
 If you choose to use a SW load balancer, I suggest HAProxy for performance 
 (many high-traffic websites use it) and Nginx for features (if you really 
 need the features Nginx provides). 
 IMHO, I like Nginx more than HAProxy. It's stable, modern, high-performance, 
 and full-featured.
 
 
 On Fri, Jun 7, 2013 at 6:28 AM, Kotwani, Mukul mukul.g.kotw...@hp.com 
 wrote:
 Hello folks,
 
 I wanted to check and see what others are using in the case of a Swift 
 installation with multiple proxy servers for load balancing/distribution. 
 Based on my reading, the approaches used are DNS round robin, or SW load 
 balancers such as Pound, or HW load balancers. I am really interested in 
 finding out what others have been using in their installations. Also, any 
 issues you have seen related to the approach you are using, and any other 
 information you think would help, would be greatly appreciated.
 
  
 As I understand it, DNS round robin does not check the state of the service 
 behind it, so if a service goes down, DNS will still send the record and the 
 record requires manual removal(?). Also, I am not sure how well it scales or 
 if there are any other issues. About Pound, I am not sure what kind of 
 resources it expects and what kind of scalability it has, and yet again, 
 what other issues have been seen.
 
  
 Real world examples and problems seen by you guys would definitely help in 
 understanding the options better.
 
  
 Thanks!
 
 Mukul
 
  
 


[Openstack] Cinder Callbacks from nova-compute

2013-06-07 Thread Wolfgang Richter
My nova-compute nodes appear to be using a hostname for my Cinder host that
is incorrect.

How do I set the hostname for the cinder-volume host on each nova-compute
node?

Some setting in /etc/nova/nova.conf?

-- 
Wolf


Re: [Openstack] Problems with rabbitmq + haproxy + cinder

2013-06-07 Thread Samuel Winchenbach
Hi Ray, thanks for the response.

I am using RabbitMQ in mirrored queues mode.  I am setting up a Grizzly
test cluster and didn't realize that support for multiple servers was
added!  That is great news.   I will give that a shot, thanks.

Sam


On Fri, Jun 7, 2013 at 3:16 PM, Ray Pekowski pekow...@gmail.com wrote:


 Seems like it might have something to do with IPv6.  It looks like
 RabbitMQ is only listening on IPv6.  Note the :::5673.  Maybe you could
 look into how to configure RabbitMQ to listen on IPv4.
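One way to force an IPv4 listener, as a /etc/rabbitmq/rabbitmq.config sketch (the port shown is the default 5672; adjust to match the 5673 setup above):

```erlang
%% Bind the AMQP listener explicitly to IPv4 instead of the IPv6 wildcard.
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]}
  ]}
].
```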

 But I am curious why you don't just use the HA capabilities of RabbitMQ?
 Are you on Folsom?  I think OpenStack RPC added support for multiple
 RabbitMQ servers and HA in Grizzly.  I suppose you could backport that
 feature, but might be a pain.
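With the Grizzly RPC support Ray mentions, the brokers can be listed directly in each service's config instead of routing AMQP through haproxy; a cinder.conf/nova.conf sketch (hostnames are examples):

```ini
rabbit_hosts = rabbit1:5672,rabbit2:5672
# Declare queues with x-ha-policy so they are mirrored across the cluster.
rabbit_ha_queues = true
```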

 Ray

 On Fri, Jun 7, 2013 at 12:48 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 Hi all, I am having a few troubles getting haproxy and cinder-api to work
 together correctly.  If I set the rabbit_host & port to the actual service
 (not through haproxy) it seems to work fine.  The following is a bunch of
 debugging information:

 Here are the errors in my cinder log:
 http://pastie.org/pastes/8020123/text

 Here are the non-default cinder configuration options:
 http://pastie.org/pastes/8020082/text

 Note: I have cinder running on a non-standard port because ultimately it
 too will be load balanced with haproxy.

 Here is a section my haproxy-int.cfg:
 http://pastie.org/pastes/8020077/text

 Status, permissions, and policies of the rabbitmq cluster:
 http://pastie.org/pastes/8020114/text

 Does anyone see anything wrong, or have suggestions?

 Thanks so much!

 P.S. If anyone can explain the difference between logdir and log_dir that
 would be awesome!

 Thanks,
 Sam



Re: [Openstack] Cinder Callbacks from nova-compute

2013-06-07 Thread Wolfgang Richter
Fixed the issue.

Turns out Keystone was providing a URL (from the database driver) that I
didn't want it to use for callbacks (nova-compute on a compute server):

I manually updated the table 'endpoint' in the 'keystone' database to
change the URL for the 'public' interface associated with 'cinder' (URL
using port 8776) to point to an internal IP.

Is this desired behavior?  nova-compute is contacting the 'public'
interface of a service ('cinder')?  Why aren't the OpenStack components
using the 'internal' URL?  What is the distinction here?
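Rather than editing the keystone endpoint table by hand, Grizzly nova can be told which catalog interface to select for cinder calls; a hedged nova.conf sketch (verify the option exists in your release before relying on it):

```ini
[DEFAULT]
# Format is service_type:service_name:endpoint_type — pick the internal endpoint.
cinder_catalog_info = volume:cinder:internalURL
```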

--
Wolf


On Fri, Jun 7, 2013 at 12:42 PM, Wolfgang Richter w...@cs.cmu.edu wrote:

 My nova-compute nodes appear to be using a hostname for my Cinder host
 that is incorrect.

 How do I set the hostname for the cinder-volume host on each nova-compute
 node?

 Some setting in /etc/nova/nova.conf?

 --
 Wolf




-- 
Wolf


Re: [Openstack] cc_ssh.py warning (cloud-init issue resolved)

2013-06-07 Thread Justin Chiu




 Original Message 
Subject:Re: [Openstack] cc_ssh.py warning (cloud-init issue resolved)
Date:   Fri, 07 Jun 2013 12:31:45 -0700
From:   Justin Chiu j.c...@cern.ch
To: Steven Hardy sha...@redhat.com



Thanks Steve.

cloud-init-0.6.3-0.12.bzr532.el6.noarch
python-boto-2.5.2-3.el6.noarch
(grabbed from EPEL)

Some good and bad news. The issue of cloud-init not being able to obtain 
metadata seems to have resolved itself. Launched a dozen instances and 
they all grabbed the metadata just fine.

I will post if I run into the metadata issue again...
--
I've run into a (not so critical) issue with one of the scripts:

cc_ssh.py[WARNING]: applying credentials failed!

Further down in the log:

ec2: #

ec2: -BEGIN SSH HOST KEY FINGERPRINTS-

ec2: 1024 XX:XX:... /etc/ssh/ssh_host_dsa_key.pub (DSA)

ec2: 2048 XX:XX:... /etc/ssh/ssh_host_key.pub (RSA1)

ec2: 2048 XX:XX:... /etc/ssh/ssh_host_rsa_key.pub (RSA)

ec2: -END SSH HOST KEY FINGERPRINTS-

ec2: #

-BEGIN SSH HOST KEY KEYS-
*my keys*
-END SSH HOST KEY KEYS-

So it seems like the keys are applied. Furthermore, I can log-in with 
the corresponding private key just fine.
Is there some non-critical incompatibility between the cloud-init 
scripts and SSH paths, etc...that I have overlooked?


Thanks for your help,
Justin

On 2013-06-06 2:27 AM, Steven Hardy wrote:

On Wed, Jun 05, 2013 at 09:25:17AM -0700, Justin Chiu wrote:

Hi all,
I sent this message out a few days ago. I am still trying to figure
out what is going on. Any advice would be much appreciated.
--
I am having some issues with cloud-init being unable to contact the
metadata server. cloud-init built into a base Scientific Linux 6.4
image with Oz. Any ideas on what might be the cause?

Can you confirm the version of cloud-init and python-boto in your image?

I found on Fedora that cloud-init 0.7.x only works with newer (> 2.6.0)
boto versions.  Getting the wrong combination can lead to the sort of problems
you're seeing IME.
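The pairing rule can be encoded as a small check; the table is distilled from this thread, not from official package metadata, so treat it as a heuristic:

```python
def parse(version):
    """Turn "0.7.2" into (0, 7, 2) for tuple comparison."""
    return tuple(int(p) for p in version.split("."))

def compatible(cloud_init, boto):
    """Heuristic from the thread: cloud-init 0.7.x needs boto > 2.6.0."""
    if parse(cloud_init) >= (0, 7):
        return parse(boto) > (2, 6, 0)
    return True  # e.g. the 0.6.3 / boto 2.5.2 pair reported working above

print(compatible("0.6.3", "2.5.2"), compatible("0.7.2", "2.5.2"))  # → True False
```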

Steve






Re: [Openstack] Problems with rabbitmq + haproxy + cinder

2013-06-07 Thread Samuel Winchenbach
It doesn't appear that glance supports RabbitMQ HA cluster host:port pairs,
or am I missing something?   It seems odd that it would be the only service
not to support it.

Thanks,
Sam


On Fri, Jun 7, 2013 at 3:36 PM, Samuel Winchenbach swinc...@gmail.comwrote:

 Hi Ray, thanks for the response.

 I am using RabbitMQ in mirrored queues mode.  I am setting up a Grizzly
 test cluster and didn't realize that support for multiple servers were
 added!  That is great news.   I will give that a shot, thanks.

 Sam


 On Fri, Jun 7, 2013 at 3:16 PM, Ray Pekowski pekow...@gmail.com wrote:


 Seems like it might have something to do with IPv6.  It looks like
 RabbitMQ is only listening on IPv6.  Note the :::5673.  Maybe you could
 look into how to configure RabbitMQ to listen on IPv4.

 But I am curious why you don't just use the HA capabilities of RabbitMQ?
 Are you on Folsom?  I think OpenStack RPC added support for multiple
 RabbitMQ servers and HA in Grizzly.  I suppose you could backport that
 feature, but might be a pain.

 Ray

 On Fri, Jun 7, 2013 at 12:48 PM, Samuel Winchenbach 
 swinc...@gmail.comwrote:

 Hi all, I am having a few troubles getting haproxy and cinder-api to
 work together correctly.  If I set the rabbit_host & port to the actual
 service (not through haproxy) it seems to work fine.  The following is a
 bunch of debugging information:

 Here are the errors in my cinder log:
 http://pastie.org/pastes/8020123/text

 Here are the non-default cinder configuration options:
 http://pastie.org/pastes/8020082/text

 Note: I have cinder running on a non-standard port because ultimately it
 too will be load balanced with haproxy.

 Here is a section my haproxy-int.cfg:
 http://pastie.org/pastes/8020077/text

 Status, permissions, and policies of the rabbitmq cluster:
 http://pastie.org/pastes/8020114/text

 Does anyone see anything wrong, or have suggestions?

 Thanks so much!

 P.S. If anyone can explain the difference between logdir and log_dir
 that would be awesome!

 Thanks,
 Sam



Re: [Openstack] [HyperV][Quantum] Quantum dhcp agent not working for Hyper-V

2013-06-07 Thread Bruno Oliveira ~lychinus
(...) Do you have your vSwitch properly configured on your Hyper-V host? (...)

 I can't say for sure, Peter, but I think so...

From the troubleshooting we did (and are still doing), I can tell that
regardless of the network model we're using (FLAT or VLAN
Network),
the instance provisioned on Hyper-V (for some reason) can't
reach the quantum-l3-agent by default.
(I say "by default" because we only managed to get it working after hard, 
long and tedious troubleshooting, and we're still not sure that's how it 
should be done.)

Since it's not something quick to explain, I'll present the scenario:
(I'm not sure if it might be a candidate for a fix in quantum-l3-agent,
 so quantum-devs might be interested too)


Here's how our network interfaces turns out, in our network controller:

==
External bridge network
==

Bridge br-eth1
Port br-eth1
Interface br-eth1
type: internal
Port eth1.11
Interface eth1.11
Port phy-br-eth1
Interface phy-br-eth1

==
Internal network
==

   Bridge br-int
Port int-br-eth1
Interface int-br-eth1
Port br-int
Interface br-int
type: internal
Port tapb610a695-46
tag: 1
Interface tapb610a695-46
type: internal
Port qr-ef10bef4-fa
tag: 1
Interface qr-ef10bef4-fa
type: internal

==

There's another iface named br-ex that we're using for floating_ips,
but it has nothing to do with what we're doing right now, so I'm skipping it...


 So, for the hands-on 

I know it may be a little bit hard to understand, but I'll do my best
trying to explain:

1) The running instance on Hyper-V, which is linked to the Hyper-V vSwitch,
actually communicates with bridge br-eth1 (on the network controller).

NOTE: that's where the DHCP REQUEST from the instance lands.


2) The interface MAC address of that running instance on Hyper-V is
fa:16:3e:95:95:e4 (we'll use it in later steps).
Since DHCP is not fully working yet, we had to manually set an IP for
that instance: 10.5.5.3


3) From that instance interface, the DHCP broadcast should be forwarded
   FROM interface eth1.12 TO phy-br-eth1,
   and FROM interface phy-br-eth1 TO the bridge br-int.   *** THIS
IS WHERE THE PACKETS ARE DROPPED ***

Check it out for the actions:drop
-
root@osnetwork:~# ovs-dpctl dump-flows br-int  |grep 10.5.5.3

in_port(4),eth(src=fa:16:3e:f0:ac:8e,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=10.5.5.3,tip=10.5.5.1,op=1,sha=fa:16:3e:f0:ac:8e,tha=00:00:00:00:00:00),
packets:20, bytes:1120, used:0.412s, actions:drop
-

4) Finally, when the packet reaches the bridge br-int, the
DHCP_REQUEST should be forwarded to the
   DHCP interface, that is: tapb610a695-46   *** WHICH IS NOT
HAPPENING EITHER ***


5) How to fix :: bridge br-eth1

---
5.1. Getting to know the ifaces of 'br-eth1'
---
root@osnetwork:~# ovs-ofctl show br-eth1

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:e0db554e164b
n_tables:255, n_buffers:256 features: capabilities:0xc7, actions:0xfff

1(eth1.11): addr:e0:db:55:4e:16:4b
 config: 0
 state:  0
 current:10GB-FD AUTO_NEG
 advertised: 1GB-FD 10GB-FD FIBER AUTO_NEG
 supported:  1GB-FD 10GB-FD FIBER AUTO_NEG

3(phy-br-eth1): addr:26:9b:97:93:b9:70
 config: 0
 state:  0
 current:10GB-FD COPPER

LOCAL(br-eth1): addr:e0:db:55:4e:16:4b
 config: 0
 state:  0

OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0


---
5.2. Adding flow rules to enable passing (instead of dropping)
---

# the source mac_address (dl_src) is from the interface of the
# running instance on Hyper-V. This fixes the DROP (only)

root@osnetwork:~# ovs-ofctl add-flow br-eth1
priority=10,in_port=3,dl_src=fa:16:3e:95:95:e4,actions=normal



6) How to fix :: bridge br-int

---
6.1. Getting to know the ifaces of 'br-int'
---

root@osnetwork:~# ovs-ofctl show br-int

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:92976d64274d

n_tables:255, n_buffers:256  features: capabilities:0xc7, actions:0xfff

1(tapb610a695-46): addr:19:01:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN

4(int-br-eth1): addr:5a:56:e1:53:e9:90
 config: 0
 state:  0
 current:10GB-FD COPPER

5(qr-ef10bef4-fa): addr:19:01:00:00:00:00
 config:  

[Openstack] Quantum VLAN / GRE-node

2013-06-07 Thread Kannan, Hari
Slightly off topic question - as I'm fairly new to O~S

What is the use case scenario for preferring GRE vs VLAN? I would have 
thought GRE is preferable, as it doesn't require external h/w configuration 
and doesn't come with the VLAN scalability limitations, etc.

What is a preferred deployment model? Why would I choose one over the other??

Hari

From: Openstack 
[mailto:openstack-bounces+hari.kannan=hp@lists.launchpad.net] On Behalf Of 
Aaron Rosen
Sent: Wednesday, June 05, 2013 3:26 AM
To: Chu Duc Minh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Quantum VLAN tag mismatch between Network-node and 
Compute-node

Hi,

Those vlan tags you are showing are not the actual tags that will be seen on 
the wire. Those tags are auto incremented and used for each new port that lands 
on a server that is in a different network. If you run ovs-ofctl dump-flows 
br-int you'll see those vlan tags are stripped off and the correct one is added.
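The per-host bookkeeping Aaron describes can be sketched roughly like this (an illustration, not the actual agent code from ovs_quantum_agent.py): each network seen on a host is assigned the next free host-local tag, and br-int flows rewrite that tag to the provider segmentation_id on the wire, which is why the tags shown by `ovs-vsctl show` differ between hosts.

```python
class LocalVlanMap:
    """Per-host allocator for the local VLAN tags seen in `ovs-vsctl show`."""

    def __init__(self, first=1, last=4094):
        self.free = list(range(first, last + 1))
        self.by_network = {}

    def tag_for(self, network_id):
        """Allocate (on first use) and return the host-local tag."""
        if network_id not in self.by_network:
            self.by_network[network_id] = self.free.pop(0)
        return self.by_network[network_id]

    def release(self, network_id):
        """Return a network's tag to the free pool when its last port goes."""
        self.free.append(self.by_network.pop(network_id))

lvm = LocalVlanMap()
print(lvm.tag_for("net-a"), lvm.tag_for("net-b"), lvm.tag_for("net-a"))  # → 1 2 1
```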


Look here 
https://github.com/openstack/quantum/blob/master/quantum/plugins/openvswitch/agent/ovs_quantum_agent.py#L326
 if you're curious about what's going on.

Aaron

On Wed, Jun 5, 2013 at 2:25 AM, Chu Duc Minh 
chu.ducm...@gmail.commailto:chu.ducm...@gmail.com wrote:
Hi, i'm converting from GRE tunnel to VLAN tagging, and deleted all old 
project/user/net/subnet.

in file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini @ all nodes, I 
already set:
network_vlan_ranges = physnet1:2:4094
When I create a new net:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 6d7b116e-be0b-4019-8769-a50a9ca13406 |
| name  | net_proj_one |
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id  | 2|
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   | 959fe9e0-a79d-4d0f-8040-ebfab26d8182 |
| tenant_id | 29ba82e97f374492a4ca02c62eb0a953 |
+---+--+

But when i show in network-node:
# ovs-vsctl show
...
Bridge br-int
Port tapdddef664-ee
tag: 1
Interface tapdddef664-ee
type: internal
Port qr-f9ba0308-2c
tag: 1
Interface qr-f9ba0308-2c
type: internal
Port int-br-eth0
Interface int-br-eth0
Port br-int
Interface br-int
type: internal
Bridge br-eth0
Port br-eth0
Interface br-eth0
type: internal
Port phy-br-eth0
Interface phy-br-eth0
Port eth0
Interface eth0

The interfaces created for the router & DHCP are tagged with VLAN 1 (wrong! they 
should be created with VLAN 2).
I tried to find, in the config and the database, which setting makes it start 
with VLAN 1, but I couldn't.

Because of the VLAN tag mismatch, I can't access the VM instance.
Another weird thing: on the compute node, the tag is not constant when I 
create/terminate new instances:
# ovs-vsctl show
a9900940-f882-42f8-9b7c-9b42393ed8a4
Bridge qbred613362-fe
Port qvbed613362-fe
Interface qvbed613362-fe
Port qbred613362-fe
Interface qbred613362-fe
type: internal
Port taped613362-fe
Interface taped613362-fe
Bridge br-eth1
Port eth1
Interface eth1
Port br-eth1
Interface br-eth1
type: internal
Port phy-br-eth1
Interface phy-br-eth1
Bridge br-int
Port br-int
Interface br-int
type: internal
Port qvo9816466e-22
tag: 5
Interface qvo9816466e-22
Port int-br-eth1
Interface int-br-eth1
Port qvoed613362-fe
tag: 5
Interface qvoed613362-fe
Bridge qbr9816466e-22
Port qbr9816466e-22
Interface qbr9816466e-22
type: internal
Port tap9816466e-22
Interface tap9816466e-22
Port qvb9816466e-22
Interface qvb9816466e-22
Bridge virbr0
Port virbr0
Interface virbr0
type: internal

Do you know why it happen?

When everything is OK, shouldn't the tag on both the network node & compute 
node equal 2 (for the first VM network), given that I configured 
network_vlan_ranges = physnet1:2:4094?

Thank you very much!


[Openstack] OpenStack Community Weekly Newsletter (May 31 – June 7)

2013-06-07 Thread Stefano Maffulli


 OpenStack 2013.1.2 released
 
http://lists.openstack.org/pipermail/openstack-announce/2013-June/000109.html

The OpenStack Stable Maintenance team is happy to announce the release 
of the 2013.1.2 stable Grizzly release. We have been busy reviewing and 
accepting backported bugfixes to the stable/grizzly branches. A total of 
80 bugs have been fixed across all core projects.



 OpenStack “I” release naming
 http://markmail.org/message/mz3c7thc4oa5vbfc

The next release cycle for OpenStack, starting in November 2013 after we 
conclude the current release cycle (“Havana”), will be called Icehouse. 
https://launchpad.net/%7Eopenstack/+poll/i-release-naming



 Open Source Sysadmin: Reorganization of the OpenStack
 Infrastructure Docs http://princessleia.com/journal/?p=8101

The OpenStack Infrastructure team is constantly evolving its 
documentation to make it easier for new contributors to join the team. 
Last week documentation for the OpenStack Project Infrastructure 
http://ci.openstack.org/ was reorganised “to re-orient the 
documentation as an introduction for new contributors and a reference 
for all contributors.” All of the CI tools are open source, the puppet 
and other configurations are all hosted in public revision control 
https://github.com/openstack-infra/config and any changes submitted 
are made by the same process all other changes in OpenStack are made 
https://wiki.openstack.org/wiki/GerritWorkflow. They go through 
automated tests in Jenkins to test applicable syntax and other 
formatting and the code changes submitted are reviewed by peers and 
approved by members of the infrastructure team. This has made it super 
easy for the team to collaborate on changes and offer suggestions 
(much better than endless pastebins or sharing a screen session with a 
fellow sysadmin!), plus with all changes in revision control it’s easy 
to track down where things went wrong and revert as necessary.



 Enter OpenStack’s T-shirt Design Contest!
 
http://www.openstack.org/blog/2013/06/enter-openstacks-t-shirt-design-contest/

Show us your creative talent & submit an original design for our 2013 
OpenStack T-shirt Design Contest! The winning design will be announced in the 
last week of August 2013. Details on the OpenStack blog 
http://www.openstack.org/blog/2013/06/enter-openstacks-t-shirt-design-contest/.



 Async I/O and Python
 http://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/

When you’re working on OpenStack, you’ll probably hear a lot of 
references to ‘async I/O’ and how eventlet is the library we use for 
this in OpenStack. But, well … what exactly is this mysterious 
‘asynchronous I/O’ thing? Read it from Mark McLoughlin 
http://blogs.gnome.org/markmc.



 Ceph integration in OpenStack: Grizzly update and roadmap for
 Havana
 
http://sebastien-han.fr/blog/2013/06/03/ceph-integration-in-openstack-grizzly-update-and-roadmap-for-havana/

Sébastien Han http://sebastien-han.fr/ wrote a summary of the sessions 
about Ceph integration with OpenStack. His post contains details about 
upcoming features and a roadmap.



 OpenStack-Docker: How to manage your Linux Containers with Nova
 
http://blog.docker.io/2013/06/openstack-docker-manage-linux-containers-with-nova/

A new approach to manage Linux Containers (LXC) within OpenStack 
Compute. The Docker project released a driver to deploy LXC with Docker, 
with multiple advantages over the “normal” virtual machines usually 
deployed by Nova. Those advantages are speed, efficiency, and 
portability. Details and links to the code on How to manage your Linux 
Containers with Nova 
http://blog.docker.io/2013/06/openstack-docker-manage-linux-containers-with-nova/.



   Tips ‘n Tricks

 * By Adam Young http://adam.younglogic.com/: Keystone test coverage
   http://adam.younglogic.com/2013/06/keystone-test-coverage/
 * By Everett Toews http://blog.phymata.com/: Swift/Cloud Files Cross
   Origin Resource Sharing Container with jclouds
   
http://blog.phymata.com/2013/06/04/swift-cloud-files-cross-origin-resource-sharing-container-with-jclouds/
 * By Aaron Rosen http://blog.aaronorosen.com/: OpenStack Interface
   Hot Plugging
   http://blog.aaronorosen.com/openstack-interface-hot-plugging/


   OpenStack In The Wild

 * Live Person OpenStack Usage Case Study
   http://www.slideshare.net/openstackil/koby-holzer-live-personopenstackstory


   Upcoming Events

 * OpenStack Meetup Chennai
   http://www.meetup.com/Indian-OpenStack-User-Group/events/120677342/ Jun
   08, 2013 – Chennai, India Details
   http://www.meetup.com/Indian-OpenStack-User-Group/events/120677342/
 * OpenStack meeting in Munich
   http://www.meetup.com/openstack-de/events/109700562/ Jun 10, 2013
   – Munich, Germany Details
   http://www.meetup.com/openstack-de/events/109700562/
 * Cloud Expo East 2013 http://www.cloudcomputingexpo.com/ Jun 10 –
   13, 2013 – New York City, NY Details
   

[Openstack] quantum l2 networks

2013-06-07 Thread Joe Breu
Hello,

Is there a way to create a quantum l2 network using OVS that does not have MAC 
and IP anti-spoofing enabled, either in iptables or OVS?  One workaround that we 
found was to set the OVS plugin firewall_driver = 
quantum.agent.firewall.NoopFirewallDriver together with security_group_api=nova, 
however this is far from ideal and doesn't solve the problem of MAC spoof 
filtering at the OVS level.

Thanks for any help




Re: [Openstack] quantum l2 networks

2013-06-07 Thread Aaron Rosen
Hi Joe,

I thought setting firewall_driver =
quantum.agent.firewall.NoopFirewallDriver would do the trick? Also, the ovs
plugin does not do any mac spoof filtering at the OVS level. Those are all
done in iptables.
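For reference, the setting Aaron mentions goes in the OVS plugin config (section name as in the Grizzly sample; verify against your own ini):

```ini
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[SECURITYGROUP]
firewall_driver = quantum.agent.firewall.NoopFirewallDriver
```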

Aaron

On Fri, Jun 7, 2013 at 8:22 PM, Joe Breu joseph.b...@rackspace.com wrote:

 Hello,

 Is there a way to create a quantum l2 network using OVS that does not have
 MAC and IP anti-spoofing enabled, either in iptables or OVS?  One workaround
 that we found was to set the OVS plugin firewall_driver =
 quantum.agent.firewall.NoopFirewallDriver together with security_group_api=nova,
 however this is far from ideal and doesn't solve the problem of MAC spoof
 filtering at the OVS level.

 Thanks for any help




[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #306

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information: BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/306/
Project: precise_havana_nova_trunk
Date of build: Fri, 07 Jun 2013 01:30:52 -0400
Build duration: 34 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health report: build stability 0 (all recent builds failed)
Changes: "Allocate networks in the background" by cbehrens
  (edits nova/network/model.py, nova/tests/network/test_network_info.py,
  nova/compute/manager.py, nova/tests/fake_network.py)

Console output (truncated; dch changelog entries omitted):
sbuild -d precise-havana -n -A nova_2013.2.a1051.g7a475d3+git201306070159~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A',
'nova_2013.2.a1051.g7a475d3+git201306070159~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp
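The traceback above shows the build-package wrapper re-raising (`raise e`) the `subprocess.CalledProcessError` from the failed sbuild call so that the Jenkins "Execute shell" step exits non-zero. A minimal sketch of that pattern, assuming the internals of the real script (which the log does not show):

```python
import subprocess

def run_build_step(cmd):
    """Run one packaging step; on failure, log and re-raise so the
    calling shell step exits non-zero (mirrors the 'raise e' in the
    build-package traceback above -- a sketch, not the bot's code)."""
    try:
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError as e:
        # Matches the ERROR:root:... lines seen in the console output.
        print("ERROR:root:Command %r returned non-zero exit status %d"
              % (e.cmd, e.returncode))
        raise

# The sbuild invocation from the log would be run as, for example:
# run_build_step(["sbuild", "-d", "precise-havana", "-n", "-A", dsc_file])
```

Re-raising (rather than swallowing) the error is what lets Jenkins mark the build step as failed and trigger the failure email.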


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_horizon_trunk #69

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_horizon_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_horizon_trunk/69/
Project: precise_havana_horizon_trunk
Date of build: Fri, 07 Jun 2013 02:00:24 -0400
Build duration: 6 min 23 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
Remove Edit VIP button when there is no VIP (by tmazur)
  edit openstack_dashboard/dashboards/project/loadbalancers/tables.py

Console Output
[...truncated 6519 lines...]
Uploading to ppa (via ftp to ppa.launchpad.net):
  Uploading horizon_2013.2+git201306070201~precise-0ubuntu1.dsc: done.
  Uploading horizon_2013.2+git201306070201~precise.orig.tar.gz: done.
  Uploading horizon_2013.2+git201306070201~precise-0ubuntu1.debian.tar.gz: done.
  Uploading horizon_2013.2+git201306070201~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'horizon_2013.2+git201306070201~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/h/horizon/openstack-dashboard-ubuntu-theme_2013.2+git201306022125~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/h/horizon/openstack-dashboard_2013.2+git201306022125~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/h/horizon/python-django-horizon_2013.2+git201306022125~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/h/horizon/python-django-openstack_2013.2+git201306022125~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 4327be0e078c6963f99b0bb290f6368a125a8d9a
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/horizon/havana /tmp/tmp71KZ22/horizon
mk-build-deps -i -r -t apt-get -y /tmp/tmp71KZ22/horizon/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log c5f968afee65c360225abbd4b118c7d102a1edd3..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201306070201~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [4327be0] Remove "Edit VIP" button when there is no VIP
dch -a [4a8ac74] Make 'Router created' message translatable
dch -a [a3d0e37] Enable most of the pyflakes checks.
dch -a [65c48fc] Adding pagination to the tenant views
dch -a [9429d4d] Add RAM/disk requirements to image details
dch -a [ba8d9c0] Add edit buttons for vip, member and monitor
dch -a [8770b32] Resizing a server by means of changing its flavor
dch -a [9fa7e93] Make 'Creating volume' message translatable
dch -a [b029961] Add availability zone choice to launch instance
dch -a [431404c] When launching instances, clarifies quota text to "X of Y Used"
dch -a [495f404] Fix spelling errors.
dch -a [3eac918] Pop 'password' in user_update v3 if it is left blank
dch -a [cc87269] Adds methods for [] & len into LazyURLPattern
dch -a [bca1ece] Add security group rule templates
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC horizon_2013.2+git201306070201~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A horizon_2013.2+git201306070201~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana horizon_2013.2+git201306070201~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana horizon_2013.2+git201306070201~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
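The snapshot versions in the log above (e.g. `1:2013.2+git201306070201~precise-0ubuntu1`, passed to `dch --newversion`) combine an epoch, the upstream version, a `+git` build timestamp, and the Ubuntu series suffix. A small sketch reconstructing that string; the format is inferred from the log lines, not taken from the bot's source:

```python
from datetime import datetime

def snapshot_version(upstream, series, epoch, when=None):
    """Rebuild the --newversion string seen in the dch commands above,
    e.g. 1:2013.2+git201306070201~precise-0ubuntu1 (format inferred
    from the console output; field names here are illustrative)."""
    when = when or datetime.utcnow()
    stamp = when.strftime("%Y%m%d%H%M")  # minute-resolution build timestamp
    return "%s:%s+git%s~%s-0ubuntu1" % (epoch, upstream, stamp, series)

# Matches the horizon build above (2013-06-07 02:01):
print(snapshot_version("2013.2", "precise", "1", datetime(2013, 6, 7, 2, 1)))
# → 1:2013.2+git201306070201~precise-0ubuntu1
```

Because the timestamp increases monotonically, each rebuild sorts as newer than the previous PPA upload, which is why reprepro can safely delete the older pool files afterwards.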


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_grizzly_glance_trunk #302

2013-06-07 Thread openstack-testing-bot
Title: precise_grizzly_glance_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_glance_trunk/302/
Project: precise_grizzly_glance_trunk
Date of build: Fri, 07 Jun 2013 07:00:22 -0400
Build duration: 11 min
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
Encode headers and params (by flaper87)
  edit glance/common/client.py
  add glance/openstack/common/strutils.py
  edit openstack-common.conf
  add glance/tests/unit/common/test_client.py
  edit glance/openstack/common/gettextutils.py

Console Output
[...truncated 6305 lines...]
gpg: Signature made Fri Jun  7 07:02:15 2013 EDT using RSA key ID 9935ACDC
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "
Checking signature on .changes
Good signature on /tmp/tmpRa5qb2/glance_2013.1+git201306070700~precise-0ubuntu1_source.changes.
Checking signature on .dsc
Good signature on /tmp/tmpRa5qb2/glance_2013.1+git201306070700~precise-0ubuntu1.dsc.
Uploading to ppa (via ftp to ppa.launchpad.net):
  Uploading glance_2013.1+git201306070700~precise-0ubuntu1.dsc: done.
  Uploading glance_2013.1+git201306070700~precise.orig.tar.gz: done.
  Uploading glance_2013.1+git201306070700~precise-0ubuntu1.debian.tar.gz: done.
  Uploading glance_2013.1+git201306070700~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'glance_2013.1+git201306070700~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-grizzly/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/g/glance/glance-api_2013.1+git201305300730~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance-common_2013.1+git201305300730~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance-registry_2013.1+git201305300730~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance_2013.1+git201305300730~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/python-glance-doc_2013.1+git201305300730~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/python-glance_2013.1+git201305300730~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 272982c0d0c18443323e26475d54a8203f7ecec7
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/glance/grizzly /tmp/tmpRa5qb2/glance
mk-build-deps -i -r -t apt-get -y /tmp/tmpRa5qb2/glance/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 4b1f8039ad26ec9a193feb7689c4440b553113b8..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.1+git201306070700~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [272982c] Bump stable/grizzly next version to 2013.1.3
dch -a [580eb6f] Encode headers and params
dch -a [ff8c8e8] Adding help text to the options that did not have it.
dch -a [4655685] Call os.kill for each child instead of the process group
dch -a [9ce21bd] Call monkey_patch before other modules are loaded
dch -a [0c98014] Don't raise HTTPForbidden on a multitenant environment
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC glance_2013.1+git201306070700~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A glance_2013.1+git201306070700~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing glance_2013.1+git201306070700~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly glance_2013.1+git201306070700~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_python-quantumclient_trunk #26

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_python-quantumclient_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_python-quantumclient_trunk/26/
Project: precise_havana_python-quantumclient_trunk
Date of build: Fri, 07 Jun 2013 08:01:21 -0400
Build duration: 1 min 34 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
No Changes

Console Output
[...truncated 918 lines...]
Distribution: precise-havana
Fail-Stage: install-deps
Host Architecture: amd64
Install-Time: 0
Job: python-quantumclient_2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1.dsc
Machine Architecture: amd64
Package: python-quantumclient
Package-Time: 0
Source-Version: 2:2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1
Space: 0
Status: failed
Version: 2:2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1
Finished at 20130607-0802
Build needed 00:00:00, 0k disc space
E: Package build dependencies not satisfied; skipping
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'python-quantumclient_2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1.dsc']' returned non-zero exit status 3
ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'python-quantumclient_2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1.dsc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-quantumclient/havana /tmp/tmpB4Gxg_/python-quantumclient
mk-build-deps -i -r -t apt-get -y /tmp/tmpB4Gxg_/python-quantumclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
dch -b -D precise --newversion 2:2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a No change rebuild.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-quantumclient_2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A python-quantumclient_2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'python-quantumclient_2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'python-quantumclient_2.2.2a.4.g245c616+git201306070801~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #307

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/307/
Project: precise_havana_nova_trunk
Date of build: Fri, 07 Jun 2013 08:38:43 -0400
Build duration: 4 min 55 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
No Changes

Console Output
[...truncated 5350 lines...]
dch -a [502b672] Removes unnecessary check for admin context in evacuate.
dch -a [4a28450] Fix zookeeper import and tests
dch -a [8b79ac2] Make sure that hypervisor nodename is set correctly in FakeDriver
dch -a [ac9cc15] Optimize db.instance_floating_address_get_all method
dch -a [2e35d71] Session cleanup for db.floating_ip_* methods
dch -a [0f56d8d] Optimize instance queries in compute manager
dch -a [a419581] Remove duplicate gettext.install() calls
dch -a [dd66f23] Include list of attached volumes with instance info
dch -a [46ce2e3] Catch volume create exception
dch -a [6cfc025] Fixes KeyError bug with network api associate
dch -a [b7f9940] Add unitests for VMware vif, and fix code logical error.
dch -a [f53df8c] Fix format error in claims.
dch -a [eb8e070] Fixes mock calls in Hyper-V test method
dch -a [d50d69c] Adds instance root disk size checks during resize
dch -a [3222d8b] Rename nova.compute.instance_types to flavors
dch -a [6be8577] Convert to using newly imported processutils.
dch -a [1971856] Import new additions to oslo's processutils.
dch -a [dfd1e5e] Imported Translations from Transifex
dch -a [e507094] Enable live block migration when using iSCSI volumes
dch -a [cbe8626] Transition from openstack.common.setup to pbr.
dch -a [5a89fe1] Remove security_group_handler
dch -a [37b4ae3] Add cpuset attr to vcpu conf in libvirt xml
dch -a [0098f12] libvirt: ignore NOSTATE in resume_state_on_host_boot() method.
dch -a [85359a2] Add an index to compute_node_stats
dch -a [d819f86] Copy the RHEL6 eventlet workaround from Oslo
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
INFO:root:Destroying schroot.
debsign -k9935ACDC nova_2013.2.a1051.g7a475d3+git201306070839~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A nova_2013.2.a1051.g7a475d3+git201306070839~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1051.g7a475d3+git201306070839~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1051.g7a475d3+git201306070839~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #308

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/308/
Project: precise_havana_nova_trunk
Date of build: Fri, 07 Jun 2013 08:45:16 -0400
Build duration: 4 min 40 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. Score: 0

Changes
No Changes

Console Output
[...truncated 5339 lines...]
dch -a [502b672] Removes unnecessary check for admin context in evacuate.
dch -a [4a28450] Fix zookeeper import and tests
dch -a [8b79ac2] Make sure that hypervisor nodename is set correctly in FakeDriver
dch -a [ac9cc15] Optimize db.instance_floating_address_get_all method
dch -a [2e35d71] Session cleanup for db.floating_ip_* methods
dch -a [0f56d8d] Optimize instance queries in compute manager
dch -a [a419581] Remove duplicate gettext.install() calls
dch -a [dd66f23] Include list of attached volumes with instance info
dch -a [46ce2e3] Catch volume create exception
dch -a [6cfc025] Fixes KeyError bug with network api associate
dch -a [b7f9940] Add unitests for VMware vif, and fix code logical error.
dch -a [f53df8c] Fix format error in claims.
dch -a [eb8e070] Fixes mock calls in Hyper-V test method
dch -a [d50d69c] Adds instance root disk size checks during resize
dch -a [3222d8b] Rename nova.compute.instance_types to flavors
dch -a [6be8577] Convert to using newly imported processutils.
dch -a [1971856] Import new additions to oslo's processutils.
dch -a [dfd1e5e] Imported Translations from Transifex
dch -a [e507094] Enable live block migration when using iSCSI volumes
dch -a [cbe8626] Transition from openstack.common.setup to pbr.
dch -a [5a89fe1] Remove security_group_handler
dch -a [37b4ae3] Add cpuset attr to vcpu conf in libvirt xml
dch -a [0098f12] libvirt: ignore NOSTATE in resume_state_on_host_boot() method.
dch -a [85359a2] Add an index to compute_node_stats
dch -a [d819f86] Copy the RHEL6 eventlet workaround from Oslo
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
INFO:root:Destroying schroot.
debsign -k9935ACDC nova_2013.2.a1051.g7a475d3+git201306070846~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A nova_2013.2.a1051.g7a475d3+git201306070846~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1051.g7a475d3+git201306070846~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1051.g7a475d3+git201306070846~precise-0ubuntu1.dsc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_nova_trunk #309

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/309/
Project: precise_havana_nova_trunk
Date of build: Fri, 07 Jun 2013 08:52:54 -0400
Build duration: 10 min
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
No Changes

Console Output
[...truncated 18448 lines...]
dch -a [6a98b56] Fixes typo in server-evacuate-req.xml
dch -a [aeef5c3] Fix variable referenced before assginment in vmwareapi code.
dch -a [188a94c] Remove invalid block_device_mapping volume_size of ''
dch -a [91516ed] Architecture property updated in snapshot libvirt
dch -a [8f965ea] Add sqlalchemy migration utils.create_shadow_table method
dch -a [a9b8fbc] Add sqlalchemy migration utils.check_shadow_table method
dch -a [3728018] Change type of cells.deleted from boolean to integer.
dch -a [07a8213] Pass None to image if booted from volume in live migration
dch -a [662a793] Raise InstanceInvalidState for double hard reboot
dch -a [05f01d2] Removes duplicate assertEqual
dch -a [58d6879] Remove insecure default for signing_dir option.
dch -a [502b672] Removes unnecessary check for admin context in evacuate.
dch -a [4a28450] Fix zookeeper import and tests
dch -a [8b79ac2] Make sure that hypervisor nodename is set correctly in FakeDriver
dch -a [ac9cc15] Optimize db.instance_floating_address_get_all method
dch -a [2e35d71] Session cleanup for db.floating_ip_* methods
dch -a [0f56d8d] Optimize instance queries in compute manager
dch -a [a419581] Remove duplicate gettext.install() calls
dch -a [dd66f23] Include list of attached volumes with instance info
dch -a [46ce2e3] Catch volume create exception
dch -a [6cfc025] Fixes KeyError bug with network api associate
dch -a [b7f9940] Add unitests for VMware vif, and fix code logical error.
dch -a [f53df8c] Fix format error in claims.
dch -a [eb8e070] Fixes mock calls in Hyper-V test method
dch -a [d50d69c] Adds instance root disk size checks during resize
dch -a [3222d8b] Rename nova.compute.instance_types to flavors
dch -a [6be8577] Convert to using newly imported processutils.
dch -a [1971856] Import new additions to oslo's processutils.
dch -a [dfd1e5e] Imported Translations from Transifex
dch -a [e507094] Enable live block migration when using iSCSI volumes
dch -a [cbe8626] Transition from openstack.common.setup to pbr.
dch -a [5a89fe1] Remove security_group_handler
dch -a [37b4ae3] Add cpuset attr to vcpu conf in libvirt xml
dch -a [0098f12] libvirt: ignore NOSTATE in resume_state_on_host_boot() method.
dch -a [85359a2] Add an index to compute_node_stats
dch -a [d819f86] Copy the RHEL6 eventlet workaround from Oslo
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
INFO:root:Destroying schroot.
debsign -k9935ACDC nova_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A nova_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana nova_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana nova_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_amd64.changes
+ [ 0 != 0 ]
+ jenkins-cli build -p pipeline_parameters=pipeline_parameters -p PARENT_BUILD_TAG=jenkins-precise_havana_nova_trunk-309 pipeline_runner
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_python-quantumclient_trunk #27

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_python-quantumclient_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_python-quantumclient_trunk/27/
Project: precise_havana_python-quantumclient_trunk
Date of build: Fri, 07 Jun 2013 09:03:44 -0400
Build duration: 1 min 37 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
No Changes

Console Output
[...truncated 1541 lines...]
Source-Version: 2:2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1
Space: 1984
Status: successful
Version: 2:2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1
Finished at 20130607-0905
Build needed 00:00:32, 1984k disc space
INFO:root:Uploading package to ppa:openstack-ubuntu-testing/havana
DEBUG:root:['dput', 'ppa:openstack-ubuntu-testing/havana', 'python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1_source.changes']
gpg: Signature made Fri Jun  7 09:04:43 2013 EDT using RSA key ID 9935ACDC
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) <ja...@shingle-house.org.uk>"
gpg: Signature made Fri Jun  7 09:04:43 2013 EDT using RSA key ID 9935ACDC
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) <ja...@shingle-house.org.uk>"
Checking signature on .changes
Good signature on /tmp/tmprqUaQi/python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1_source.changes.
Checking signature on .dsc
Good signature on /tmp/tmprqUaQi/python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1.dsc.
Uploading to ppa (via ftp to ppa.launchpad.net):
  Uploading python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1.dsc: done.
  Uploading python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise.orig.tar.gz: done.
  Uploading python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1.debian.tar.gz: done.
  Uploading python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/p/python-quantumclient/python-quantumclient_2.2.2a.2.gfc94431+git201306022113~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 92811c76f3a8308b36f81e61451ec17d227b453b
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-quantumclient/havana /tmp/tmprqUaQi/python-quantumclient
mk-build-deps -i -r -t apt-get -y /tmp/tmprqUaQi/python-quantumclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
dch -b -D precise --newversion 2:2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a No change rebuild.
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana python-quantumclient_2.2.2a.4.g245c616+git201306070903~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_ceilometer_trunk #108

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_ceilometer_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_ceilometer_trunk/108/
Project: precise_havana_ceilometer_trunk
Date of build: Fri, 07 Jun 2013 09:06:43 -0400
Build duration: 4 min 21 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
No Changes

Console Output
[...truncated 12604 lines...]
dch -a [07828e3] Fix meter_publisher in setup.cfg
dch -a [2fb7309] Use flake8 instead of pep8
dch -a [5664137] Imported Translations from Transifex
dch -a [901eab8] Use sqlalchemy session code from oslo.
dch -a [b8bbe8c] Switch to pbr.
INFO:root:Destroying schroot.
dch -a [1d4533e] fix the broken ceilometer.conf.sample link
dch -a [2920a88] Add a direct Ceilometer notifier
dch -a [d88a309] Do the same auth checks in the v2 API as in the v1 API
dch -a [2c84007] Add the sqlalchemy implementation of the alarms collection.
dch -a [c906536] Allow posting samples via the rest API (v2)
dch -a [1c14bcd] Updated the ceilometer.conf.sample.
dch -a [578e6fa] Don't use trivial alarm_id's like "1" in the test cases.
dch -a [a60fc5b] Fix the nova notifier tests after a nova rename
dch -a [018e39d] Document HBase configuration
dch -a [0fdf53d] alarm: fix MongoDB alarm id
dch -a [53172bc] Use jsonutils instead of json in test/api.py
dch -a [8629e09] Connect the Alarm API to the db
dch -a [896015c] Add the mongo implementation of alarms collection
dch -a [43d728c] Move meter signature computing into meter_publish
dch -a [bcb8236] Update WSME dependency
dch -a [0c45387] Imported Translations from Transifex
dch -a [c1b7161] Add Alarm DB API and models
dch -a [9518813] Imported Translations from Transifex
dch -a [d764f8c] Remove "extras" again
dch -a [89ab2f8] add links to return values from API methods
dch -a [82ad299] Modify limitation on request version
dch -a [f90b36d] Doc improvements
dch -a [92905c9] Rename EventFilter to SampleFilter.
dch -a [39d9ca7] Fixes AttributeError of FloatingIPPollster
dch -a [0d5c271] Add just the most minimal alarm API
dch -a [5cb2f9c] Update oslo before bringing in exceptions
dch -a [4fb7650] Enumerate the meter type in the API Meter class
dch -a [ca971ff] Remove "extras" as it is not used
dch -a [6979b16] Adds examples of CLI and API queries to the V2 documentation.
dch -a [1828143] update the ceilometer.conf.sample
dch -a [8bcc377] Set hbase table_prefix default to None
dch -a [af2704e] glance/cinder/quantum counter units are not accurate/consistent
dch -a [6cb0eb9] Add some recommendations about database
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC ceilometer_2013.2+git201306070906~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A ceilometer_2013.2+git201306070906~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana ceilometer_2013.2+git201306070906~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana ceilometer_2013.2+git201306070906~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp
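Every notification in this archive walks the same packaging pipeline: branch the packaging, build a source tarball, fold new commits into the Debian changelog, source-build, sign, sbuild, then upload and publish with reprepro. The sketch below reconstructs that sequence from the command logs as a dry run; the `pipeline_commands`/`run_pipeline` names and the `/tmp/build` path are assumptions for illustration, not the real build-package tool.

```python
import subprocess

# Hypothetical reconstruction of the command sequence these logs show for
# every package; the real build-package internals are not in the log.
def pipeline_commands(pkg, series, branch, version):
    """Return the build/publish steps as argv lists, in log order."""
    changes = "%s_%s_source.changes" % (pkg, version)
    return [
        ["bzr", "branch", branch, "/tmp/build/%s" % pkg],
        ["mk-build-deps", "-i", "-r", "-t", "apt-get", "-y",
         "/tmp/build/%s/debian/control" % pkg],
        ["python", "setup.py", "sdist"],
        ["dch", "-b", "-D", series, "--newversion", "1:%s" % version,
         "Automated Ubuntu testing build:"],
        ["debcommit"],
        ["bzr", "builddeb", "-S", "--", "-sa", "-us", "-uc"],
        ["debsign", "-k9935ACDC", changes],
        ["sbuild", "-d", "%s-havana" % series, "-n", "-A",
         "%s_%s.dsc" % (pkg, version)],
        ["dput", "ppa:openstack-ubuntu-testing/havana", changes],
        ["reprepro", "--waitforlock", "10", "-Vb", "/var/lib/jenkins/www/apt",
         "include", "%s-havana" % series,
         "%s_%s_amd64.changes" % (pkg, version)],
    ]

def run_pipeline(commands, dry_run=True):
    for cmd in commands:
        if dry_run:                      # print each step instead of executing
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)   # any failing step aborts the run

cmds = pipeline_commands("ceilometer", "precise",
                         "lp:~openstack-ubuntu-testing/ceilometer/havana",
                         "2013.2+git201306070906~precise-0ubuntu1")
run_pipeline(cmds)  # dry run: only prints the steps
```

Run for real (`dry_run=False`), any non-zero exit raises `CalledProcessError`, which is exactly the failure mode visible in the FAILURE mails further down.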


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_python-cinderclient_trunk #25

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_python-cinderclient_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_python-cinderclient_trunk/25/
Project: precise_havana_python-cinderclient_trunk
Date of build: Fri, 07 Jun 2013 09:11:32 -0400
Build duration: 1 min 56 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
No Changes

Console Output
[...truncated 2046 lines...]
  Uploading python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise.orig.tar.gz: done.
  Uploading python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1.debian.tar.gz: done.
  Uploading python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/p/python-cinderclient/python-cinderclient_1.0.4.6.g8044dc7+git201305161430~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 035ba87c4a6ed491285fd0e232fd0c1153542a48
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-cinderclient/havana /tmp/tmpQ2Go8U/python-cinderclient
mk-build-deps -i -r -t apt-get -y /tmp/tmpQ2Go8U/python-cinderclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 7c43330c3fd36b1c95c8dae5d122f9d06b5f6c4a..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [035ba87] python3: Introduce py33 to tox.ini
dch -a [2e58e73] Update run_tests and bring back colorizer.
dch -a [bde6efb] Set the correct location for the tests.
dch -a [2a446c5] Only add logging handlers if there currently aren't any
dch -a [bf1ce84] Move tests into cinderclient package.
dch -a [c82a811] Rename requires files to standard names.
dch -a [aa28083] Migrate to pbr.
dch -a [7c71fd3] Make ManagerWithFind abstract and fix its descendants
dch -a [24b4039] Migrate to flake8.
dch -a [95e142a] Allow generator as input to utils.print_list.
dch -a [a122a76] Fixed do_create() in v2 shell.
dch -a [2ed5cdc] Add license information.
dch -a [f2835f4] Update release info in index.rst.
dch -a [eaa0417] Update setup.py prior to next upload to pypi.
dch -a [cc8dd55] Add support for volume backups
dch -a [c476311] Fixed unit test name in v1 and v2 tests
dch -a [0b78153] Don't print the empty table on list operations.
dch -a [dca8dbd] Sync with oslo-incubator copy of setup.py and version.py
dch -a [fd3351f] Minor typo/message fixes
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana python-cinderclient_1.0.4.20.g93557c1+git201306070911~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 
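The long runs of `dch -a [hash] subject` seen in these console logs are mechanical: the bot asks git for one `[%h] %s` line per new commit (`git log <last-built>..HEAD --no-merges --pretty=format:[%h] %s`) and appends each line to the Debian changelog. A rough sketch of that loop; `changelog_commands` is a hypothetical name, and a canned log string stands in for the real git call so the sketch runs anywhere:

```python
# One changelog entry per "[%h] %s" line of git log output.
def changelog_commands(git_log_output):
    """Turn '[%h] %s' lines into the dch invocations the bot would run."""
    commands = []
    for line in git_log_output.splitlines():
        line = line.strip()
        if line:                      # skip blank lines in the log output
            commands.append(["dch", "-a", line])
    return commands

sample_log = """[035ba87] python3: Introduce py33 to tox.ini
[2e58e73] Update run_tests and bring back colorizer."""

for cmd in changelog_commands(sample_log):
    print(" ".join(cmd))
```

Since `dch -a` adds an item to the topmost changelog entry, the commits end up listed together under the single `dch -b --newversion ...` stanza created at the start of the build.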


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_python-novaclient_trunk #47

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_python-novaclient_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_python-novaclient_trunk/47/
Project: precise_havana_python-novaclient_trunk
Date of build: Fri, 07 Jun 2013 09:13:43 -0400
Build duration: 1 min 49 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
No Changes

Console Output
[...truncated 1948 lines...]
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-novaclient_2.13.0.69.gf67c5e0+git201306070913~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/p/python-novaclient/python-novaclient_2.13.0.38.g64e43fd+git201305150730~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 8820623d2a32c0a314c811ad58e974c8417df225
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/python-novaclient/havana /tmp/tmpVidn8J/python-novaclient
mk-build-deps -i -r -t apt-get -y /tmp/tmpVidn8J/python-novaclient/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 3c97f768e5e175c1f2217be9d976fee8cbdca58b..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2.13.0.69.gf67c5e0+git201306070913~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [8820623] python3: Introduce py33 to tox.ini
dch -a [67c8055] Start using Hacking and PyFlakes
dch -a [def5df2] Fix shell tests for older prettytable versions.
dch -a [37da28c] Provide nova CLI man page.
dch -a [a8ed2f2] Improve error messages for invalid --nic / --file.
dch -a [ff85bd4] 100% test coverage for security groups and rules
dch -a [f2559c4] Add MethodNotAllowed and Conflict exception classes
dch -a [c34c371] Move tests into the novaclient package.
dch -a [51f0596] Add CONTRIBUTING file.
dch -a [3bbdcda] Rename requires files to standard names.
dch -a [1a0b7b0] Code cleanup in advance of flake8.
dch -a [9d4db6f] Migrate to flake8.
dch -a [c305a45] Revert "Support force update quota"
dch -a [e8e7a0e] Only add logging handlers if there currently aren't any
dch -a [d274077] Convert to more modern openstack-common.conf format.
dch -a [ecbf770] Cleanup unused local variables
dch -a [bc0ad1c] Reuse oslo for is_uuid_like() implementation
dch -a [20a3595] Synchronize code from oslo
dch -a [f08ac04] Migrate to pbr.
dch -a [b1802a5] Cleanup nova subcommands for security groups and rules
dch -a [c9fc9b5] Make ManagerWithFind abstract and fix its descendants
dch -a [e745b46] Fix for --bridge-interface being ignore by nova network-create
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC python-novaclient_2.13.0.69.gf67c5e0+git201306070913~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A python-novaclient_2.13.0.69.gf67c5e0+git201306070913~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana python-novaclient_2.13.0.69.gf67c5e0+git201306070913~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana python-novaclient_2.13.0.69.gf67c5e0+git201306070913~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_quantum_trunk #179

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/179/
Project: precise_havana_quantum_trunk
Date of build: Fri, 07 Jun 2013 09:16:03 -0400
Build duration: 8 min 15 sec
Build cause: Started by user Chuck Short
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
No Changes

Console Output
[...truncated 20079 lines...]
dch -a [b28dc10] Docstrings formatted according to pep257
dch -a [85ffc01] Docstrings formatted according to pep257
dch -a [c7afeda] Use Query instances as iterables when possible
dch -a [07299e3] Imported Translations from Transifex
dch -a [d9c95e4] Add tests for LinuxBridge and OVS agents
dch -a [b15570f] Imported Translations from Transifex
dch -a [7dddca7] Fix logic issue in OVSQuantumAgent.port_unbound method
dch -a [fb927db] Imported Translations from Transifex
dch -a [f83c785] Imported Translations from Transifex
dch -a [ea9aeb6] Simplify NVP plugin configuration
dch -a [152f3cf] Create veth peer in namespace.
dch -a [9c21592] Imported Translations from Transifex
dch -a [01a977b] Send 400 error if device specification contains unexpected attributes
dch -a [62017cd] Imported Translations from Transifex
dch -a [26b98b7] lbaas: check object state before update for pools, members, health monitors
dch -a [49c1c98] Metadata agent: reuse authentication info across eventlet threads
dch -a [11639a2] Imported Translations from Transifex
dch -a [35988f1] Make the 'admin' role configurable
dch -a [ee50162] Simplify delete_health_monitor() using cascades
dch -a [765baf8] Imported Translations from Transifex
dch -a [15a1445] Update latest OSLO code
dch -a [343ca18] Imported Translations from Transifex
dch -a [c117074] Remove locals() from strings substitutions
dch -a [fb66e24] Imported Translations from Transifex
dch -a [e001a8d] Add string 'quantum'/ version to scope/tag in NVP
dch -a [5896322] Changed DHCPV6_PORT from 467 to 547, the correct port for DHCPv6.
dch -a [80ffdde] Imported Translations from Transifex
dch -a [929cbab] Imported Translations from Transifex
dch -a [2a24058] Imported Translations from Transifex
dch -a [b6f0f68] Imported Translations from Transifex
dch -a [1e1c513] Imported Translations from Transifex
dch -a [6bbcc38] Imported Translations from Transifex
dch -a [bd702cb] Imported Translations from Transifex
INFO:root:Destroying schroot.
dch -a [a13295b] Enable automatic validation of many HACKING rules.
dch -a [91bed75] Ensure unit tests work with all interface types
dch -a [0446eac] Shorten the path of the nicira nvp plugin.
dch -a [8354133] Implement LB plugin delete_pool_health_monitor().
dch -a [147038a] Parallelize quantum unit testing:
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC quantum_2013.2+git201306070916~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A quantum_2013.2+git201306070916~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana quantum_2013.2+git201306070916~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana quantum_2013.2+git201306070916~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 


[Openstack-ubuntu-testing-notifications] Build Failure: precise_havana_nova_trunk #310

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/310/
Project: precise_havana_nova_trunk
Date of build: Fri, 07 Jun 2013 13:00:23 -0400
Build duration: 11 min
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
Replace openstack-common with oslo in HACKING.rst (by thomasbechtold)
  edit HACKING.rst

Console Output
[...truncated 9138 lines...]
Job: nova_2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1.dsc
Machine Architecture: amd64
Package: nova
Package-Time: 464
Source-Version: 1:2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1
Space: 85852
Status: attempted
Version: 1:2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1
Finished at 20130607-1311
Build needed 00:07:44, 85852k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/havana /tmp/tmp4AwZ2A/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmp4AwZ2A/nova/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log f7c35a2956a360c2f6f7c045784f1d03b352540f..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [3100abb] Replace openstack-common with oslo in HACKING.rst
dch -a [7d26848] Keypair API test cleanup
dch -a [7e2fe38] Alphabetize v3 API extension entry point list
dch -a [3a89638] Allocate networks in the background
dch -a [edcf4ec] Add x-compute-request-id header when no response body
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A nova_2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2.a1052.g3100abb+git201306071301~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
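The traceback in this failure mail shows the propagation path: the `sbuild` step exits non-zero, build-package re-raises the `subprocess.CalledProcessError`, the uncaught exception fails the 'Execute shell' build step, and Jenkins sends the Failure mail. A simplified sketch of that behaviour; the `run_cmd` helper is hypothetical, since the real script's internals are not in the log:

```python
import subprocess

def run_cmd(cmd):
    """Run one pipeline step; re-raise on failure so the caller aborts.

    Mirrors the pattern visible in the traceback: the error is logged,
    then the exception escapes and fails the Jenkins shell step.
    """
    try:
        subprocess.check_call(cmd)
    except subprocess.CalledProcessError as e:
        # The log shows the command echoed by ERROR logging before the
        # uncaught exception terminates the process with a traceback.
        print("ERROR:root:Command %r returned non-zero exit status %d"
              % (cmd, e.returncode))
        raise

run_cmd(["true"])        # a succeeding step returns quietly
try:
    run_cmd(["false"])   # a failing step (e.g. sbuild) propagates
except subprocess.CalledProcessError:
    pass
```

Because the exception is re-raised rather than swallowed, the process exits non-zero, which is what flips the Jenkins build status to FAILURE.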


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_nova_trunk #311

2013-06-07 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/311/
Project: precise_havana_nova_trunk
Date of build: Fri, 07 Jun 2013 16:00:24 -0400
Build duration: 10 min
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
Rename unique constraints due to new convention. (by vsergeyev)
  edit nova/tests/utils.py
  edit nova/tests/db/test_migrations.py
  edit nova/tests/db/test_db_api.py
  edit nova/tests/virt/libvirt/test_libvirt.py
  add nova/db/sqlalchemy/migrate_repo/versions/185_rename_unique_constraints.py
  edit nova/db/sqlalchemy/models.py
  edit nova/openstack/common/db/sqlalchemy/session.py

Console Output
[...truncated 17391 lines...]
deleting and forgetting pool/main/n/nova/nova-common_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-kvm_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-lxc_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-qemu_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-uml_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-xcp_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute-xen_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-compute_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-conductor_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-console_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-consoleauth_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-doc_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-network_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-novncproxy_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-objectstore_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-scheduler_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-spiceproxy_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-volume_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xcp-network_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xcp-plugins_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/nova-xvpvncproxy_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/n/nova/python-nova_2013.2.a1051.g7a475d3+git201306070853~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 64ce647003b110771331d3daf92980729bd3988e
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/havana /tmp/tmpySBB4N/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmpySBB4N/nova/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log f7c35a2956a360c2f6f7c045784f1d03b352540f..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2.a1054.g08d6c1d+git201306071601~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [64ce647] Rename unique constraints due to new convention.
dch -a [3100abb] Replace openstack-common with oslo in HACKING.rst
dch -a [7d26848] Keypair API test cleanup
dch -a [7e2fe38] Alphabetize v3 API extension entry point list
dch -a [3a89638] Allocate networks in the background
dch -a [edcf4ec] Add x-compute-request-id header when no response body
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC nova_2013.2.a1054.g08d6c1d+git201306071601~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A nova_2013.2.a1054.g08d6c1d+git201306071601~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana nova_2013.2.a1054.g08d6c1d+git201306071601~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana nova_2013.2.a1054.g08d6c1d+git201306071601~precise-0ubuntu1_amd64.changes
+ [ 0 != 0 ]
+ jenkins-cli build -p pipeline_parameters=pipeline_parameters -p PARENT_BUILD_TAG=jenkins-precise_havana_nova_trunk-311 pipeline_runner
Email was triggered for: 

[Openstack-ubuntu-testing-notifications] Build Failure: raring_grizzly_glance_stable #316

2013-06-07 Thread openstack-testing-bot
Title: raring_grizzly_glance_stable
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_glance_stable/316/
Project: raring_grizzly_glance_stable
Date of build: Fri, 07 Jun 2013 20:34:39 -0400
Build duration: 7.4 sec
Build cause: Started by user Adam Gandelman
Built on: pkg-builder

Health Report
Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
No Changes

Console Output
Started by user Adam Gandelman
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/raring_grizzly_glance_stable
Checkout:raring_grizzly_glance_stable / /var/lib/jenkins/slave/workspace/raring_grizzly_glance_stable - hudson.remoting.Channel@6ccd48:pkg-builder
Using strategy: Default
Last Built Revision: Revision a5dda27f3b797abe9acac4eceb091c3d48cfc0e7 (remotes/origin/stable/grizzly)
Checkout:glance / /var/lib/jenkins/slave/workspace/raring_grizzly_glance_stable/glance - hudson.remoting.LocalChannel@1f2f2a8f
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/glance.git
Commencing build of Revision a5dda27f3b797abe9acac4eceb091c3d48cfc0e7 (remotes/origin/stable/grizzly)
Checking out Revision a5dda27f3b797abe9acac4eceb091c3d48cfc0e7 (remotes/origin/stable/grizzly)
No emails were triggered.
[raring_grizzly_glance_stable] $ /bin/sh -xe /tmp/hudson8461890517879925569.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package -j -D
ERROR:root:Could not find config file at /var/lib/jenkins/tools/openstack-ubuntu-testing/etc/builds/config-raring-grizzly-stable.yaml.
DEBUG:root:Using parameters derived from environment:
branch: false
config: /var/lib/jenkins/tools/openstack-ubuntu-testing/etc/builds/config-raring-grizzly-stable.yaml
debug: true
dest: /var/lib/jenkins/slave/workspace/raring_grizzly_glance_stable
jenkins: true
project: glance
release: raring
snapshot_repo: false
source: /var/lib/jenkins/slave/workspace/raring_grizzly_glance_stable/glance
ERROR:root:Unable to find configuration file (/var/lib/jenkins/tools/openstack-ubuntu-testing/etc/builds/config-raring-grizzly-stable.yaml)
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 


[Openstack-ubuntu-testing-notifications] Build Failure: saucy_havana_cinder_trunk #99

2013-06-07 Thread openstack-testing-bot
Title: saucy_havana_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/saucy_havana_cinder_trunk/99/
Project: saucy_havana_cinder_trunk
Date of build: Fri, 07 Jun 2013 20:30:31 -0400
Build duration: 6 min 20 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
Add missing tests for backup_* methods (by yzveryanskyy)
  edit cinder/tests/test_db_api.py

Console Output
[...truncated 5285 lines...]
Host Architecture: amd64
Install-Time: 77
Job: cinder_2013.2+git201306072030~saucy-0ubuntu1.dsc
Machine Architecture: amd64
Package: cinder
Package-Time: 153
Source-Version: 1:2013.2+git201306072030~saucy-0ubuntu1
Space: 26660
Status: attempted
Version: 1:2013.2+git201306072030~saucy-0ubuntu1
Finished at 20130607-2036
Build needed 00:02:33, 26660k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'cinder_2013.2+git201306072030~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'cinder_2013.2+git201306072030~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/havana /tmp/tmpkgflE6/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmpkgflE6/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log b7ceb409ecac6741cd97e664167162087a415904..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D saucy --newversion 1:2013.2+git201306072030~saucy-0ubuntu1 Automated Ubuntu testing build:
dch -a [4615161] Add missing tests for backup_* methods
dch -a [8d7703c] Unset all stubs before running other cleanups.
dch -a [34e649e] Add missing tests for iscsi_* methods
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC cinder_2013.2+git201306072030~saucy-0ubuntu1_source.changes
sbuild -d saucy-havana -n -A cinder_2013.2+git201306072030~saucy-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'cinder_2013.2+git201306072030~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'saucy-havana', '-n', '-A', 'cinder_2013.2+git201306072030~saucy-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 


[Openstack-ubuntu-testing-notifications] Build Fixed: raring_grizzly_glance_stable #317

2013-06-07 Thread openstack-testing-bot
Title: raring_grizzly_glance_stable
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_glance_stable/317/
Project: raring_grizzly_glance_stable
Date of build: Fri, 07 Jun 2013 20:36:08 -0400
Build duration: 11 min
Build cause: Started by user Adam Gandelman
Built on: pkg-builder

Health Report
Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
No Changes

Console Output
[...truncated 7022 lines...]
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "
gpg: Signature made Fri Jun  7 20:39:27 2013 EDT using RSA key ID 9935ACDC
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) "
Checking signature on .changes
Good signature on /tmp/tmpJRiK6k/glance_2013.1+git201306072036~raring-0ubuntu1_source.changes.
Checking signature on .dsc
Good signature on /tmp/tmpJRiK6k/glance_2013.1+git201306072036~raring-0ubuntu1.dsc.
Uploading to ppa (via ftp to ppa.launchpad.net):
  Uploading glance_2013.1+git201306072036~raring-0ubuntu1.dsc: done.
  Uploading glance_2013.1+git201306072036~raring.orig.tar.gz: done.
  Uploading glance_2013.1+git201306072036~raring-0ubuntu1.debian.tar.gz: done.
  Uploading glance_2013.1+git201306072036~raring-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'raring-grizzly', 'glance_2013.1+git201306072036~raring-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/raring-grizzly/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/raring-grizzly/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/g/glance/glance-api_2013.1+git201306070700~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance-common_2013.1+git201306070700~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance-registry_2013.1+git201306070700~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance_2013.1+git201306070700~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/python-glance-doc_2013.1+git201306070700~raring-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/python-glance_2013.1+git201306070700~raring-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 272982c0d0c18443323e26475d54a8203f7ecec7
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/glance/grizzly /tmp/tmpJRiK6k/glance
mk-build-deps -i -r -t apt-get -y /tmp/tmpJRiK6k/glance/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log -n5 --no-merges --pretty=format:[%h] %s
dch -b -D raring --newversion 1:2013.1+git201306072036~raring-0ubuntu1 Automated Ubuntu testing build:
dch -a [272982c] Bump stable/grizzly next version to 2013.1.3
dch -a [580eb6f] Encode headers and params
dch -a [4b1f803] Bump stable/grizzly next version to 2013.1.2
dch -a [ff8c8e8] Adding help text to the options that did not have it.
dch -a [4655685] Call os.kill for each child instead of the process group
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC glance_2013.1+git201306072036~raring-0ubuntu1_source.changes
sbuild -d raring-grizzly -n -A glance_2013.1+git201306072036~raring-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/grizzly-trunk-testing glance_2013.1+git201306072036~raring-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include raring-grizzly glance_2013.1+git201306072036~raring-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
-- 