Re: [Openstack] Thanks for using DocImpact

2013-04-11 Thread Tom Fifield
With Havana development starting, we're already seeing some DocImpacts 
coming in - thanks for those!


We want to sure we catch all those removals, deprecations and pent-up 
updates that tend to come in at the beginning of the cycle.


So, as usual:


 If your commit could have an impact on documentation - be it an
 added/altered/removed commandline option, a deprecated or new feature, a
 caveat, if you've written docs in the patch, or if you're just not sure
 - there's a way to let us know.

 = Just add DocImpact to a line in your commit message.

 This sends us an email so we can triage. It doesn't guarantee docs will
 be written, but at least it gives us visibility of the changes.



Don't forget to tell your friends :)


Regards,


Tom, on behalf of the docs team


On 03/01/13 13:25, Tom Fifield wrote:

Hi all,

Just wanted to drop a quick note on the list to say thanks to all who
have gone to the effort of using the DocImpact flag in your commit messages.

We're now receiving a steady stream of useful information on changes
that affect the documentation, and logging and targeting them[1][2]. The
aim is that as Grizzly is released, the manuals should be much more up
to date than in previous releases.

Of course, the workload[3] is still a struggle, so any help[4] fixing up
docbugs is much appreciated :)

Thanks again for your efforts!

Regards,


Tom, on behalf of the docs team

[1] https://bugs.launchpad.net/openstack-manuals/+milestone/grizzly
[2] https://bugs.launchpad.net/openstack-api-site/+milestone/grizzly
[3] http://kiks.webnumbr.com/untouched-bugs-in-openstack-manuals-
[4] http://wiki.openstack.org/Documentation/HowTo

On 30/10/12 12:31, Tom Fifield wrote:

TL;DR - If anything you submit could have an impact on documentation,
just add DocImpact to a line in your commit message.

Developers,


We need your help.

In the face of the 500 contributors to the code base, those small
handful of us working on documentation are losing the war.

One of the worst pains we have right now is that we're not getting
information from you about the changes you make. We just don't have the
people to review every single commit on every single project for its
impact on documentation.

This is where you can make a difference.

If your commit could have an impact on documentation - be it an
added/altered/removed commandline option, a deprecated or new feature, a
caveat, if you've written docs in the patch, or if you're just not sure
- there's a way to let us know.

= Just add DocImpact to a line in your commit message.

This sends us an email so we can triage. It doesn't guarantee docs will
be written, but at least it gives us visibility of the changes.


Thanks for reading.

As always - if you have any time to write/fix docs, we've more than one
hundred bugs waiting for your contribution . . .


Regards,


Tom, on behalf of the docs team.



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread Martinx - ジェームズ
Hi!

I figure it out! The nova-api *depends* on iptables but, the Ubuntu package
is currently *missing that*.

When I first install Grizzly on top of a Ubuntu minimum virtual machine
today, iptables wasn't installed...

I started it from scratch again, installing iptables before nova-api, the
Dashboard works (in parts).

---

Now, after login into my Grizzly Dashboard, I'm seeing the following two
error messages there:

Dashboard error:

Error: Unauthorized: Unable to retrieve usage information.

Error: Unauthorized: Unable to retrieve quota information.


The Apache error:
-
[Thu Apr 11 05:08:54 2013] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3
Python/2.7.3 configured -- resuming normal operations
[Thu Apr 11 08:09:11 2013] [error] \x1b[31;1mUnauthorized: Unauthorized
(HTTP 401)\x1b[0m
[Thu Apr 11 08:09:11 2013] [error] Traceback (most recent call last):
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
line 95, in summarize
[Thu Apr 11 08:09:11 2013] [error] self.usage_list =
self.get_usage_list(start, end)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
line 130, in get_usage_list
[Thu Apr 11 08:09:11 2013] [error] return
api.nova.usage_list(self.request, start, end)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py,
line 474, in usage_list
[Thu Apr 11 08:09:11 2013] [error]
novaclient(request).usage.list(start, end, True)]
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/v1_1/usage.py, line 35, in
list
[Thu Apr 11 08:09:11 2013] [error] tenant_usages)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/base.py, line 62, in _list
[Thu Apr 11 08:09:11 2013] [error] _resp, body =
self.api.client.get(url)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/client.py, line 230, in get
[Thu Apr 11 08:09:11 2013] [error] return self._cs_request(url, 'GET',
**kwargs)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/client.py, line 227, in
_cs_request
[Thu Apr 11 08:09:11 2013] [error] raise e
[Thu Apr 11 08:09:11 2013] [error] Unauthorized: Unauthorized (HTTP 401)
[Thu Apr 11 08:09:11 2013] [error] \x1b[31;1mUnauthorized: Unauthorized
(HTTP 401)\x1b[0m
[Thu Apr 11 08:09:11 2013] [error] Traceback (most recent call last):
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
line 112, in get_quotas
[Thu Apr 11 08:09:11 2013] [error] self.quotas =
quotas.tenant_quota_usages(self.request)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/horizon/utils/memoized.py, line 33, in
__call__
[Thu Apr 11 08:09:11 2013] [error] value = self.func(*args)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
line 116, in tenant_quota_usages
[Thu Apr 11 08:09:11 2013] [error] disabled_quotas=disabled_quotas):
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
line 99, in get_tenant_quota_data
[Thu Apr 11 08:09:11 2013] [error] tenant_id=tenant_id)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
line 76, in _get_quota_data
[Thu Apr 11 08:09:11 2013] [error] quotasets.append(getattr(nova,
method_name)(request, tenant_id))
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py,
line 457, in tenant_quota_get
[Thu Apr 11 08:09:11 2013] [error] return
QuotaSet(novaclient(request).quotas.get(tenant_id))
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/v1_1/quotas.py, line 37, in
get
[Thu Apr 11 08:09:11 2013] [error] return self._get(/os-quota-sets/%s
% (tenant_id), quota_set)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/base.py, line 140, in _get
[Thu Apr 11 08:09:11 2013] [error] _resp, body =
self.api.client.get(url)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/client.py, line 230, in get
[Thu Apr 11 08:09:11 2013] [error] return self._cs_request(url, 'GET',
**kwargs)
[Thu Apr 11 08:09:11 2013] [error]   File
/usr/lib/python2.7/dist-packages/novaclient/client.py, line 227, in
_cs_request
[Thu Apr 11 08:09:11 2013] [error] raise e
[Thu Apr 11 08:09:11 2013] [error] Unauthorized: Unauthorized (HTTP 401)
-

The nova-api.log:
--
2013-04-11 05:12:26.906 1468 INFO 

Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread Martinx - ジェームズ
Just for the record, under Dashboard - System Info, the Default Quotas is
empty.


On 11 April 2013 05:15, Martinx - ジェ�`ムズ thiagocmarti...@gmail.com wrote:

 Hi!

 I figure it out! The nova-api *depends* on iptables but, the Ubuntu
 package is currently *missing that*.

 When I first install Grizzly on top of a Ubuntu minimum virtual machine
 today, iptables wasn't installed...

 I started it from scratch again, installing iptables before nova-api, the
 Dashboard works (in parts).

 ---

 Now, after login into my Grizzly Dashboard, I'm seeing the following two
 error messages there:

 Dashboard error:

 Error: Unauthorized: Unable to retrieve usage information.

 Error: Unauthorized: Unable to retrieve quota information.


 The Apache error:
 -
 [Thu Apr 11 05:08:54 2013] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3
 Python/2.7.3 configured -- resuming normal operations
 [Thu Apr 11 08:09:11 2013] [error] \x1b[31;1mUnauthorized: Unauthorized
 (HTTP 401)\x1b[0m
 [Thu Apr 11 08:09:11 2013] [error] Traceback (most recent call last):
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
 line 95, in summarize
 [Thu Apr 11 08:09:11 2013] [error] self.usage_list =
 self.get_usage_list(start, end)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
 line 130, in get_usage_list
 [Thu Apr 11 08:09:11 2013] [error] return
 api.nova.usage_list(self.request, start, end)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py,
 line 474, in usage_list
 [Thu Apr 11 08:09:11 2013] [error]
 novaclient(request).usage.list(start, end, True)]
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/v1_1/usage.py, line 35, in
 list
 [Thu Apr 11 08:09:11 2013] [error] tenant_usages)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/base.py, line 62, in _list
 [Thu Apr 11 08:09:11 2013] [error] _resp, body =
 self.api.client.get(url)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/client.py, line 230, in get
 [Thu Apr 11 08:09:11 2013] [error] return self._cs_request(url, 'GET',
 **kwargs)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/client.py, line 227, in
 _cs_request
 [Thu Apr 11 08:09:11 2013] [error] raise e
 [Thu Apr 11 08:09:11 2013] [error] Unauthorized: Unauthorized (HTTP 401)
 [Thu Apr 11 08:09:11 2013] [error] \x1b[31;1mUnauthorized: Unauthorized
 (HTTP 401)\x1b[0m
 [Thu Apr 11 08:09:11 2013] [error] Traceback (most recent call last):
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
 line 112, in get_quotas
 [Thu Apr 11 08:09:11 2013] [error] self.quotas =
 quotas.tenant_quota_usages(self.request)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/horizon/utils/memoized.py, line 33, in
 __call__
 [Thu Apr 11 08:09:11 2013] [error] value = self.func(*args)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
 line 116, in tenant_quota_usages
 [Thu Apr 11 08:09:11 2013] [error] disabled_quotas=disabled_quotas):
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
 line 99, in get_tenant_quota_data
 [Thu Apr 11 08:09:11 2013] [error] tenant_id=tenant_id)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
 line 76, in _get_quota_data
 [Thu Apr 11 08:09:11 2013] [error] quotasets.append(getattr(nova,
 method_name)(request, tenant_id))
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py,
 line 457, in tenant_quota_get
 [Thu Apr 11 08:09:11 2013] [error] return
 QuotaSet(novaclient(request).quotas.get(tenant_id))
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/v1_1/quotas.py, line 37, in
 get
 [Thu Apr 11 08:09:11 2013] [error] return
 self._get(/os-quota-sets/%s % (tenant_id), quota_set)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/base.py, line 140, in _get
 [Thu Apr 11 08:09:11 2013] [error] _resp, body =
 self.api.client.get(url)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/client.py, line 230, in get
 [Thu Apr 11 08:09:11 2013] [error] return self._cs_request(url, 'GET',
 **kwargs)
 [Thu Apr 11 08:09:11 2013] [error]   File
 

Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread Martinx - ジェームズ
Another error:

Dashboard - Flavors: Error: Unauthorized: Unable to retrieve flavor list.

`nova flavor-list' returns:
ERROR: Unauthorized (HTTP 401)

I am missing something but, where?

Tks!
Thiago


On 11 April 2013 05:20, Martinx - ジェ�`ムズ thiagocmarti...@gmail.com wrote:

 Just for the record, under Dashboard - System Info, the Default Quotas is
 empty.


 On 11 April 2013 05:15, Martinx - ジェ�`ムズ thiagocmarti...@gmail.com wrote:

 Hi!

 I figure it out! The nova-api *depends* on iptables but, the Ubuntu
 package is currently *missing that*.

 When I first install Grizzly on top of a Ubuntu minimum virtual machine
 today, iptables wasn't installed...

 I started it from scratch again, installing iptables before nova-api, the
 Dashboard works (in parts).

 ---

 Now, after login into my Grizzly Dashboard, I'm seeing the following two
 error messages there:

 Dashboard error:

 Error: Unauthorized: Unable to retrieve usage information.

 Error: Unauthorized: Unable to retrieve quota information.


 The Apache error:
 -
 [Thu Apr 11 05:08:54 2013] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3
 Python/2.7.3 configured -- resuming normal operations
 [Thu Apr 11 08:09:11 2013] [error] \x1b[31;1mUnauthorized: Unauthorized
 (HTTP 401)\x1b[0m
 [Thu Apr 11 08:09:11 2013] [error] Traceback (most recent call last):
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
 line 95, in summarize
 [Thu Apr 11 08:09:11 2013] [error] self.usage_list =
 self.get_usage_list(start, end)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
 line 130, in get_usage_list
 [Thu Apr 11 08:09:11 2013] [error] return
 api.nova.usage_list(self.request, start, end)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py,
 line 474, in usage_list
 [Thu Apr 11 08:09:11 2013] [error]
 novaclient(request).usage.list(start, end, True)]
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/v1_1/usage.py, line 35, in
 list
 [Thu Apr 11 08:09:11 2013] [error] tenant_usages)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/base.py, line 62, in _list
 [Thu Apr 11 08:09:11 2013] [error] _resp, body =
 self.api.client.get(url)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/client.py, line 230, in get
 [Thu Apr 11 08:09:11 2013] [error] return self._cs_request(url,
 'GET', **kwargs)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/client.py, line 227, in
 _cs_request
 [Thu Apr 11 08:09:11 2013] [error] raise e
 [Thu Apr 11 08:09:11 2013] [error] Unauthorized: Unauthorized (HTTP 401)
 [Thu Apr 11 08:09:11 2013] [error] \x1b[31;1mUnauthorized: Unauthorized
 (HTTP 401)\x1b[0m
 [Thu Apr 11 08:09:11 2013] [error] Traceback (most recent call last):
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py,
 line 112, in get_quotas
 [Thu Apr 11 08:09:11 2013] [error] self.quotas =
 quotas.tenant_quota_usages(self.request)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/horizon/utils/memoized.py, line 33, in
 __call__
 [Thu Apr 11 08:09:11 2013] [error] value = self.func(*args)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
 line 116, in tenant_quota_usages
 [Thu Apr 11 08:09:11 2013] [error] disabled_quotas=disabled_quotas):
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
 line 99, in get_tenant_quota_data
 [Thu Apr 11 08:09:11 2013] [error] tenant_id=tenant_id)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/quotas.py,
 line 76, in _get_quota_data
 [Thu Apr 11 08:09:11 2013] [error] quotasets.append(getattr(nova,
 method_name)(request, tenant_id))
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py,
 line 457, in tenant_quota_get
 [Thu Apr 11 08:09:11 2013] [error] return
 QuotaSet(novaclient(request).quotas.get(tenant_id))
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/v1_1/quotas.py, line 37, in
 get
 [Thu Apr 11 08:09:11 2013] [error] return
 self._get(/os-quota-sets/%s % (tenant_id), quota_set)
 [Thu Apr 11 08:09:11 2013] [error]   File
 /usr/lib/python2.7/dist-packages/novaclient/base.py, line 140, in _get
 [Thu Apr 11 08:09:11 2013] [error] _resp, body =
 self.api.client.get(url)
 [Thu 

Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread James Page
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 11/04/13 09:22, Martinx - ジェ�`ムズ wrote:
 Dashboard - Flavors: Error: Unauthorized: Unable to retrieve
 flavor list.
 
 `nova flavor-list' returns: ERROR: Unauthorized (HTTP 401)
 
 I am missing something but, where?

All of the missing data (and the error above) is due to the fact that
the user you are using to access the dashboard does not appear to have
the right permissions - I would suspect some sort of misconfiguration
in keystone.


- -- 
James Page
Technical Lead
Ubuntu Server Team
james.p...@canonical.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQIcBAEBCAAGBQJRZnNPAAoJEL/srsug59jDVQ0P/2CxVmwT6DOvB08GKE87oIe7
VQUvCkc59+Gq0t6kpr33c2wBD5IPERtU0g0/v+1Q+6f6ay+AnvbFWWHBZuwpujAc
IcDqh+NMCU7FwEROEAD5bU9clTqMcepB+ONnjij/jpkhUvwGByhtGyan6Ek5K2Rc
ofQusmlk4cZX/k+u4+GCKIrvLIv+mRjnsZYtV8WahOzMDAA3RRWIsOGmjOOT4D73
B3RTUM7W9IqfWo2Tau3JjLzrq09zHG+4tasaWuoNSUPBJaXAy8dKJp4zAoUEbAqd
BXf63APMRrz+FQVMTPOsgH+atsuBpUS4UbCzJmfLn6y/XXKyDxDh4QdATIc2ylJl
nUCmSa2ucDWFL0vFU8FVS2yQ5VO/VJILYQpiOLg7FFfFvhD/IYXRqLnEHMWGZ1RB
dNWYZgGpLjpH23nrPrbd15AmEuacwMLKpbqPXno6Uf6WWHBv8MCxRs4fymfcw4xm
rHh4bNWS8bPY7WpfG+WfH9tv1DsAU9m5UJRGPNyIZ7HCr/Q+Jh1qohLrHViD+mYl
UFZ3OWrYWEZ7Su14udgBwUH6xzgzr00KR+o1P05Yrs737zAZxWS8o8pRqkhx0H9G
DpTk1WrygcIgjp4dNc1u97fiwffXqgjmXZcCXk8QYFKREAEgpn8+zTQDHMbmQST8
cMX7TpyNGIKphX/eVSeW
=UeaH
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread Martinx - ジェームズ
Mmmm... That's true...

My mistake, I forgot to setup /etc/nova/api-paste.ini.

Sorry about the buzz...

Tks!
Thiago


On 11 April 2013 05:24, James Page james.p...@canonical.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 On 11/04/13 09:22, Martinx - ジェームズ wrote:
  Dashboard - Flavors: Error: Unauthorized: Unable to retrieve
  flavor list.
 
  `nova flavor-list' returns: ERROR: Unauthorized (HTTP 401)
 
  I am missing something but, where?

 All of the missing data (and the error above) is due to the fact that
 the user you are using to access the dashboard does not appear to have
 the right permissions - I would suspect some sort of misconfiguration
 in keystone.


 - --
 James Page
 Technical Lead
 Ubuntu Server Team
 james.p...@canonical.com
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)
 Comment: Using GnuPG with undefined - http://www.enigmail.net/

 iQIcBAEBCAAGBQJRZnNPAAoJEL/srsug59jDVQ0P/2CxVmwT6DOvB08GKE87oIe7
 VQUvCkc59+Gq0t6kpr33c2wBD5IPERtU0g0/v+1Q+6f6ay+AnvbFWWHBZuwpujAc
 IcDqh+NMCU7FwEROEAD5bU9clTqMcepB+ONnjij/jpkhUvwGByhtGyan6Ek5K2Rc
 ofQusmlk4cZX/k+u4+GCKIrvLIv+mRjnsZYtV8WahOzMDAA3RRWIsOGmjOOT4D73
 B3RTUM7W9IqfWo2Tau3JjLzrq09zHG+4tasaWuoNSUPBJaXAy8dKJp4zAoUEbAqd
 BXf63APMRrz+FQVMTPOsgH+atsuBpUS4UbCzJmfLn6y/XXKyDxDh4QdATIc2ylJl
 nUCmSa2ucDWFL0vFU8FVS2yQ5VO/VJILYQpiOLg7FFfFvhD/IYXRqLnEHMWGZ1RB
 dNWYZgGpLjpH23nrPrbd15AmEuacwMLKpbqPXno6Uf6WWHBv8MCxRs4fymfcw4xm
 rHh4bNWS8bPY7WpfG+WfH9tv1DsAU9m5UJRGPNyIZ7HCr/Q+Jh1qohLrHViD+mYl
 UFZ3OWrYWEZ7Su14udgBwUH6xzgzr00KR+o1P05Yrs737zAZxWS8o8pRqkhx0H9G
 DpTk1WrygcIgjp4dNc1u97fiwffXqgjmXZcCXk8QYFKREAEgpn8+zTQDHMbmQST8
 cMX7TpyNGIKphX/eVSeW
 =UeaH
 -END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread Martinx - ジェームズ
Only 1 more thing (for now)...=)

After I purged the package `openstack-dashboard-ubuntu-theme', the default
OpenStack Dashboard opened without any style (I think CSS is missing),
don't know...

Tips?!


On 11 April 2013 05:38, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Mmmm... That's true...

 My mistake, I forgot to setup /etc/nova/api-paste.ini.

 Sorry about the buzz...

 Tks!
 Thiago


 On 11 April 2013 05:24, James Page james.p...@canonical.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 On 11/04/13 09:22, Martinx - ジェームズ wrote:
  Dashboard - Flavors: Error: Unauthorized: Unable to retrieve
  flavor list.
 
  `nova flavor-list' returns: ERROR: Unauthorized (HTTP 401)
 
  I am missing something but, where?

 All of the missing data (and the error above) is due to the fact that
 the user you are using to access the dashboard does not appear to have
 the right permissions - I would suspect some sort of misconfiguration
 in keystone.


 - --
 James Page
 Technical Lead
 Ubuntu Server Team
 james.p...@canonical.com
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)
 Comment: Using GnuPG with undefined - http://www.enigmail.net/

 iQIcBAEBCAAGBQJRZnNPAAoJEL/srsug59jDVQ0P/2CxVmwT6DOvB08GKE87oIe7
 VQUvCkc59+Gq0t6kpr33c2wBD5IPERtU0g0/v+1Q+6f6ay+AnvbFWWHBZuwpujAc
 IcDqh+NMCU7FwEROEAD5bU9clTqMcepB+ONnjij/jpkhUvwGByhtGyan6Ek5K2Rc
 ofQusmlk4cZX/k+u4+GCKIrvLIv+mRjnsZYtV8WahOzMDAA3RRWIsOGmjOOT4D73
 B3RTUM7W9IqfWo2Tau3JjLzrq09zHG+4tasaWuoNSUPBJaXAy8dKJp4zAoUEbAqd
 BXf63APMRrz+FQVMTPOsgH+atsuBpUS4UbCzJmfLn6y/XXKyDxDh4QdATIc2ylJl
 nUCmSa2ucDWFL0vFU8FVS2yQ5VO/VJILYQpiOLg7FFfFvhD/IYXRqLnEHMWGZ1RB
 dNWYZgGpLjpH23nrPrbd15AmEuacwMLKpbqPXno6Uf6WWHBv8MCxRs4fymfcw4xm
 rHh4bNWS8bPY7WpfG+WfH9tv1DsAU9m5UJRGPNyIZ7HCr/Q+Jh1qohLrHViD+mYl
 UFZ3OWrYWEZ7Su14udgBwUH6xzgzr00KR+o1P05Yrs737zAZxWS8o8pRqkhx0H9G
 DpTk1WrygcIgjp4dNc1u97fiwffXqgjmXZcCXk8QYFKREAEgpn8+zTQDHMbmQST8
 cMX7TpyNGIKphX/eVSeW
 =UeaH
 -END PGP SIGNATURE-



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [quantum][folsom] error with dhcp agent

2013-04-11 Thread Arindam Choudhury
Hi,

I am trying to install openstack folsom on fedora 18. while installing quantum 
I am having this problem:

# cat dhcp-agent.log 
2013-04-11 12:53:31 INFO [quantum.common.config] Logging enabled!
2013-04-11 12:55:39ERROR [quantum.openstack.common.rpc.impl_qpid] Unable to 
connect to AMQP server: [Errno 110] ETIMEDOUT. Sleeping 1 seconds

# cat openvswitch-agent.log 
2013-04-11 12:53:22 INFO [quantum.common.config] Logging enabled!
2013-04-11 12:53:22 INFO 
[quantum.plugins.openvswitch.agent.ovs_quantum_agent] Bridge mappings: {}
2013-04-11 12:53:22ERROR [quantum.agent.linux.ovs_lib] Unable to execute 
['ovs-vsctl', '--timeout=2', '--', '--if-exists', 'del-port', 'br-int', 
'patch-tun']. Exception: 
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--if-exists', 'del-port', 'br-int', 
'patch-tun']
Exit code: 1
Stdout: ''
Stderr: 'Traceback (most recent call last):\n  File 
/usr/bin/quantum-rootwrap, line 95, in module\n
env=filtermatch.get_environment(userargs))\n  File 
/usr/lib64/python2.7/subprocess.py, line 679, in __init__\nerrread, 
errwrite)\n  File /usr/lib64/python2.7/subprocess.py, line 1249, in 
_execute_child\nraise child_exception\nOSError: [Errno 13] Permission 
denied\n'
2013-04-11 12:53:22ERROR [quantum.agent.linux.ovs_lib] Unable to execute 
['ovs-ofctl', 'del-flows', 'br-int']. Exception: 
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 
'ovs-ofctl', 'del-flows', 'br-int']
Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
2013-04-11 12:53:23ERROR [quantum.agent.linux.ovs_lib] Unable to execute 
['ovs-ofctl', 'add-flow', 'br-int', 
'hard_timeout=0,idle_timeout=0,priority=1,actions=normal']. Exception: 
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 
'ovs-ofctl', 'add-flow', 'br-int', 
'hard_timeout=0,idle_timeout=0,priority=1,actions=normal']
Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
2013-04-11 12:55:30ERROR [quantum.openstack.common.rpc.impl_qpid] Unable to 
connect to AMQP server: [Errno 110] ETIMEDOUT. Sleeping 1 seconds



I have followed these instructions:
[(keystone_admin)]$ keystone service-create --name openstack_network --type 
network --description Openstack Networking 
Service+-+--+|   Property  |   
   Value   |+-+--+| 
description |   Openstack Networking Service   ||  id | 
cdc15413851341e7a1a827672b81c8e8 || name|openstack_network  
   || type| network  
|+-+--+[(keystone_admin)]$ keystone 
endpoint-create --region RegionOne --service-id 
cdc15413851341e7a1a827672b81c8e8 --publicurl 'http://109.158.65.21:9696' 
--adminurl 'http://109.158.65.21:9696' --internalurl 
'http://109.158.65.21:9696'+-+--+|  
 Property  |  Value   
|+-+--+|   adminurl  |
http://109.158.65.21:9696 ||  id | 16668a90ff0445068b8e9fc01d568b89 
|| internalurl |http://109.158.65.21:9696 ||  publicurl  |
http://109.158.65.21:9696 ||region   |RegionOne 
||  service_id | cdc15413851341e7a1a827672b81c8e8 
|+-+--+[(keystone_admin)]$ keystone 
tenant-create --name openstack_network --description OpenStack network 
tenant+-+--+|   Property  |
  Value   |+-+--+| 
description | OpenStack network tenant ||   enabled   |   
True   ||  id | c8274f56a1cd4b2a968ba81a47278476 || 
name|openstack_network 
|+-+--+[(keystone_admin)]$ keystone 
user-create --name openstack_network --pass openstack_network --tenant-id 
c8274f56a1cd4b2a968ba81a47278476+--+-+|
 Property |  Value  

|+--+-+|
  email   | 
|| enabled  |   
True
  ||id| 
e2ba6a8d3c1b4758844972b1d1df3d6b
||   name   |  

Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread Martinx - ジェームズ
Hi!

 I'm still facing a small issue (already reported here during test phase)
with Quantum (Grizzly + Ubuntu 12.04).

 If I do not do this:





visudo


---


quantum ALL=NOPASSWD: ALL

---



 Quantum doesn't work... Too much erros on auth.log.





...And if I try this within `visudo':



---
includedir /etc/sudoers.d
---




...The `visudo' section doesn't close, a warning about a fatal error appear.




NOTE: I'm following this guide: *Ultimate OpenStack Grizzly Guide*
https://gist.github.com/tmartinx/d36536b7b62a48f859c2




Tks,
Thiago



On 9 April 2013 10:20, James Page james.p...@canonical.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Hi All

 OpenStack Grizzly release packages are now available in the Ubuntu
 Cloud Archive for Ubuntu 12.04 - see
 https://wiki.ubuntu.com/ServerTeam/CloudArchive for details on how to
 enable and use these packages on Ubuntu 12.04.

 Please note that further Ubuntu related updates may land into the
 Cloud Archive for Grizzly between now and the final release of Ubuntu
 13.04 in a couple of weeks time; 13.04 is now feature frozen so these
 should be bug fixes only.

 Enjoy

 James

 - --
 James Page
 Technical Lead
 Ubuntu Server Team
 james.p...@canonical.com
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)
 Comment: Using GnuPG with undefined - http://www.enigmail.net/

 iQIcBAEBCAAGBQJRZBWSAAoJEL/srsug59jDasAQALXgHF2OfrAySBGdati0GImP
 gKJ7gHs5uNgOHi99m4XX/LUsRWMYrYhmswKTGpFvnIhgy4nxsvvQPc5+9k9Om/lz
 ArGDivKmf7idInGfRTdp7hGm/llgNa7WLaU+GwVACuj0utBkF5RcTTE0kES1kFX2
 CAvHMQMDqLfVBpDWunsarVyE9VBMJdVHJQZWpdzDhiTForhawcZXxB9fh2qHpKhS
 nX6AqP77JZ6XARw4fTLI30n6gQritwPsbK1J93QwXFtNqu5W5TUc+GAukQSVcoAy
 frkYSkJX+4MawkhI7PJ919O0y9q9O3UAn6sH+q4xk8Mpak/xJ0KUxHZX81MUw0Q5
 5BmdRsJwCkRPYiz1Qc0sqqT5ROlr/WnDiHUIEwjs8IYAf2/hjTUD+KjOz7ycPWqg
 V/asjzqtgTuLDCESWv5yG4vF/CWHTf00e6nqTgfoORHHVBnTnImFsq7CryLzUxes
 nSRvTAALoa/71+1qMpUoUS61bCcKhY2fBsCn2uqMM1nHiot2MUH1wVEajKiX332N
 X2IWSyHvNzr7/UP3BS5A5LKj3ck5NTdr46ft0HfLeknu5jcjOb7cltDH2wkFSunU
 9t7p2Z3yBPw5tK5c8Fmt5gAscw9YfYhjE4Dufd12nOCD3Go2Xw8gbzjCkSQYtiY7
 RoxivOeqbSwAJu0Q6Zm4
 =kvSY
 -END PGP SIGNATURE-

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Can't log in grizzly's dashboard

2013-04-11 Thread Mohammed Amine SAYA
Hi all,

I upgraded my install from folsom to grizzly but I can't log in dashboard.
I keep getting this error :

HTTPConnectionPool(host='192.168.0.1', port=8776): Max retries exceeded
with url:
/v1/5690876e82414117b80e64167a3ee3f8/os-quota-sets/5690876e82414117b80e64167a3ee3f8

I haven't changed the database content. I installed grizzly packages only.
Keystone, nova and apache are running fine. I can list endpoints, users and
tenants.

nova-manage service list gives this:
+--++--+-+---++
| Binary   | Host   | Zone | Status  | State | Updated_at
  |
+--++--+-+---++
| nova-cert| openstack0 | internal | enabled | up|
2013-04-11T12:17:07.00 |
| nova-compute | openstack1 | nova | enabled | down  |
2013-04-03T12:45:08.00 |
| nova-console | openstack0 | internal | enabled | up|
2013-04-11T12:17:11.00 |
| nova-consoleauth | openstack0 | internal | enabled | down  |
2013-04-03T12:45:14.00 |
| nova-scheduler   | openstack0 | internal | enabled | up|
2013-04-11T12:17:11.00 |
+--++--+-+---++

Do you know how to fix this please?

Thanks for your help.
Amine.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-11 Thread Martinx - ジェームズ
Awesome!!! :-D

Guide updated!

Cheers!
Thiago


On 21 March 2013 12:10, Razique Mahroua razique.mahr...@gmail.com wrote:

 great guide, thanks a lot !

 *Razique Mahroua** - **Nuage  Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 Le 21 mars 2013 à 15:26, Jean-Baptiste RANSY 
 jean-baptiste.ra...@alyseo.com a écrit :

  Hello Thiago,

 I think it's better to use rootwrap in sudoers :

 nova ALL = (root) NOPASSWD: /usr/bin/nova-rootwrap /etc/nova/rootwrap.conf
 *
 cinder ALL = (root) NOPASSWD: /usr/bin/cinder-rootwrap
 /etc/cinder/rootwrap.conf *
 quantum ALL = (root) NOPASSWD: /usr/bin/quantum-rootwrap
 /etc/quantum/rootwrap.conf *

 NOTE : with quantum (l3, dhcp, etc ..) you can encounter issue with
 rootwrap, especially with namespaces (i don't know if this is still the
 case)
 To fix that, just add 'root_helper = sudo /usr/bin/quantum-rootwrap
 /etc/quantum/rootwrap.conf' in the .ini file of each quantum service.

 I don't know why root_helper isn't in each quantum service sample files if
 it must be configured ... is it normal or not ?
 If this addition (to add root_helper in each ini file) should not be
 necessary, I think i identified the root problem.
 In the dhcp_agent for example, just need to replace each occurrences of
 'self.conf.root_helper' by 'self.root_helper'

 If someone has the answer, let me know if I should open a bug or not.

 Regards,


 jbr_


 On 03/21/2013 01:19 AM, Martinx - ジェームズ wrote:

 1 problem fixed with:

  visudo

  ---
 quantum ALL=NOPASSWD: ALL
 cinder ALL=NOPASSWD: ALL
 nova ALL=NOPASSWD: ALL
 ---

  Guide updated...


 On 20 March 2013 19:51, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

  Hi!

   I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is
 the guide I wrote:

   Ultimate OpenStack Grizzly 
 Guidehttps://gist.github.com/tmartinx/d36536b7b62a48f859c2

   It covers:

   * Ubuntu 12.04.2
  * Basic Ubuntu setup
  * KVM
  * OpenvSwitch
  * Name Resolution for OpenStack components;
  * LVM for Instances
  * Keystone
  * Glance
  * Quantum - Single Flat, Super Green!!
  * Nova
  * Cinder / tgt
  * Dashboard

   It is still a draft but, every time I deploy Ubuntu and Grizzly, I
 follow this little guide...

   I would like some help to improve this guide... If I'm doing something
 wrong, tell me! Please!

   Probably I'm doing something wrong, I don't know yet, but I'm seeing
 some errors on the logs, already reported here on this list. Like for
 example: nova-novncproxy conflicts with novnc (no VNC console for now),
 dhcp-agent.log / auth.log points to some problems with `sudo' or the
 `rootwarp' subsystem when dealing with metadata (so it isn't working)...

   But in general, it works great!!

  Best!
 Thiago




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


  ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



NUAGECO-LOGO-Fblan_petit.jpg___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-11 Thread Martinx - ジェームズ
Guys!

 I just update the *Ultimate OpenStack Grizzly
Guide*https://gist.github.com/tmartinx/d36536b7b62a48f859c2
!

 You guys will note that this environment works with *echo 0 
/proc/sys/net/ipv4/ip_forward*, on *both* controller *AND* compute nodes!
Take a look! I didn't touch the /etc/sysctl.conf file and it is working!

 I'll ask for the help of this community to finish my guide.

 On my `TODO list' I have: enable Metadata, Spice and Ceilometer.
Volunteers?!

Best!
Thiago

On 20 March 2013 19:51, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Hi!

  I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is the
 guide I wrote:

  Ultimate OpenStack Grizzly 
 Guidehttps://gist.github.com/tmartinx/d36536b7b62a48f859c2

  It covers:

  * Ubuntu 12.04.2
  * Basic Ubuntu setup
  * KVM
  * OpenvSwitch
  * Name Resolution for OpenStack components;
  * LVM for Instances
  * Keystone
  * Glance
  * Quantum - Single Flat, Super Green!!
  * Nova
  * Cinder / tgt
  * Dashboard

  It is still a draft but, every time I deploy Ubuntu and Grizzly, I follow
 this little guide...

  I would like some help to improve this guide... If I'm doing something
 wrong, tell me! Please!

  Probably I'm doing something wrong, I don't know yet, but I'm seeing some
 errors on the logs, already reported here on this list. Like for example:
 nova-novncproxy conflicts with novnc (no VNC console for now),
 dhcp-agent.log / auth.log points to some problems with `sudo' or the
 `rootwarp' subsystem when dealing with metadata (so it isn't working)...

  But in general, it works great!!

 Best!
 Thiago

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Can't log in grizzly's dashboard

2013-04-11 Thread Heiko Krämer
Hi Mohammed,


do you have sync the db with keystone-manage sync_db ?

Do you see any errors in your keystone.log?

Greetings
Heiko
On 11.04.2013 14:17, Mohammed Amine SAYA wrote:
 Hi all,

 I upgraded my install from folsom to grizzly but I can't log in
 dashboard. I keep getting this error : 

 HTTPConnectionPool(host='192.168.0.1', port=8776): Max retries
 exceeded with url:
 /v1/5690876e82414117b80e64167a3ee3f8/os-quota-sets/5690876e82414117b80e64167a3ee3f8

 I haven't changed the database content. I installed grizzly packages only.
 Keystone, nova and apache are running fine. I can list endpoints,
 users and tenants.

 nova-manage service list gives this:
 +--++--+-+---++
 | Binary   | Host   | Zone | Status  | State |
 Updated_at |
 +--++--+-+---++
 | nova-cert| openstack0 | internal | enabled | up|
 2013-04-11T12:17:07.00 |
 | nova-compute | openstack1 | nova | enabled | down  |
 2013-04-03T12:45:08.00 |
 | nova-console | openstack0 | internal | enabled | up|
 2013-04-11T12:17:11.00 |
 | nova-consoleauth | openstack0 | internal | enabled | down  |
 2013-04-03T12:45:14.00 |
 | nova-scheduler   | openstack0 | internal | enabled | up|
 2013-04-11T12:17:11.00 |
 +--++--+-+---++

 Do you know how to fix this please?

 Thanks for your help.
 Amine.


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Can't log in grizzly's dashboard

2013-04-11 Thread Mouad Benchchaoui
Hi Mohammed,

The port 8776 is the default port used by cinder, did you install cinder ?
if so make sure that it's up and running.

Cheers,

--
Mouad



On Thu, Apr 11, 2013 at 2:17 PM, Mohammed Amine SAYA 
asaya.openst...@gmail.com wrote:

 Hi all,

 I upgraded my install from folsom to grizzly but I can't log in dashboard.
 I keep getting this error :

 HTTPConnectionPool(host='192.168.0.1', port=8776): Max retries exceeded
 with url:
 /v1/5690876e82414117b80e64167a3ee3f8/os-quota-sets/5690876e82414117b80e64167a3ee3f8

 I haven't changed the database content. I installed grizzly packages only.
 Keystone, nova and apache are running fine. I can list endpoints, users
 and tenants.

 nova-manage service list gives this:

 +--++--+-+---++
 | Binary   | Host   | Zone | Status  | State | Updated_at
 |

 +--++--+-+---++
 | nova-cert| openstack0 | internal | enabled | up|
 2013-04-11T12:17:07.00 |
 | nova-compute | openstack1 | nova | enabled | down  |
 2013-04-03T12:45:08.00 |
 | nova-console | openstack0 | internal | enabled | up|
 2013-04-11T12:17:11.00 |
 | nova-consoleauth | openstack0 | internal | enabled | down  |
 2013-04-03T12:45:14.00 |
 | nova-scheduler   | openstack0 | internal | enabled | up|
 2013-04-11T12:17:11.00 |

 +--++--+-+---++

 Do you know how to fix this please?

 Thanks for your help.
 Amine.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Create Route to physical net

2013-04-11 Thread Heiko Krämer
Hiho guys,

i'm running OpenStack grizzly with quantum and all stuff. All is running
fine but i'm trying to get a connection from each fixed_network
(namespaced) to a physical network.

Example:

Fixed-Network: 10.100.0.0/24
Gre-Tunneling net: 100.20.20/24

Pysical-Network (3. Interface on Network-Node): 10.0.0.0/24

Now i'll create a route on Router xy with interface 10.100.0.1 to =
10.0.0.17(pysical interface on network host)
On the 10.0.0.0/24 network i'm running shared services like mysql
cluster, searching cluster and so on and the goal is that each fixed
network reach the shared services.
My problem is to get a connection between a router (namespace) and the
physical interface.


If you need more details please let me know :)


Greetings
Heiko

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Summit Runners

2013-04-11 Thread Luke Tymowski
G'day,


Would anyone be interested in morning runs (5K) during the Summit in PDX
next week?

If you are, let's meet in the lobby of the Portland Hilton on Sixth Avenue
at 0600 on Monday and 0700 from Tuesday to Friday.

Some of the Monday morning runners have breakfast meetings, hence the early
start.

It's a great way to see a bit of Portland!


Luke
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Create Route to physical net

2013-04-11 Thread Robert van Leeuwen
 i'm running OpenStack grizzly with quantum and all stuff. All is running
 fine but i'm trying to get a connection from each fixed_network
 (namespaced) to a physical network.

 My problem is to get a connection between a router (namespace) and the
 physical interface.

Did you setup a quantum l3-router?
http://docs.openstack.org/folsom/openstack-network/admin/content/use_cases.html
http://docs.openstack.org/folsom/openstack-network/admin/content/l3_router_and_nat.html


There are some options described to get connectivity, basically use SNAT or 
Floating IP pools to get access.
We are running in a more adventurous mode  ( I have not seen any documentation 
on this but it works) without NAT.
We do this because it is an private cloud and we just want to allow employees 
to access their cloud virtuals without any further configuration.
Our physical router/firewall is setup to forward all traffic for our private 
networks segments (we defined a range for this) to the quantum-l3-router.
The l3-router has an interface in all networks that needs this connectivity.
We setup some Policy Based Routing rules to make everything works and not allow 
traffic between tenants without going through our firewall.
Maybe not the most elegant solution but it works.

Cheers,
Robert van Leeuwen

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Summit Runners

2013-04-11 Thread Shohel Ahmed
Count me in.___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [quantum][folsom] error with dhcp agent

2013-04-11 Thread Gary Kotton

On 04/11/2013 01:59 PM, Arindam Choudhury wrote:

Hi,

I am trying to install openstack folsom on fedora 18. while installing 
quantum I am having this problem:


# cat dhcp-agent.log
2013-04-11 12:53:31 INFO [quantum.common.config] Logging enabled!
2013-04-11 12:55:39ERROR [quantum.openstack.common.rpc.impl_qpid] 
Unable to connect to AMQP server: [Errno 110] ETIMEDOUT. Sleeping 1 
seconds


Did you run the script - quantum-dhcp-setup? If so you would be prompted 
to indicate the IP address of the message broker (QPID in this case).




# cat openvswitch-agent.log
2013-04-11 12:53:22 INFO [quantum.common.config] Logging enabled!
2013-04-11 12:53:22 INFO 
[quantum.plugins.openvswitch.agent.ovs_quantum_agent] Bridge mappings: {}
2013-04-11 12:53:22ERROR [quantum.agent.linux.ovs_lib] Unable to 
execute ['ovs-vsctl', '--timeout=2', '--', '--if-exists', 'del-port', 
'br-int', 'patch-tun']. Exception:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--if-exists', 'del-port', 'br-int', 
'patch-tun']

Exit code: 1
Stdout: ''


Are you using the Grizzly packages? This issue was fixed a few days ago.

Stderr: 'Traceback (most recent call last):\n  File 
/usr/bin/quantum-rootwrap, line 95, in module\n
env=filtermatch.get_environment(userargs))\n  File 
/usr/lib64/python2.7/subprocess.py, line 679, in __init__\n
errread, errwrite)\n  File /usr/lib64/python2.7/subprocess.py, line 
1249, in _execute_child\nraise child_exception\nOSError: [Errno 
13] Permission denied\n'
2013-04-11 12:53:22ERROR [quantum.agent.linux.ovs_lib] Unable to 
execute ['ovs-ofctl', 'del-flows', 'br-int']. Exception:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 
'ovs-ofctl', 'del-flows', 'br-int']

Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
2013-04-11 12:53:23ERROR [quantum.agent.linux.ovs_lib] Unable to 
execute ['ovs-ofctl', 'add-flow', 'br-int', 
'hard_timeout=0,idle_timeout=0,priority=1,actions=normal']. Exception:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 
'ovs-ofctl', 'add-flow', 'br-int', 
'hard_timeout=0,idle_timeout=0,priority=1,actions=normal']

Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'
2013-04-11 12:55:30ERROR [quantum.openstack.common.rpc.impl_qpid] 
Unable to connect to AMQP server: [Errno 110] ETIMEDOUT. Sleeping 1 
seconds




I have followed these instructions:
[(keystone_admin)]$ keystone service-create --name openstack_network 
--type network --description Openstack Networking Service

+-+--+
| Property | Value |
+-+--+
| description | Openstack Networking Service |
| id | cdc15413851341e7a1a827672b81c8e8 |
| name | openstack_network |
| type | network |
+-+--+
[(keystone_admin)]$ keystone endpoint-create --region RegionOne 
--service-id cdc15413851341e7a1a827672b81c8e8 --publicurl 
'http://109.158.65.21:9696' --adminurl 'http://109.158.65.21:9696' 
--internalurl 'http://109.158.65.21:9696'

+-+--+
| Property | Value |
+-+--+
| adminurl | http://109.158.65.21:9696 |
| id | 16668a90ff0445068b8e9fc01d568b89 |
| internalurl | http://109.158.65.21:9696 |
| publicurl | http://109.158.65.21:9696 |
| region | RegionOne |
| service_id | cdc15413851341e7a1a827672b81c8e8 |
+-+--+
[(keystone_admin)]$ keystone tenant-create --name openstack_network 
--description OpenStack network tenant

+-+--+
| Property | Value |
+-+--+
| description | OpenStack network tenant |
| enabled | True |
| id | c8274f56a1cd4b2a968ba81a47278476 |
| name | openstack_network |
+-+--+
[(keystone_admin)]$ keystone user-create --name openstack_network 
--pass openstack_network --tenant-id c8274f56a1cd4b2a968ba81a47278476

+--+-+
| Property | Value |
+--+-+
| email | |
| enabled | True |
| id | e2ba6a8d3c1b4758844972b1d1df3d6b |
| name | openstack_network |
| password | 
$6$rounds=4$h7GvK.VDbEUrikf4$UP.KDuvz8VBhALEm75ZSOFpEdj1z2MecQhPYyyliyJn3Q.oxCmU/PWxPsD8cJ33z.YqtvhMI7RKXQEulceFok0 
|

| tenantId | c8274f56a1cd4b2a968ba81a47278476 |
+--+-+
[(keystone_admin)]$ keystone role-list
+--+---+
| id | name |

[Openstack] using Glusterfs for instance storage

2013-04-11 Thread John Paul Walters
Hi,

We've started implementing a Glusterfs-based solution for instance storage in 
order to provide live migration.  I've run into a strange problem when using a 
multi-node Gluster setup that I hope someone has a suggestion to resolve.

I have a 12 node distributed/replicated Gluster cluster.  I can mount it to my 
client machines, and it seems to be working alright.  When I launch instances, 
the nova-compute log on the client machines are giving me two error messages:

First is a qemu-kvm error: could not open disk image 
/exports/instances/instances/instance-0242/disk: Invalid argument
(full output at http://pastebin.com/i8vzWegJ)

The second error message comes a short time later ending with 
nova.openstack.common.rpc.amqp Invalid: Instance has already been created
(full output at http://pastebin.com/6Ta4kkBN)

This happens reliably with the multi-Gluster-node setup.  Oddly, after creating 
a test Gluster volume composed of a single brick and single node, everything 
works fine.

Does anyone have any suggestions?

thanks,
JP


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] using Glusterfs for instance storage

2013-04-11 Thread Razique Mahroua
Hi JP,my bet is that this is a writing permissions issue. Does nova has the right to write within the mounted directory?
Razique Mahroua-Nuage  Corazique.mahr...@gmail.comTel: +33 9 72 37 94 15

Le 11 avr. 2013 à 16:36, John Paul Walters jwalt...@isi.edu a écrit :Hi,We've started implementing a Glusterfs-based solution for instance storage in order to provide live migration. I've run into a strange problem when using a multi-node Gluster setup that I hope someone has a suggestion to resolve.I have a 12 node distributed/replicated Gluster cluster. I can mount it to my client machines, and it seems to be working alright. When I launch instances, the nova-compute log on the client machines are giving me two error messages:First is a qemu-kvm error: could not open disk image /exports/instances/instances/instance-0242/disk: Invalid argument(full output at http://pastebin.com/i8vzWegJ)The second error message comes a short time later ending with nova.openstack.common.rpc.amqp Invalid: Instance has already been created(full output at http://pastebin.com/6Ta4kkBN)This happens reliably with the multi-Gluster-node setup. Oddly, after creating a test Gluster volume composed of a single brick and single node, everything works fine.Does anyone have any suggestions?thanks,JP___Mailing list: https://launchpad.net/~openstackPost to : openstack@lists.launchpad.netUnsubscribe : https://launchpad.net/~openstackMore help : https://help.launchpad.net/ListHelp___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [quantum][folsom] error with dhcp agent

2013-04-11 Thread Arindam Choudhury
Hi,

Thanks for your kind reply. I fixed it. I am installing folsom. I had to put 
selinux in permissive mode for the second error. 





Date: Thu, 11 Apr 2013 17:36:38 +0300
From: gkot...@redhat.com
To: arin...@live.com
CC: openstack@lists.launchpad.net
Subject: Re: [Openstack] [quantum][folsom] error with dhcp agent


  

  
  
On 04/11/2013 01:59 PM, Arindam Choudhury wrote:

  
  Hi,



I am trying to install openstack folsom on fedora 18. while
installing quantum I am having this problem:



# cat dhcp-agent.log 

2013-04-11 12:53:31 INFO [quantum.common.config] Logging
enabled!

2013-04-11 12:55:39ERROR
[quantum.openstack.common.rpc.impl_qpid] Unable to connect to
AMQP server: [Errno 110] ETIMEDOUT. Sleeping 1 seconds

  



Did you run the script - quantum-dhcp-setup? If so you would be
prompted to indicate the IP address of the message broker (QPID in
this case).




  

# cat openvswitch-agent.log 

2013-04-11 12:53:22 INFO [quantum.common.config] Logging
enabled!

2013-04-11 12:53:22 INFO
[quantum.plugins.openvswitch.agent.ovs_quantum_agent] Bridge
mappings: {}

2013-04-11 12:53:22ERROR [quantum.agent.linux.ovs_lib]
Unable to execute ['ovs-vsctl', '--timeout=2', '--',
'--if-exists', 'del-port', 'br-int', 'patch-tun']. Exception: 

Command: ['sudo', 'quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', '--',
'--if-exists', 'del-port', 'br-int', 'patch-tun']

Exit code: 1

Stdout: ''

  



Are you using the Grizzly packages? This issue was fixed a few days
ago. 




  Stderr: 'Traceback (most recent call last):\n  File
/usr/bin/quantum-rootwrap, line 95, in module\n   
env=filtermatch.get_environment(userargs))\n  File
/usr/lib64/python2.7/subprocess.py, line 679, in __init__\n   
errread, errwrite)\n  File /usr/lib64/python2.7/subprocess.py,
line 1249, in _execute_child\nraise
child_exception\nOSError: [Errno 13] Permission denied\n'

2013-04-11 12:53:22ERROR [quantum.agent.linux.ovs_lib]
Unable to execute ['ovs-ofctl', 'del-flows', 'br-int'].
Exception: 

Command: ['sudo', 'quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-ofctl', 'del-flows',
'br-int']

Exit code: 1

Stdout: ''

Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'

2013-04-11 12:53:23ERROR [quantum.agent.linux.ovs_lib]
Unable to execute ['ovs-ofctl', 'add-flow', 'br-int',
'hard_timeout=0,idle_timeout=0,priority=1,actions=normal'].
Exception: 

Command: ['sudo', 'quantum-rootwrap',
'/etc/quantum/rootwrap.conf', 'ovs-ofctl', 'add-flow', 'br-int',
'hard_timeout=0,idle_timeout=0,priority=1,actions=normal']

Exit code: 1

Stdout: ''

Stderr: 'ovs-ofctl: br-int is not a bridge or a socket\n'

2013-04-11 12:55:30ERROR
[quantum.openstack.common.rpc.impl_qpid] Unable to connect to
AMQP server: [Errno 110] ETIMEDOUT. Sleeping 1 seconds







I have followed these instructions:

[(keystone_admin)]$ keystone service-create --name openstack_network 
--type network --description Openstack Networking 
Service+-+--+|   Property  |   
   Value   |+-+--+| 
description |   Openstack Networking Service   ||  id | 
cdc15413851341e7a1a827672b81c8e8 || name|openstack_network !
 |
/div| type| network  
|+-+--+[(keystone_admin)]$ keystone 
endpoint-create --region RegionOne --service-id 
cdc15413851341e7a1a827672b81c8e8 --publicurl 'http://109.158.65.21:9696' 
--adminurl 'http://109.158.65.21:9696' --internalurl 
'http://109.158.65.21:9696'+-+--+|  
 Pro!
 perty  |  
Value   
|+-+--+|   adminurl  |
http://109.158.65.21:9696 ||  id | 16668a90ff0445068b8e9fc01d568b89 
|| internalurl |http://109.158.65.21:9696 ||  publicurl  |
http://109.158.65.21:9696 ||region   |   !
  Regio
nOne ||  service_id | cdc15413851341e7a1a827672b81c8e8 
|+-+--+[(keystone_admin)]$ keystone 
tenant-create --name openstack_network --description OpenStack network 
tenant+-+--+|   Property  |
  Value   |+-+--+| 
description | OpenStack network 

Re: [Openstack] using Glusterfs for instance storage

2013-04-11 Thread John Paul Walters
Hi Razique,

Thanks for chiming in.  Yes, nova owns the instances directory that it's 
writing to.  In fact, between the multi-node volume and the single node volume, 
I gave the same permissions: created a directory instances on the gluster 
volume, and chown nova.nova instances.  The individual instance directories get 
created whenever I try to launch an instance, and the permissions all seem okay 
to me:

Here are the permissions on the gluster volume:
[root@openstack-13 instances]# ls -al
total 29
drwxr-xr-x.  4 nova nova   234 Apr 11 14:20 .
drwxr-xr-x.  3 root root  4096 Apr 10 15:52 ..
drwxr-x---. 11 nova nova 24576 Apr 11 14:31 instances  

Inside of instances:
[root@openstack-13 instances]# ls -al
total 33
drwxr-x---. 11 nova nova 24576 Apr 11 14:31 .
drwxr-xr-x.  4 nova nova   234 Apr 11 14:20 ..
drwxr-xr-x.  2 nova nova  8302 Apr 11 14:21 _base
drwxr-xr-x.  2 nova nova   110 Apr 11 14:21 instance-023b
drwxr-xr-x.  2 nova nova   110 Apr 11 14:22 instance-023c
drwxr-xr-x.  2 nova nova   110 Apr 11 14:22 instance-023d
drwxr-xr-x.  2 nova nova   110 Apr 11 14:22 instance-023e
drwxr-xr-x.  2 nova nova   110 Apr 11 14:22 instance-023f
drwxr-xr-x.  2 nova nova   110 Apr 11 14:22 instance-0240
drwxr-xr-x.  2 nova nova   110 Apr 11 14:25 instance-0241
drwxr-xr-x.  2 nova nova   110 Apr 11 14:31 instance-0242

instance-0241 is an example of one that's failed, inside of there:
[root@openstack-13 instance-0241]# ls -al
total 4678
drwxr-xr-x.  2 nova nova 110 Apr 11 14:25 .
drwxr-x---. 11 nova nova   24576 Apr 11 14:31 ..
-rw-rw.  1 root root   0 Apr 11 14:25 console.log
-rw-r--r--.  1 root root  262144 Apr 11 14:25 disk
-rw-r--r--.  1 root root 4404752 Apr 11 14:25 kernel
-rw-r--r--.  1 nova nova1277 Apr 11 14:25 libvirt.xml
-rw-r--r--.  1 root root   96629 Apr 11 14:25 ramdisk

To me, it seems reasonable.  I'm happy to be wrong though.
thanks,
JP

On Apr 11, 2013, at 10:49 AM, Razique Mahroua razique.mahr...@gmail.com wrote:

 Hi JP,
 my bet is that this is a writing permissions issue. Does nova has the right 
 to write within the mounted directory?
 
 Razique Mahroua - Nuage  Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15
 
 NUAGECO-LOGO-Fblan_petit.jpg
 
 Le 11 avr. 2013 à 16:36, John Paul Walters jwalt...@isi.edu a écrit :
 
 Hi,
 
 We've started implementing a Glusterfs-based solution for instance storage 
 in order to provide live migration.  I've run into a strange problem when 
 using a multi-node Gluster setup that I hope someone has a suggestion to 
 resolve.
 
 I have a 12 node distributed/replicated Gluster cluster.  I can mount it to 
 my client machines, and it seems to be working alright.  When I launch 
 instances, the nova-compute log on the client machines are giving me two 
 error messages:
 
 First is a qemu-kvm error: could not open disk image 
 /exports/instances/instances/instance-0242/disk: Invalid argument
 (full output at http://pastebin.com/i8vzWegJ)
 
 The second error message comes a short time later ending with 
 nova.openstack.common.rpc.amqp Invalid: Instance has already been created
 (full output at http://pastebin.com/6Ta4kkBN)
 
 This happens reliably with the multi-Gluster-node setup.  Oddly, after 
 creating a test Gluster volume composed of a single brick and single node, 
 everything works fine.
 
 Does anyone have any suggestions?
 
 thanks,
 JP
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Summit Runners

2013-04-11 Thread Eoghan Glynn

+1

Thanks,
Eoghan

- Original Message -
 G'day,
 
 
 Would anyone be interested in morning runs (5K) during the Summit in PDX next
 week?
 
 If you are, let's meet in the lobby of the Portland Hilton on Sixth Avenue at
 0600 on Monday and 0700 from Tuesday to Friday.
 
 Some of the Monday morning runners have breakfast meetings, hence the early
 start.
 
 It's a great way to see a bit of Portland!
 
 
 Luke
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] using Glusterfs for instance storage

2013-04-11 Thread Sylvain Bauza

Agree.
As for other shared FS, this is *highly* important to make sure Nova UID 
and GID are consistent in between all compute nodes.

If this is not the case, then you have to usermod all instances...

-Sylvain

Le 11/04/2013 16:49, Razique Mahroua a écrit :

Hi JP,
my bet is that this is a writing permissions issue. Does nova has the 
right to write within the mounted directory?


*Razique Mahroua** - **Nuage  Co*
razique.mahr...@gmail.com mailto:razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15



Le 11 avr. 2013 à 16:36, John Paul Walters jwalt...@isi.edu 
mailto:jwalt...@isi.edu a écrit :



Hi,

We've started implementing a Glusterfs-based solution for instance 
storage in order to provide live migration.  I've run into a strange 
problem when using a multi-node Gluster setup that I hope someone has 
a suggestion to resolve.


I have a 12 node distributed/replicated Gluster cluster.  I can mount 
it to my client machines, and it seems to be working alright.  When I 
launch instances, the nova-compute log on the client machines are 
giving me two error messages:


First is a qemu-kvm error: could not open disk image 
/exports/instances/instances/instance-0242/disk: Invalid argument

(full output at http://pastebin.com/i8vzWegJ)

The second error message comes a short time later ending with 
nova.openstack.common.rpc.amqp Invalid: Instance has already been created

(full output at http://pastebin.com/6Ta4kkBN)

This happens reliably with the multi-Gluster-node setup.  Oddly, 
after creating a test Gluster volume composed of a single brick and 
single node, everything works fine.


Does anyone have any suggestions?

thanks,
JP


___
Mailing list: https://launchpad.net/~openstack 
https://launchpad.net/%7Eopenstack
Post to : openstack@lists.launchpad.net 
mailto:openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack 
https://launchpad.net/%7Eopenstack

More help   : https://help.launchpad.net/ListHelp




___
Mailing list:https://launchpad.net/~openstack
Post to :openstack@lists.launchpad.net
Unsubscribe :https://launchpad.net/~openstack
More help   :https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] using Glusterfs for instance storage

2013-04-11 Thread John Paul Walters
Hi Sylvain,

I agree, though I've confirmed that the UID and GID are consistent across both 
the compute nodes and my Glusterfs nodes. 

JP


On Apr 11, 2013, at 11:22 AM, Sylvain Bauza sylvain.ba...@digimind.com wrote:

 Agree.
 As for other shared FS, this is *highly* important to make sure Nova UID and 
 GID are consistent in between all compute nodes. 
 If this is not the case, then you have to usermod all instances...
 
 -Sylvain
 
 Le 11/04/2013 16:49, Razique Mahroua a écrit :
 Hi JP,
 my bet is that this is a writing permissions issue. Does nova has the right 
 to write within the mounted directory?
 
 Razique Mahroua - Nuage  Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15
 
 
 
 Le 11 avr. 2013 à 16:36, John Paul Walters jwalt...@isi.edu a écrit :
 
 Hi,
 
 We've started implementing a Glusterfs-based solution for instance storage 
 in order to provide live migration.  I've run into a strange problem when 
 using a multi-node Gluster setup that I hope someone has a suggestion to 
 resolve.
 
 I have a 12 node distributed/replicated Gluster cluster.  I can mount it to 
 my client machines, and it seems to be working alright.  When I launch 
 instances, the nova-compute log on the client machines are giving me two 
 error messages:
 
 First is a qemu-kvm error: could not open disk image 
 /exports/instances/instances/instance-0242/disk: Invalid argument
 (full output at http://pastebin.com/i8vzWegJ)
 
 The second error message comes a short time later ending with 
 nova.openstack.common.rpc.amqp Invalid: Instance has already been created
 (full output at http://pastebin.com/6Ta4kkBN)
 
 This happens reliably with the multi-Gluster-node setup.  Oddly, after 
 creating a test Gluster volume composed of a single brick and single node, 
 everything works fine.
 
 Does anyone have any suggestions?
 
 thanks,
 JP
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [folsom][quantum-server][horizon] error in horizon and quantum server

2013-04-11 Thread Arindam Choudhury
Hi,

I am a openstack newbie and trying to install folsom on fedora 18.

I have so far configured. 

$ openstack-status 
== Nova services ==
openstack-nova-api:   active
openstack-nova-cert:  active
openstack-nova-compute:   active
openstack-nova-network:   inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-volume:inactive (disabled on boot)
openstack-nova-conductor: inactive (disabled on boot)
== Glance services ==
openstack-glance-api: active
openstack-glance-registry:active
== Keystone service ==
openstack-keystone:   active
== Horizon service ==
openstack-dashboard:  active
== Quantum services ==
quantum-server:   active
== Support services ==
httpd:active
libvirtd: active
qpidd:active
memcached:active

The instructions I followed is in 
https://gist.github.com/arindamchoudhury/b07f886b5203b5577d83

In the dashboard when I click on networks, it does not respond.also launch 
instance takes a lot of time.

[(keystone_user)]$ quantum ext-list
[Errno 110] Connection timed out


The /var/log/quantum/server.log:
[arindam@aopcach ~]$ cat /var/log/quantum/server.log 
2013-04-11 17:45:23 INFO [quantum.common.config] Logging enabled!
2013-04-11 17:45:23 INFO [quantum.common.config] Config paste file: 
/etc/quantum/api-paste.ini
2013-04-11 17:45:23 INFO [quantum.manager] Loading Plugin: 
quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
2013-04-11 17:45:23 INFO [quantum.db.api] Database registration exception: 
(OperationalError) (2002, Can't connect to local MySQL server through socket 
'/var/lib/mysql/mysql.sock' (2)) None None
2013-04-11 17:45:24 INFO [quantum.db.api] Unable to connect to database, 
infinite attempts left. Retrying in 2 seconds
2013-04-11 17:45:26 INFO [quantum.plugins.openvswitch.ovs_quantum_plugin] 
Network VLAN ranges: {}
2013-04-11 17:45:26 INFO [quantum.openstack.common.rpc.impl_qpid] Connected 
to AMQP server on 158.109.65.21:5672
2013-04-11 17:45:26 INFO [quantum.api.extensions] Initializing extension 
manager.
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
__init__.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
extensions.pyo
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
_quotav2_model.py
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
l3.py
2013-04-11 17:45:26  WARNING [quantum.api.extensions] Loaded extension: router
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
_quotav2_model.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
_quotav2_driver.pyo
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
quotasv2.pyo
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
providernet.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
flavor.pyo
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
l3.pyo
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
quotasv2.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
l3.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
extensions.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
providernet.pyo
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
quotasv2.py
2013-04-11 17:45:26  WARNING [quantum.api.extensions] Exception loading 
extension: Invalid extension environment: quota driver 
quantum.extensions._quotav2_driver.DbQuotaDriver is needed.
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
extensions.py
2013-04-11 17:45:26  WARNING [quantum.api.extensions] Did not find expected 
name Extensions in 
/usr/lib/python2.7/site-packages/quantum/extensions/extensions.py
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
__init__.py
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
flavor.py
2013-04-11 17:45:26  WARNING [quantum.api.extensions] extension flavor not 
supported by plugin 
quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 object at 
0x2402450
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
__init__.pyo
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
_quotav2_driver.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
flavor.pyc
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading extension file: 
providernet.py
2013-04-11 17:45:26  WARNING [quantum.api.extensions] Loaded extension: provider
2013-04-11 17:45:26 INFO [quantum.api.extensions] Loading 

Re: [Openstack] EXTERNAL: Re: Injecting a specific MAC address

2013-04-11 Thread Burney, Jeffrey P (N-Engineering Service Professionals)
Thanks Rob,

We just started looking at Quantum and it does appear to do what we want.

Also found this 
https://blueprints.launchpad.net/nova/+spec/configurable-mac-addresses
which also suggests Quantum.

Thanks for the help,
Jeff


-Original Message-
From: Robert 

 Hi Stackers,



 We have a little test lab where we are working with puppet and OpenStack.
 Here's what we are trying to do on Folsom (currently not using Quantum).



 From a puppet master (also running dhcp, dns, kickstart), issue a 
 command to OpenStack controller to boot a small pxe boot image.  We 
 would like to inject a MAC address into the newly instantiated VM so 
 that when It goes to our dhcp server, it will get the correct IP and 
 be built by our kickstart server.



 We've looked at node_openstack but it does not have inject a MAC.



 I'd imagine this has been done before I just can't find anything online.



 Any help would be greatly appreciated.

The MAC address used is generated dynamically by nova and then handed out to 
either nova-network or Quantum. You may be able to manually generate the MAC by 
using Quantum and making a call to Quantum to create a port with that MAC, then 
passing the port id to nova boot.

HTH,
Rob

--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Savanna] 0.1 Release!

2013-04-11 Thread Lloyd Dewolf
On Wed, Apr 10, 2013 at 3:45 PM, Robert Collins
robe...@robertcollins.net wrote:

 On 11 April 2013 08:30, Sergey Lukjanov slukja...@mirantis.com wrote:
  Hi everybody,
 
  we finished Phase 1 of our roadmap and released the first project release!
 
  Currently Savanna has the REST API for Hadoop cluster provisioning using 
  pre-installed images and we started working on pluggable mechanism for 
  custom cluster provisioning tools.
 
  Also I'd like to note that custom pages for OpenStack Dashbord have been 
  published too.
 
  You can find more info on Savanna site:

 Savanna seems to fit into the same space as Heat (going off your
 diagram on http://savanna.mirantis.com/) - can you explain how it is
 different?


My understanding of Savanna is it's complete focus on Hadoop.

Monty also recently asked about the opportunity for Savanna to use
Heat in a thread titled Re: [openstack-dev] [EHO] Project name change
[Savanna]
http://markmail.org/message/2vre6r4kgwqhvhav

Hope that helps,
Lloyd

--
@lloyddewolf
http://www.pistoncloud.com/

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack] Open vSwitch features

2013-04-11 Thread Balamurugan V G
Hi,

While using the OVS plugin and quantum for networking, is it possible to
use some of the features of OVS like STP, LACP etc (basically the ones
listed at http://openvswitch.org/features/) via OpenStack API? Or should
one have to use OVS API to achieve most of these?

Thanks,
Balu
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-11 Thread Adam Gandelman

On 04/11/2013 10:05 AM, Derek Morton wrote:

If you've removed the Ubuntu theme you'll need to change the following
in /etc/openstack-dashboard/local_settings.py to get the default
theme working:

COMPRESS_OFFLINE = False

to

COMPRESS_OFFLINE = True

-Derek




This shouldn't be the case.  A simple purge of the theme 
openstack-dashboard-ubuntu-theme package should be all thats required to 
enable the default dashboard styling.


Just confirmed using the openstack-dashboard 1:2013.1-0ubuntu2~cloud0 
package from the Ubuntu Cloud Archive. With offline compression 
functions enabled and the openstack-ubuntu-theme-package uninstalled, 
everything works as expected and clients end up loading the following 
compressed assets which are shipped /w the package:


967e5ade6890.js
f8791faeb8f8.js
d272fede7fb7.css

If that is not the case for you please file a bug against the Horizon 
package in Ubuntu with apache error logs, stating which manifest keys 
are missing during offline compression.


Thanks
Adam




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] using Glusterfs for instance storage

2013-04-11 Thread Vishvananda Ishaya
You should check your syslog for app armor denied messages. It is possible
app armor is getting in the way here.

Vish

On Apr 11, 2013, at 8:35 AM, John Paul Walters jwalt...@isi.edu wrote:

 Hi Sylvain,
 
 I agree, though I've confirmed that the UID and GID are consistent across 
 both the compute nodes and my Glusterfs nodes. 
 
 JP
 
 
 On Apr 11, 2013, at 11:22 AM, Sylvain Bauza sylvain.ba...@digimind.com 
 wrote:
 
 Agree.
 As for other shared FS, this is *highly* important to make sure Nova UID and 
 GID are consistent in between all compute nodes. 
 If this is not the case, then you have to usermod all instances...
 
 -Sylvain
 
 Le 11/04/2013 16:49, Razique Mahroua a écrit :
 Hi JP,
 my bet is that this is a writing permissions issue. Does nova has the right 
 to write within the mounted directory?
 
 Razique Mahroua - Nuage  Co
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15
 
 
 
 Le 11 avr. 2013 à 16:36, John Paul Walters jwalt...@isi.edu a écrit :
 
 Hi,
 
 We've started implementing a Glusterfs-based solution for instance storage 
 in order to provide live migration.  I've run into a strange problem when 
 using a multi-node Gluster setup that I hope someone has a suggestion to 
 resolve.
 
 I have a 12 node distributed/replicated Gluster cluster.  I can mount it 
 to my client machines, and it seems to be working alright.  When I launch 
 instances, the nova-compute log on the client machines are giving me two 
 error messages:
 
 First is a qemu-kvm error: could not open disk image 
 /exports/instances/instances/instance-0242/disk: Invalid argument
 (full output at http://pastebin.com/i8vzWegJ)
 
 The second error message comes a short time later ending with 
 nova.openstack.common.rpc.amqp Invalid: Instance has already been created
 (full output at http://pastebin.com/6Ta4kkBN)
 
 This happens reliably with the multi-Gluster-node setup.  Oddly, after 
 creating a test Gluster volume composed of a single brick and single node, 
 everything works fine.
 
 Does anyone have any suggestions?
 
 thanks,
 JP
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] using Glusterfs for instance storage

2013-04-11 Thread Razique Mahroua
Also,you can import manually one instance and see if it boots.$ cd/exports/instances/instances/instance-0242$ virsh define libvirt.xml$ virsh start instance-0242and see if it boots, if so, we should start looking somewhere else
Razique Mahroua-Nuage  Corazique.mahr...@gmail.comTel: +33 9 72 37 94 15

Le 11 avr. 2013 à 20:14, Vishvananda Ishaya vishvana...@gmail.com a écrit :You should check your syslog for app armor denied messages. It is possibleapp armor is getting in theway here.VishOn Apr 11, 2013, at 8:35 AM, John Paul Walters jwalt...@isi.edu wrote:Hi Sylvain,I agree, though I've confirmed that the UID and GID are consistent across both the compute nodes and my Glusterfs nodes.JPOn Apr 11, 2013, at 11:22 AM, Sylvain Bauza sylvain.ba...@digimind.com wrote:
  

  
  
Agree.
  As for other shared FS, this is *highly* important to make sure
  Nova UID and GID are consistent in between all compute nodes. 
  If this is not the case, then you have to usermod all instances...
  
  -Sylvain
  
  Le 11/04/2013 16:49, Razique Mahroua a écrit:


  
  Hi JP,
  my bet is that this is a writing permissions issue. Does nova
has the right to write within the mounted directory?
  
 Razique Mahroua-Nuage  Co
razique.mahr...@gmail.com
Tel:

+33 9 72 37 94 15
  
  



  Le 11 avr. 2013 à 16:36, John Paul Walters jwalt...@isi.edu

a écrit :
  
  Hi,

We've started implementing a Glusterfs-based solution for
instance storage in order to provide live migration. I've
run into a strange problem when using a multi-node Gluster
setup that I hope someone has a suggestion to resolve.

I have a 12 node distributed/replicated Gluster cluster. I
can mount it to my client machines, and it seems to be
working alright. When I launch instances, the nova-compute
log on the client machines are giving me two error messages:

First is a qemu-kvm error: could not open disk image
/exports/instances/instances/instance-0242/disk: Invalid
argument
(full output at http://pastebin.com/i8vzWegJ)

The second error message comes a short time later ending
with nova.openstack.common.rpc.amqp Invalid: Instance has
already been created
(full output at http://pastebin.com/6Ta4kkBN)

This happens reliably with the multi-Gluster-node setup.
Oddly, after creating a test Gluster volume composed of a
single brick and single node, everything works fine.

Does anyone have any suggestions?

thanks,
JP


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
  


  
  
  
  
  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



  

___Mailing list: https://launchpad.net/~openstackPost to : openstack@lists.launchpad.netUnsubscribe : https://launchpad.net/~openstackMore help : https://help.launchpad.net/ListHelp___Mailing list: https://launchpad.net/~openstackPost to : openstack@lists.launchpad.netUnsubscribe : https://launchpad.net/~openstackMore help : https://help.launchpad.net/ListHelp___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Savanna] 0.1 Release!

2013-04-11 Thread Sergey Lukjanov
Absolutely. The main difference is that Savanna is the Hadoop oriented service. 
It aims to provide unified user-friendly API that will allow users to deploy 
Hadoop clusters fastly and without additional configurations for cluster 
provisioning.

Sergey Lukjanov

On Apr 11, 2013, at 20:21, Lloyd Dewolf lloydost...@gmail.com wrote:

 On Wed, Apr 10, 2013 at 3:45 PM, Robert Collins
 robe...@robertcollins.net wrote:
 
 On 11 April 2013 08:30, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hi everybody,
 
 we finished Phase 1 of our roadmap and released the first project release!
 
 Currently Savanna has the REST API for Hadoop cluster provisioning using 
 pre-installed images and we started working on pluggable mechanism for 
 custom cluster provisioning tools.
 
 Also I'd like to note that custom pages for OpenStack Dashbord have been 
 published too.
 
 You can find more info on Savanna site:
 
 Savanna seems to fit into the same space as Heat (going off your
 diagram on http://savanna.mirantis.com/) - can you explain how it is
 different?
 
 
 My understanding of Savanna is it's complete focus on Hadoop.
 
 Monty also recently asked about the opportunity for Savanna to use
 Heat in a thread titled Re: [openstack-dev] [EHO] Project name change
 [Savanna]
 http://markmail.org/message/2vre6r4kgwqhvhav
 
 Hope that helps,
 Lloyd
 
 --
 @lloyddewolf
 http://www.pistoncloud.com/


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] DevStack, Xen, VirtualBox

2013-04-11 Thread Aaron Paradowski
Hi,

I'm trying to follow the guide posted here 
https://wiki.openstack.org/wiki/XenServer/VirtualBox except using 
https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md when 
it comes to DevStack. I get a DHCP error each time when deploying the machine. 
I have tried multiple times. I really need this up and running ASAP. I am more 
than willing to have a Skype call and remote desktop for someone to access my 
machine and show me how to complete this process successfully. I'm sure I'm a 
few settings away but I can't figure out where.

Any help is greatly appreciated and I'm really desperate for this help to 
complete an academic project.

Many thanks in advance!

Aaron

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DevStack, Xen, VirtualBox

2013-04-11 Thread Bob Ball
Hi Aaron,

Check that the bridge settings on the virtual box VM are set as in the guide - 
particularly promiscuous mode must be enabled.

The other settings are in the 'Installing XCP' section.

Thanks,

Bob

Aaron Paradowski aa...@paradowski.co.uk wrote:


Hi,

I’m trying to follow the guide posted here 
https://wiki.openstack.org/wiki/XenServer/VirtualBox except using 
https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md when 
it comes to DevStack. I get a DHCP error each time when deploying the machine. 
I have tried multiple times. I really need this up and running ASAP. I am more 
than willing to have a Skype call and remote desktop for someone to access my 
machine and show me how to complete this process successfully. I’m sure I’m a 
few settings away but I can’t figure out where.

Any help is greatly appreciated and I’m really desperate for this help to 
complete an academic project.

Many thanks in advance!

Aaron

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] ODS schedule app for Android?

2013-04-11 Thread Eric Windisch
Is there a good reason there is no Android app published for ODS? 
I've use the website on my phone in the past and it is okay, but not great.

I see there is an iOS app, but sched.org offers applications for both 
ecosystems.

Regards,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ODS schedule app for Android?

2013-04-11 Thread Stefano Maffulli

On Thu 11 Apr 2013 03:22:46 PM PDT, Eric Windisch wrote:

Is there a good reason there is no Android app published for ODS?


I have no idea... is there supposed to be one?

I use no app, I'm in the I can't stand apps phase. I get the calendar 
via .ics in my calendar applications. The ics feed can be accessed from 
the mobile url:


http://openstacksummitapril2013.sched.org/mobile-site#.UWc-XkmJSoM

HTH
stef

--
Ask and answer questions on https://ask.openstack.org

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ODS schedule app for Android?

2013-04-11 Thread Eric Windisch


On Thursday, April 11, 2013 at 18:54 PM, Stefano Maffulli wrote:

 On Thu 11 Apr 2013 03:22:46 PM PDT, Eric Windisch wrote:
  Is there a good reason there is no Android app published for ODS?
 
 
 I have no idea... is there supposed to be one?
 
 I use no app, I'm in the I can't stand apps phase. I get the calendar 
 via .ics in my calendar applications. The ics feed can be accessed from 
 the mobile url:
 
 http://openstacksummitapril2013.sched.org/mobile-site#.UWc-XkmJSoM
 

Right, there are viable alternatives to the app. However, the question is, 
since there *is* an app available from sched.org for organizers, and the iOS 
app is being made available to attendees, the question is: why the disparity?

Regards,
Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ODS schedule app for Android?

2013-04-11 Thread Eric Windisch
Someone has just informed me of the @OpenStack twitter feed which reads: 

Coming to the Summit? Download the new iphone app! (Android coming soon) 
awe.sm/dE87S (http://awe.sm/dE87S)

Case closed, I guess ;-) 

Regards,
Eric Windisch


On Thursday, April 11, 2013 at 19:48 PM, Eric Windisch wrote:

 
 
 On Thursday, April 11, 2013 at 18:54 PM, Stefano Maffulli wrote:
 
  On Thu 11 Apr 2013 03:22:46 PM PDT, Eric Windisch wrote:
   Is there a good reason there is no Android app published for ODS?
  
  
  I have no idea... is there supposed to be one?
  
  I use no app, I'm in the I can't stand apps phase. I get the calendar 
  via .ics in my calendar applications. The ics feed can be accessed from 
  the mobile url:
  
  http://openstacksummitapril2013.sched.org/mobile-site#.UWc-XkmJSoM
  
 
 Right, there are viable alternatives to the app. However, the question is, 
 since there *is* an app available from sched.org (http://sched.org) for 
 organizers, and the iOS app is being made available to attendees, the 
 question is: why the disparity?
 
 Regards,
 Eric Windisch
 
 
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ceilometer-agent-central starting fail

2013-04-11 Thread Liu Wenmao
Thanks, the ceilometer seems to lack some default options in configuration
files and the official guidance. (
http://docs.openstack.org/developer/ceilometer/configuration.html)

So maybe it is not ready for users yet?


On Wed, Apr 10, 2013 at 8:28 PM, Doug Hellmann
doug.hellm...@dreamhost.comwrote:




 On Wed, Apr 10, 2013 at 6:10 AM, Liu Wenmao marvel...@gmail.com wrote:

 Actually this is not over.

 The main reason of service failure is that central/manager.py
 and service.py use different vairables:

 central/manager.py
  70 def interval_task(self, task):
  71 self.keystone = ksclient.Client(
  72 username=cfg.CONF.*os_username*,
  73 password=cfg.CONF.os_password,
  74 tenant_id=cfg.CONF.os_tenant_id,
  75 tenant_name=cfg.CONF.os_tenant_name,
  76 auth_url=cfg.CONF.os_auth_url)

 44 CLI_OPTIONS = [
  45 cfg.StrOpt('*os-username*',
  46default=os.environ.get('OS_USERNAME', 'ceilometer'),
  47help='Username to use for openstack service access'),
  48 cfg.StrOpt('os-password',
  49default=os.environ.get('OS_PASSWORD', 'admin'),
  50help='Password to use for openstack service access'),
  51 cfg.StrOpt('os-tenant-id',
  52default=os.environ.get('OS_TENANT_ID', ''),
  53help='Tenant ID to use for openstack service access'),
  54 cfg.StrOpt('os-tenant-name',
  55default=os.environ.get('OS_TENANT_NAME', 'admin'),
  56help='Tenant name to use for openstack service
 access'),
  57 cfg.StrOpt('os_auth_url',
  58default=os.environ.get('OS_AUTH_URL',
  59   'http://localhost:5000/v2.0'),

 So after I change all - to _ and modify all options in
 /etc/ceilometer/ceilometer.conf, the service starts OK.


 The thing that fixed it was changing - to _ in your configuration
 file. The options library allows option names to have - in them so they
 look nice as command line switches, but the option name uses the _.

 Doug





 On Wed, Apr 10, 2013 at 2:02 PM, Liu Wenmao marvel...@gmail.com wrote:

 I solve this problem by two steps:

 1 modify /etc/init/ceilometer-agent-central.conf
 exec start-stop-daemon --start --chuid ceilometer --exec
 /usr/local/bin/ceilometer-agent-central --
 --config-file=/etc/ceilometer/ceilometer.conf
 2 add some lines to /etc/ceilometer/ceilometer.conf:
 os-username=ceilometer
 os-password=nsfocus
 os-tenant-name=service
 os-auth-url=http://controller:5000/v2.0



 On Wed, Apr 10, 2013 at 1:36 PM, Liu Wenmao marvel...@gmail.com wrote:

 Hi all:

 I have just install ceilometer grizzly github version, but fail to
 start ceilometer-agent-central service. I think it is due to that I didn't
 set up the keystone user/password like other projects. but I follow the
 instructions(
 http://docs.openstack.org/developer/ceilometer/install/manual.html#configuring-keystone-to-work-with-api)
 but it does not include the ceilometer configuration.

 # service ceilometer-agent-central start
 ceilometer-agent-central start/running, process 5679

 # cat /etc/init/ceilometer-agent-central.conf
 description ceilometer-agent-compute
 author Chuck Short zul...@ubuntu.com

 start on runlevel [2345]
 stop on runlelvel [!2345]

 chdir /var/run

 pre-start script
 mkdir -p /var/run/ceilometer
 chown ceilometer:ceilometer /var/run/ceilometer

 mkdir -p /var/lock/ceilometer
 chown ceilometer:ceilometer /var/lock/ceilometer
 end script

 exec start-stop-daemon --start --chuid ceilometer --exec
 /usr/local/bin/ceilometer-agent-central


 /var/log/ceilometer/ceilometer-agent-central.log
 2013-04-10 13:01:39ERROR [ceilometer.openstack.common.loopingcall]
 in looping call
 Traceback (most recent call last):
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/openstack/common/loopingcall.py,
 line 67, in _inner
 self.f(*self.args, **self.kw)
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/central/manager.py,
 line 76, in interval_task
 auth_url=cfg.CONF.os_auth_url)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 134, in __init__
 self.authenticate()
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py,
 line 205, in authenticate
 token)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 174, in get_raw_token_from_identity_servicetoken=token)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 202, in _base_authN
 resp, body = self.request(url, 'POST', body=params, headers=headers)
   File
 

[Openstack] Fwd: Re: [openStack] instance status

2013-04-11 Thread Deepak A.P
-- Forwarded message --
From: Deepak A.P swift007.dee...@gmail.com
Date: Tue, Apr 9, 2013 at 2:25 PM
Subject: Re: Re: [Openstack] [openStack] instance status
To: Wangpan hzwang...@corp.netease.com


Hello,

The output of 'nova-manage service list'command is



* Binary   Host  Zone
StatusState  Updated_At*

nova-computeopenStack   nova
 Enabled   XXX   2012-04-03

nova-network openStack   nova
 Enabled   XXX  2012-04-03

nova-scheduler   openStack  nova
 Enabled  XXX2012-04-05

 nova-cert  openStack   nova
 Enabled   XXX   2012-04-05






On Tue, Apr 9, 2013 at 12:21 PM, Wangpan hzwang...@corp.netease.com wrote:

 **
 What the result of running 'nova-manage service list', 'sudo' may be
 needed by non-root user.

 2013-04-09
  --
  Wangpan
  --
  *发件人:*Deepak A.P
 *发送时间:*2013-04-09 13:54
 *主题:*Re: [Openstack] [openStack] instance status
 *收件人:*sachin tripathisachinku...@gmail.com
 *抄送:*OpenStack Mailing Listopenstack@lists.launchpad.net

  Below is output which i got on running '*nova console-log*'

 nova console-log *d46530e2-.*

 Error: The server has either erred or either incapable of performing the
 requested operation. HTTP(500) (Request-ID: req-be41c539...)

 i tried the other nova commands like 'nova reboot' , ' nova reload' but
 got the similar error,

 and logs are not created under '/var/log/nova/'

 stuck with the above issue



 On Tue, Apr 9, 2013 at 10:32 AM, sachin tripathi sachinku...@gmail.comwrote:

Hello,
 You can check the console logs
 nova console-log instance_id

 On the api node, you can find /var/log/nova/nova-*.log,

 And from nova  show instance_id  get the hypervisor hostname and
 check the hypervisor compute-log too.

 Hope, this will give some info to start troubleshoot.

 +Sachin



  On Tue, Apr 9, 2013 at 10:03 AM, Deepak A.P 
 swift007.dee...@gmail.comwrote:

  Hello,

 After launching the instance i waited for long time , but still
 the status of the instance  shows 'BUILD' , the flavor of the image is
 'm1.small' . Is there any work around to know what's happening with the

 instance ?. Are there any logs where i can check the instance
 launch status ?





 On Sat, Apr 6, 2013 at 12:01 AM, Lloyd Dewolf lloydost...@gmail.comwrote:

  On Fri, Apr 5, 2013 at 4:18 AM, Deepak A.P 
 swift007.dee...@gmail.comwrote:


 Hi ,

  i have a list of instances created using the below command

  nova boot myInstance --image 0e2f43a8-e614-48ff-92bd-be0c68da19f4

--flavor 2 --key_name openstack


i ran the below command to check the status of instances


 nova list


 all the instances show status as  *BUILD* , how to se the status of the 
 image to

 ACTIVE  , i tried rebooting the instance am getting the below error


 Once the instance finishes building then it will be in the active
 state. Depending on the image, flavor and configuration starting an
 instance can take a long time. I would suggest first trying with a small
 image like Cirrus and using a tiny flavor.

 http://docs.openstack.org/trunk/openstack-compute/admin/content/starting-images.html

 Hope that helps,
 --
 @lloyddewolf
 http://www.pistoncloud.com/




 --
 Cheers,
 Deepak

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Cheers,
 Deepak




-- 
Cheers,
Deepak



-- 
Cheers,
Deepak
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Fwd: Re: [openStack] instance status

2013-04-11 Thread Aaron Rosen
Do you have NTP configured? If the nodes running nova-compute have clocks
that differ from each other the status shows XXX . (Not sure why it's done
this way though).

Aaron


On Thu, Apr 11, 2013 at 10:44 PM, Deepak A.P swift007.dee...@gmail.comwrote:



 -- Forwarded message --
 From: Deepak A.P swift007.dee...@gmail.com
 Date: Tue, Apr 9, 2013 at 2:25 PM
 Subject: Re: Re: [Openstack] [openStack] instance status
 To: Wangpan hzwang...@corp.netease.com


 Hello,

 The output of 'nova-manage service list'command is



 * Binary   Host  Zone
   StatusState  Updated_At*

 nova-computeopenStack   nova
  Enabled   XXX   2012-04-03

 nova-network openStack   nova
Enabled   XXX  2012-04-03

 nova-scheduler   openStack  nova
  Enabled  XXX2012-04-05

  nova-cert  openStack   nova
  Enabled   XXX   2012-04-05






 On Tue, Apr 9, 2013 at 12:21 PM, Wangpan hzwang...@corp.netease.comwrote:

 **
 What the result of running 'nova-manage service list', 'sudo' may be
 needed by non-root user.

 2013-04-09
  --
  Wangpan
  --
  *发件人:*Deepak A.P
 *发送时间:*2013-04-09 13:54
 *主题:*Re: [Openstack] [openStack] instance status
 *收件人:*sachin tripathisachinku...@gmail.com
 *抄送:*OpenStack Mailing Listopenstack@lists.launchpad.net

  Below is output which i got on running '*nova console-log*'

 nova console-log *d46530e2-.*

 Error: The server has either erred or either incapable of performing the
 requested operation. HTTP(500) (Request-ID: req-be41c539...)

 i tried the other nova commands like 'nova reboot' , ' nova reload' but
 got the similar error,

 and logs are not created under '/var/log/nova/'

 stuck with the above issue



 On Tue, Apr 9, 2013 at 10:32 AM, sachin tripathi 
 sachinku...@gmail.comwrote:

Hello,
 You can check the console logs
 nova console-log instance_id

 On the api node, you can find /var/log/nova/nova-*.log,

 And from nova  show instance_id  get the hypervisor hostname and
 check the hypervisor compute-log too.

 Hope, this will give some info to start troubleshoot.

 +Sachin



  On Tue, Apr 9, 2013 at 10:03 AM, Deepak A.P 
 swift007.dee...@gmail.comwrote:

  Hello,

 After launching the instance i waited for long time , but still
 the status of the instance  shows 'BUILD' , the flavor of the image is
 'm1.small' . Is there any work around to know what's happening with the

 instance ?. Are there any logs where i can check the instance
 launch status ?





 On Sat, Apr 6, 2013 at 12:01 AM, Lloyd Dewolf lloydost...@gmail.comwrote:

  On Fri, Apr 5, 2013 at 4:18 AM, Deepak A.P swift007.dee...@gmail.com
  wrote:


 Hi ,

  i have a list of instances created using the below command

  nova boot myInstance --image
 0e2f43a8-e614-48ff-92bd-be0c68da19f4

--flavor 2 --key_name openstack


i ran the below command to check the status of instances


 nova list


 all the instances show status as  *BUILD* , how to se the status of 
 the image to

 ACTIVE  , i tried rebooting the instance am getting the below error


 Once the instance finishes building then it will be in the active
 state. Depending on the image, flavor and configuration starting an
 instance can take a long time. I would suggest first trying with a small
 image like Cirrus and using a tiny flavor.

 http://docs.openstack.org/trunk/openstack-compute/admin/content/starting-images.html

 Hope that helps,
 --
 @lloyddewolf
 http://www.pistoncloud.com/




 --
 Cheers,
 Deepak

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





 --
 Cheers,
 Deepak




 --
 Cheers,
 Deepak



 --
 Cheers,
 Deepak

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: quantal_folsom_deploy_proposed #9

2013-04-11 Thread openstack-testing-bot
Title: quantal_folsom_deploy_proposed
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/quantal_folsom_deploy_proposed/9/Project:quantal_folsom_deploy_proposedDate of build:Thu, 11 Apr 2013 04:07:11 -0400Build duration:57 minBuild cause:Started by user James PageBuilt on:masterHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesNo ChangesBuild Artifactslogs/nonelogs/test-03.os.magners.qa.lexington-log.tar.gzlogs/test-04.os.magners.qa.lexington-log.tar.gzlogs/test-05.os.magners.qa.lexington-log.tar.gzlogs/test-06.os.magners.qa.lexington-log.tar.gzlogs/test-07.os.magners.qa.lexington-log.tar.gzlogs/test-08.os.magners.qa.lexington-log.tar.gzlogs/test-09.os.magners.qa.lexington-log.tar.gzlogs/test-11.os.magners.qa.lexington-log.tar.gzlogs/test-12.os.magners.qa.lexington-log.tar.gzConsole Output[...truncated 11824 lines...]INFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-12.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-08.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-09.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-04.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-05.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-11.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-03.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-06.os.magners.qa.lexingtonINFO:paramiko.transport:Secsh channel 2 opened.INFO:root:Archiving logs on test-02.os.magners.qa.lexingtonWARNING:paramiko.transport:Oops, unhandled type 3ERROR:root:Coult not create tarball of logs on test-02.os.magners.qa.lexingtonINFO:root:Grabbing information from test-07.os.magners.qa.lexingtonINFO:root:Grabbing information from test-12.os.magners.qa.lexingtonINFO:root:Grabbing information from test-08.os.magners.qa.lexingtonINFO:root:Grabbing information from test-09.os.magners.qa.lexingtonINFO:root:Grabbing information from test-04.os.magners.qa.lexingtonINFO:root:Grabbing information from test-05.os.magners.qa.lexingtonINFO:root:Grabbing information from test-11.os.magners.qa.lexingtonINFO:root:Grabbing information from test-03.os.magners.qa.lexingtonINFO:root:Grabbing information from test-06.os.magners.qa.lexingtonINFO:root:Grabbing information from test-02.os.magners.qa.lexingtonERROR:root:Unable to get information from test-02.os.magners.qa.lexingtonINFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.INFO:paramiko.transport.sftp:[chan 1] sftp session closed.Traceback (most recent call last):  File "/var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py", line 88, in connections[host]["sftp"].close()KeyError: 'sftp'+ exit 1Build step 'Execute shell' marked build as failureArchiving artifactsEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #62

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/62/Project:precise_havana_nova_trunkDate of build:Thu, 11 Apr 2013 08:31:34 -0400Build duration:59 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesRemove undefined name pyflake errorsby stanislaw.pituchaeditnova/tests/test_powervm.pyeditnova/cmd/baremetal_manage.pyeditnova/cmd/dhcpbridge.pyedittools/run_pep8.sheditnova/cmd/baremetal_deploy_helper.pyeditnova/tests/test_db_api.pyConsole OutputStarted by an SCM changeBuilding remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunkCheckout:precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builderUsing strategy: DefaultLast Built Revision: Revision 6b2af9c084754a1e678f741bfc6b97e13f1cf8a5 (origin/master)Checkout:nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94Wiping out workspace first.Cloning the remote Git repositoryCloning repository originFetching upstream changes from https://github.com/openstack/nova.gitCommencing build of Revision 284cc009175f0d87683dc6a98ee997e8476ef87c (origin/master)Checking out Revision 284cc009175f0d87683dc6a98ee997e8476ef87c (origin/master)No emails were triggered.[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson155862896587948837.sh+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-paramsBuild step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: quantal_folsom_deploy_proposed #10

2013-04-11 Thread openstack-testing-bot
Title: quantal_folsom_deploy_proposed
General InformationBUILD SUCCESSBuild URL:https://jenkins.qa.ubuntu.com/job/quantal_folsom_deploy_proposed/10/Project:quantal_folsom_deploy_proposedDate of build:Thu, 11 Apr 2013 09:07:27 -0400Build duration:16 minBuild cause:Started by user James PageBuilt on:masterHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesNo ChangesBuild Artifactslogs/nonelogs/test-03.os.magners.qa.lexington-log.tar.gzlogs/test-04.os.magners.qa.lexington-log.tar.gzlogs/test-05.os.magners.qa.lexington-log.tar.gzlogs/test-06.os.magners.qa.lexington-log.tar.gzlogs/test-07.os.magners.qa.lexington-log.tar.gzlogs/test-08.os.magners.qa.lexington-log.tar.gzlogs/test-09.os.magners.qa.lexington-log.tar.gzlogs/test-11.os.magners.qa.lexington-log.tar.gzlogs/test-12.os.magners.qa.lexington-log.tar.gzConsole Output[...truncated 2845 lines...]  -> Relation: openstack-dashboard <-> keystone  -> Relation: nova-compute <-> mysql  -> Relation: nova-compute <-> rabbitmq  -> Relation: nova-compute <-> glance  -> Relation: nova-compute <-> nova-cloud-controller  -> Relation: nova-compute <-> ceph- Sleeping for 60 before ensuring relation state.- Deployment complete in 903 seconds.- Juju command log:juju deploy -n 1 --config=/tmp/tmptfGmoz --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:nova-compute nova-compute -e quantaljuju deploy -n 1 --config=/tmp/tmptfGmoz --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:nova-cloud-controller nova-cloud-controller -e quantaljuju deploy -n 1 --config=/tmp/tmptfGmoz --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:ceph ceph -e quantaljuju deploy -n 1 --config=/tmp/tmptfGmoz --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:keystone keystone -e quantaljuju deploy -n 1 --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:rabbitmq-server rabbitmq -e quantaljuju deploy -n 1 --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:mysql mysql -e quantaljuju deploy -n 1 --config=/tmp/tmptfGmoz --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:openstack-dashboard openstack-dashboard -e quantaljuju deploy -n 1 --config=/tmp/tmptfGmoz --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:cinder cinder -e quantaljuju deploy -n 1 --config=/tmp/tmptfGmoz --repository=/var/lib/jenkins/jobs/quantal_folsom_deploy_proposed/workspace local:glance glance -e quantaljuju add-relation keystone mysql -e quantaljuju add-relation nova-cloud-controller mysql -e quantaljuju add-relation nova-cloud-controller rabbitmq -e quantaljuju add-relation nova-cloud-controller glance -e quantaljuju add-relation nova-cloud-controller keystone -e quantaljuju add-relation glance mysql -e quantaljuju add-relation glance keystone -e quantaljuju add-relation glance ceph -e quantaljuju add-relation cinder mysql -e quantaljuju add-relation cinder rabbitmq -e quantaljuju add-relation cinder nova-cloud-controller -e quantaljuju add-relation cinder keystone -e quantaljuju add-relation cinder ceph -e quantaljuju add-relation openstack-dashboard keystone -e quantaljuju add-relation nova-compute mysql -e quantaljuju add-relation nova-compute rabbitmq -e quantaljuju add-relation nova-compute glance -e quantaljuju add-relation nova-compute nova-cloud-controller -e quantaljuju add-relation nova-compute ceph -e quantal+ rc=0+ echo 'Deployer returned: 0'Deployer returned: 0+ [[ 0 != 0 ]]+ jenkins-cli build folsom_coverage+ exit 0Archiving artifactsEmail was triggered for: FixedTrigger Success was overridden by another trigger and will not send an email.Sending email for trigger: Fixed-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_python-novaclient_trunk #103

2013-04-11 Thread openstack-testing-bot
Title: precise_grizzly_python-novaclient_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-novaclient_trunk/103/Project:precise_grizzly_python-novaclient_trunkDate of build:Thu, 11 Apr 2013 12:01:32 -0400Build duration:2 min 19 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 3 out of the last 5 builds failed.40ChangesMake --vlan option work in network-create in VLAN modeby xu-haiweiedittests/v1_1/test_shell.pyeditnovaclient/v1_1/shell.pyConsole Output[...truncated 1879 lines...]  Uploading python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1.debian.tar.gz: done.  Uploading python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_source.changes: done.Successfully uploaded packages.INFO:root:Installing build artifacts into /var/lib/jenkins/www/aptDEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_amd64.changes']File "pool/main/p/python-novaclient/python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_all.deb" is already registered with different checksums!md5 expected: ab6c42dded17a46e09a551e27a9b, got: 39730ddebc65cf80142ce2a74ad36650sha1 expected: cc5230a22429759212ff00fef1f3798731b112ef, got: ee69d6033de0b7d95ba7c9adc43b969eed2978desha256 expected: 9eaa6b2da4fecd210d234f88db9fd5ca8435efa9882dc1cfb2964736606e8e7b, got: 147d6678c46b1b0359588c5e5c2d221519c530363c9e2c0080eec9ffb35be66esize expected: 84914, got: 84930There have been errors!ERROR:root:Error occurred during package creation/build: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254ERROR:root:Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/python-novaclient/grizzly /tmp/tmpX8dppp/python-novaclientmk-build-deps -i -r -t apt-get -y /tmp/tmpX8dppp/python-novaclient/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 69f9971da54b46a8883148e4cef6346c7933b6ec..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2.13.0.10.gc230812+git201304111201~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [c230812] Make --vlan option work in network-create in VLAN modedch -a [1216a32] Fixing shell command 'service-disable' descriptiondch -a [8ce2330] Fix problem with nova --versiondebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/grizzly-trunk-testing python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_amd64.changesTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.10.gc230812+git201304111201~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #63

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/63/Project:precise_havana_nova_trunkDate of build:Thu, 11 Apr 2013 15:01:35 -0400Build duration:27 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0Changesbaremetal: Change node api related to prov_mac_addressby notsueditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-req.xml.tpleditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-req.json.tpleditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-resp.json.tpladddoc/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-resp.xmleditdoc/api_samples/os-baremetal-nodes/baremetal-node-show-resp.jsonadddoc/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-req.jsoneditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-list-resp.json.tpleditnova/tests/integrated/test_api_samples.pyeditdoc/api_samples/os-baremetal-nodes/baremetal-node-list-resp.xmleditnova/api/openstack/compute/contrib/baremetal_nodes.pyeditdoc/api_samples/os-baremetal-nodes/baremetal-node-create-resp.xmladdnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-req.json.tpleditnova/tests/api/openstack/compute/contrib/test_baremetal_nodes.pyeditdoc/api_samples/os-baremetal-nodes/baremetal-node-list-resp.jsoneditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-show-resp.xml.tpladdnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-resp.xml.tpleditdoc/api_samples/os-baremetal-nodes/baremetal-node-create-req.xmleditdoc/api_samples/os-baremetal-nodes/baremetal-node-create-req.jsoneditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-list-resp.xml.tpladddoc/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-resp.jsonadddoc/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-req.xmleditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-show-resp.json.tpleditdoc/api_samples/os-baremetal-nodes/baremetal-node-create-resp.jsoneditnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-resp.xml.tpladdnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-req.xml.tpladdnova/tests/integrated/api_samples/os-baremetal-nodes/baremetal-node-create-with-address-resp.json.tpleditdoc/api_samples/os-baremetal-nodes/baremetal-node-show-resp.xmlConsole OutputStarted by an SCM changeBuilding remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunkCheckout:precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builderUsing strategy: DefaultLast Built Revision: Revision 284cc009175f0d87683dc6a98ee997e8476ef87c (origin/master)Checkout:nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94Wiping out workspace first.Cloning the remote Git repositoryCloning repository originFetching upstream changes from https://github.com/openstack/nova.gitCommencing build of Revision f18cdb5291cf7847d4fc7df322bee3e89294f3c4 (origin/master)Checking out Revision f18cdb5291cf7847d4fc7df322bee3e89294f3c4 (origin/master)No emails were triggered.[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson150329178465232532.sh+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-paramsBuild step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_nova_trunk #977

2013-04-11 Thread openstack-testing-bot
Title: precise_grizzly_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/977/Project:precise_grizzly_nova_trunkDate of build:Thu, 11 Apr 2013 15:31:35 -0400Build duration:35 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 4 out of the last 5 builds failed.20ChangesSet defaultbranch in .gitreview to stable/grizzlyby pbradyedit.gitreviewConsole OutputStarted by an SCM changeBuilding remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_grizzly_nova_trunkCheckout:precise_grizzly_nova_trunk / /var/lib/jenkins/slave/workspace/precise_grizzly_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builderUsing strategy: DefaultLast Built Revision: Revision 4216ba7971caa0939461000a084014b91656e77c (remotes/origin/stable/grizzly)Checkout:nova / /var/lib/jenkins/slave/workspace/precise_grizzly_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94Wiping out workspace first.Cloning the remote Git repositoryCloning repository originFetching upstream changes from https://github.com/openstack/nova.gitCommencing build of Revision fd665452397b31c7391cdffb5b90986593427e19 (remotes/origin/stable/grizzly)Checking out Revision fd665452397b31c7391cdffb5b90986593427e19 (remotes/origin/stable/grizzly)No emails were triggered.[precise_grizzly_nova_trunk] $ /bin/sh -xe /tmp/hudson6087700598260338002.sh+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-paramsBuild step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #18

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/18/Project:precise_havana_cinder_trunkDate of build:Thu, 11 Apr 2013 15:31:33 -0400Build duration:1 min 21 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesFix incompatible Storwize/SVC commands.by avishayeditcinder/volume/drivers/storwize_svc.pyeditcinder/tests/test_storwize_svc.pyConsole Output[...truncated 1380 lines...]DEBUG:root:['bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']Building using working treeBuilding package in merge modeLooking for a way to retrieve the upstream tarballUsing the upstream tarball that is present in /tmp/tmp9KYyu8bzr: ERROR: An error (1) occurred running quilt: Applying patch fix_cinder_dependencies.patchpatching file tools/pip-requiresHunk #1 FAILED at 18.1 out of 1 hunk FAILED -- rejects in file tools/pip-requiresPatch fix_cinder_dependencies.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-8c6b8c49-6184-481a-9c4d-05302f2ef5a5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-8c6b8c49-6184-481a-9c4d-05302f2ef5a5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmp9KYyu8/cindermk-build-deps -i -r -t apt-get -y /tmp/tmp9KYyu8/cinder/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log -n5 --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304111531~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [56a49ad] fix default config option typesdch -a [f38f943] Fix incompatible Storwize/SVC commands.dch -a [23bd028] Fix backup manager formatting error.dch -a [1c77c54] Add service list functionality cinder-managedch -a [6d7a681] Clean up attach/detach tests.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-8c6b8c49-6184-481a-9c4d-05302f2ef5a5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-8c6b8c49-6184-481a-9c4d-05302f2ef5a5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_folsom_nova_stable #724

2013-04-11 Thread openstack-testing-bot
Title: precise_folsom_nova_stable
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_folsom_nova_stable/724/Project:precise_folsom_nova_stableDate of build:Thu, 11 Apr 2013 15:31:35 -0400Build duration:2 min 10 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesFinal versioning for 2012.2.4by apeveceditnova/version.pyedittools/test-requiresConsole Output[...truncated 2484 lines...]Hunk #10 succeeded at 1184 with fuzz 1 (offset 24 lines).6 out of 10 hunks FAILED -- rejects in file nova/tests/test_quota.pyPatch CVE-2013-1838.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-068c0192-8d65-4a29-90af-d2dff6d28f3d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-068c0192-8d65-4a29-90af-d2dff6d28f3d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/folsom /tmp/tmpPC_EET/novamk-build-deps -i -r -t apt-get -y /tmp/tmpPC_EET/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 5b43cef510b68cff1f6e2f80742d3204b0b51e45..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 2012.2.4+git201304111532~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [49931ce] Final versioning for 2012.2.4dch -a [975a312] Fix Network object encoding issue when using qpiddch -a [056a7df] Use format_message on exceptions instead of str()dch -a [c4c417e] Set default fixed_ip quota to unlimited.dch -a [8f8ef21] Add a format_message method to the Exceptionsdch -a [c85683e] Adding netmask to dnsmasq argument --dhcp-rangedch -a [50dece6] Fix Wrong syntax for set:tag in dnsmasq startup optiondch -a [2dd8f3e] LibvirtHybridOVSBridgeDriver update for STPdch -a [69ba489] Fixes PowerVM spawn failed as missing attr supported_instancesdch -a [28aacf6] Fix bad Log statement in nova-managedch -a [524a5a3] Don't include traceback when wrapping exceptionsdch -a [67eb495] Decouple EC2 API from using instance iddch -a [f8c5492] libvirt: Optimize test_connection and capabilitiesdch -a [53626bf] populate dnsmasq lease db with valid leasesdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-068c0192-8d65-4a29-90af-d2dff6d28f3d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-068c0192-8d65-4a29-90af-d2dff6d28f3d', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: quantal_folsom_nova_stable #715

2013-04-11 Thread openstack-testing-bot
Title: quantal_folsom_nova_stable
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_stable/715/Project:quantal_folsom_nova_stableDate of build:Thu, 11 Apr 2013 15:32:11 -0400Build duration:3 min 26 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 4 out of the last 5 builds failed.20ChangesFinal versioning for 2012.2.4by apeveceditnova/version.pyedittools/test-requiresConsole Output[...truncated 2967 lines...]Hunk #10 succeeded at 1184 with fuzz 1 (offset 24 lines).6 out of 10 hunks FAILED -- rejects in file nova/tests/test_quota.pyPatch CVE-2013-1838.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-260038bb-c5e5-488a-9ed5-81715d079e11', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-260038bb-c5e5-488a-9ed5-81715d079e11', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/folsom /tmp/tmpSTh5g7/novamk-build-deps -i -r -t apt-get -y /tmp/tmpSTh5g7/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 5b43cef510b68cff1f6e2f80742d3204b0b51e45..HEAD --no-merges --pretty=format:[%h] %sdch -b -D quantal --newversion 2012.2.4+git201304111533~quantal-0ubuntu1 Automated Ubuntu testing build:dch -a [49931ce] Final versioning for 2012.2.4dch -a [975a312] Fix Network object encoding issue when using qpiddch -a [056a7df] Use format_message on exceptions instead of str()dch -a [c4c417e] Set default fixed_ip quota to unlimited.dch -a [8f8ef21] Add a format_message method to the Exceptionsdch -a [c85683e] Adding netmask to dnsmasq argument --dhcp-rangedch -a [50dece6] Fix Wrong syntax for set:tag in dnsmasq startup optiondch -a [2dd8f3e] LibvirtHybridOVSBridgeDriver update for STPdch -a [69ba489] Fixes PowerVM spawn failed as missing attr supported_instancesdch -a [28aacf6] Fix bad Log statement in nova-managedch -a [524a5a3] Don't include traceback when wrapping exceptionsdch -a [67eb495] Decouple EC2 API from using instance iddch -a [f8c5492] libvirt: Optimize test_connection and capabilitiesdch -a [53626bf] populate dnsmasq lease db with valid leasesdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-260038bb-c5e5-488a-9ed5-81715d079e11', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-260038bb-c5e5-488a-9ed5-81715d079e11', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #19

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/19/Project:precise_havana_cinder_trunkDate of build:Thu, 11 Apr 2013 16:31:35 -0400Build duration:1 min 33 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0Changesnew cinder.conf.sample and fix extract_opts.pyby darren.birkettedittools/conf/extract_opts.pyeditetc/cinder/cinder.conf.sampleConsole Output[...truncated 1380 lines...]DEBUG:root:['bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']Building using working treeBuilding package in merge modeLooking for a way to retrieve the upstream tarballUsing the upstream tarball that is present in /tmp/tmpu2iaNubzr: ERROR: An error (1) occurred running quilt: Applying patch fix_cinder_dependencies.patchpatching file tools/pip-requiresHunk #1 FAILED at 18.1 out of 1 hunk FAILED -- rejects in file tools/pip-requiresPatch fix_cinder_dependencies.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-38d22219-6a4a-466d-a8ae-20cab08ccaa6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-38d22219-6a4a-466d-a8ae-20cab08ccaa6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmpu2iaNu/cindermk-build-deps -i -r -t apt-get -y /tmp/tmpu2iaNu/cinder/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log -n5 --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304111631~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [2c540b3] new cinder.conf.sample and fix extract_opts.pydch -a [56a49ad] fix default config option typesdch -a [f38f943] Fix incompatible Storwize/SVC commands.dch -a [23bd028] Fix backup manager formatting error.dch -a [1c77c54] Add service list functionality cinder-managedebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-38d22219-6a4a-466d-a8ae-20cab08ccaa6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-38d22219-6a4a-466d-a8ae-20cab08ccaa6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #64

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/64/Project:precise_havana_nova_trunkDate of build:Thu, 11 Apr 2013 16:33:10 -0400Build duration:45 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesRemove multi scheduler.by rbryantdeletenova/tests/scheduler/test_multi_scheduler.pydeletenova/scheduler/multi.pyConsole OutputStarted by an SCM changeBuilding remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunkCheckout:precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@19e88093:pkg-builderUsing strategy: DefaultLast Built Revision: Revision f18cdb5291cf7847d4fc7df322bee3e89294f3c4 (origin/master)Checkout:nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@7e5bfd3dWiping out workspace first.Cloning the remote Git repositoryCloning repository originFetching upstream changes from https://github.com/openstack/nova.gitCommencing build of Revision fdcc1d20aa5a14272e9966507fa9213c2ed5ae3d (origin/master)Checking out Revision fdcc1d20aa5a14272e9966507fa9213c2ed5ae3d (origin/master)No emails were triggered.[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson4439647433991870960.sh+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-paramsBuild step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #34

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/34/Project:precise_havana_quantum_trunkDate of build:Thu, 11 Apr 2013 16:31:35 -0400Build duration:2 min 30 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 2 out of the last 5 builds failed.60ChangesImplement LB plugin delete_pool_health_monitor().by rpodolyakaeditquantum/plugins/services/agent_loadbalancer/plugin.pyConsole Output[...truncated 2991 lines...]Fail-Stage: buildHost Architecture: amd64Install-Time: 37Job: quantum_2013.2+git201304111631~precise-0ubuntu1.dscMachine Architecture: amd64Package: quantumPackage-Time: 54Source-Version: 1:2013.2+git201304111631~precise-0ubuntu1Space: 15164Status: attemptedVersion: 1:2013.2+git201304111631~precise-0ubuntu1Finished at 20130411-1634Build needed 00:00:54, 15164k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304111631~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304111631~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpxaK_pk/quantummk-build-deps -i -r -t apt-get -y /tmp/tmpxaK_pk/quantum/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log de5c1e4f281f59d550b476919b27ac4e2aae14ac..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304111631~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [8354133] Implement LB plugin delete_pool_health_monitor().dch -a [147038a] Parallelize quantum unit testing:debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.2+git201304111631~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A quantum_2013.2+git201304111631~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304111631~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304111631~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_python-novaclient_trunk #104

2013-04-11 Thread openstack-testing-bot
Title: precise_grizzly_python-novaclient_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_grizzly_python-novaclient_trunk/104/Project:precise_grizzly_python-novaclient_trunkDate of build:Thu, 11 Apr 2013 17:01:36 -0400Build duration:2 min 8 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 4 out of the last 5 builds failed.20Changesmake sure .get() also updates _infoby mikeedittests/v1_1/test_servers.pyeditnovaclient/base.pySupport force update quotaby gengjheditnovaclient/v1_1/quotas.pyedittests/v1_1/test_quotas.pyeditnovaclient/v1_1/shell.pyedittests/v1_1/test_shell.pyConsole Output[...truncated 1887 lines...]Successfully uploaded packages.INFO:root:Installing build artifacts into /var/lib/jenkins/www/aptDEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_amd64.changes']File "pool/main/p/python-novaclient/python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_all.deb" is already registered with different checksums!md5 expected: 4b4c78107e55988c958344b01b2b00b7, got: ab5b815b8e35e55c3f40a37aba1cc6b7sha1 expected: 4a4c4189e6e3271a335f342553381ff848fee1af, got: f6457c64628ac5292c6051054ca2bdfd59b146f8sha256 expected: 9d2802b324af004757afaeacb8f0cb4d0512e2248d0d3ffa557dec40a4a8d3dc, got: 722c3bd10b0c448ba9dcf8e0cf2047345e0f271dc82487098d78c87c26fc1fb9size expected: 84940, got: 85132There have been errors!ERROR:root:Error occurred during package creation/build: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254ERROR:root:Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/python-novaclient/grizzly /tmp/tmpRlUy96/python-novaclientmk-build-deps -i -r -t apt-get -y /tmp/tmpRlUy96/python-novaclient/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 69f9971da54b46a8883148e4cef6346c7933b6ec..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [c230812] Make --vlan option work in network-create in VLAN modedch -a [e8b665e] Support force update quotadch -a [2a495c0] make sure .get() also updates _infodch -a [1216a32] Fixing shell command 'service-disable' descriptiondch -a [8ce2330] Fix problem with nova --versiondebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_source.changessbuild -d precise-grizzly -n -A python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/grizzly-trunk-testing python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-grizzly python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_amd64.changesTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-grizzly', 'python-novaclient_2.13.0.14.g3a1058a+git201304111701~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_havana_python-glanceclient_trunk #5

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_python-glanceclient_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_python-glanceclient_trunk/5/Project:precise_havana_python-glanceclient_trunkDate of build:Thu, 11 Apr 2013 17:31:37 -0400Build duration:2 min 12 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 1 out of the last 5 builds failed.80ChangesFix problem running glance --versionby dimseditglanceclient/__init__.pyConsole Output[...truncated 1762 lines...]  Uploading python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1.dsc: done.  Uploading python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise.orig.tar.gz: done.  Uploading python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1.debian.tar.gz: done.  Uploading python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_source.changes: done.Successfully uploaded packages.INFO:root:Installing build artifacts into /var/lib/jenkins/www/aptDEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_amd64.changes']File "pool/main/p/python-glanceclient/python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_all.deb" is already registered with different checksums!md5 expected: bb419ad12c8933a6b598b76dc249378d, got: 0c066b50ea6a768bdc43eb8f7f50c782sha1 expected: 373e269c0f1e4d47502094ff23fa10337871742f, got: d7355fae5193429e8f1a6713cd01d0f1fd3b560bsha256 expected: 4726251f0d943e3aa93e8ca226ce1e1c5d4783833cc04f208ddbcf39f2d55d67, got: dbe12613b20b2b2e924ba70f7c35eec54f9f3d68a5d1f7251d6f0c72b48b8db1size expected: 36948, got: 36954There have been errors!ERROR:root:Error occurred during package creation/build: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254ERROR:root:Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/python-glanceclient/grizzly /tmp/tmpguMS5z/python-glanceclientmk-build-deps -i -r -t apt-get -y /tmp/tmpguMS5z/python-glanceclient/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 0995045f2a6c6179b9d3daf7e8c994e30e4e2d8c..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [b0ce15b] Fix problem running glance --versiondebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1.dscdput ppa:openstack-ubuntu-testing/havana python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_source.changesreprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_amd64.changesTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'python-glanceclient_0.9.0.2.gb0ce15b+git201304111731~precise-0ubuntu1_amd64.changes']' returned non-zero exit status 254Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_folsom_keystone_stable #102

2013-04-11 Thread openstack-testing-bot
Title: precise_folsom_keystone_stable
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_folsom_keystone_stable/102/Project:precise_folsom_keystone_stableDate of build:Thu, 11 Apr 2013 18:01:39 -0400Build duration:2 min 12 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 4 out of the last 5 builds failed.20ChangesBump version to 2012.2.5by apeveceditsetup.pyConsole Output[...truncated 1242 lines...]patching file etc/keystone.conf.sampleApplying patch CVE-2013-1865.patchpatching file keystone/service.pyHunk #1 FAILED at 490.1 out of 1 hunk FAILED -- rejects in file keystone/service.pypatching file tests/test_service.pyHunk #1 FAILED at 150.1 out of 1 hunk FAILED -- rejects in file tests/test_service.pyPatch CVE-2013-1865.patch can be reverse-appliedERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-0e93c217-5a75-44fe-a4a6-155895c26cc4', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-0e93c217-5a75-44fe-a4a6-155895c26cc4', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/keystone/folsom /tmp/tmp6arOH9/keystonemk-build-deps -i -r -t apt-get -y /tmp/tmp6arOH9/keystone/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 255b1d43500f5d98ec73a0056525b492b14fec05..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 2012.2.5+git201304111801~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [09f2802] Bump version to 2012.2.5dch -a [1889299] key all backends off of hash of pki token.dch -a [9e0a97d] Retry http_request and json_request failure.dch -a [40660f0] auth_token hash pki key PKI tokens on hash in memcached when accessed by auth_token middelwaredch -a [b3ce6a7] Use the right subprocess based on os monkeypatchdch -a [bb1ded0] add check for config-dir parameter (bug1101129)dch -a [5ea4fcf] mark 2.0 API as stabledebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-0e93c217-5a75-44fe-a4a6-155895c26cc4', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-0e93c217-5a75-44fe-a4a6-155895c26cc4', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_folsom_glance_stable #212

2013-04-11 Thread openstack-testing-bot
Title: precise_folsom_glance_stable
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_folsom_glance_stable/212/Project:precise_folsom_glance_stableDate of build:Thu, 11 Apr 2013 18:01:37 -0400Build duration:2 min 24 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesBump version to 2012.2.5by apeveceditglance/version.pyConsole Output[...truncated 1459 lines...]Applying patch disable-network-for-docs.patchpatching file doc/source/conf.pyApplying patch CVE-2013-1840.patchpatching file glance/api/middleware/cache.pyHunk #1 FAILED at 111.1 out of 1 hunk FAILED -- rejects in file glance/api/middleware/cache.pyPatch CVE-2013-1840.patch can be reverse-appliedERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-47b7f9fa-35d6-4db3-8e2f-4c50a05eccf6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-47b7f9fa-35d6-4db3-8e2f-4c50a05eccf6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/glance/folsom /tmp/tmpbKFL48/glancemk-build-deps -i -r -t apt-get -y /tmp/tmpbKFL48/glance/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log afe61664ac5f933622e349da1c0a92d134a81230..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 2012.2.5+git201304111801~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [dbd3d3d] Bump version to 2012.2.5dch -a [5b4d21d] Check if creds are present and not Nonedch -a [cfaa2d8] Ensure repeated member deletion fails with 404dch -a [dd849a9] Do not return location in headersdch -a [04f88c8] Fixes deletion of invalid image memberdch -a [5597697] Wait in TestBinGlance.test_update_copying_from until image is activedch -a [12d28c3] Swallow UserWarning from glance-cache-managedch -a [5183360] Clean dangling image fragments in filesystem storedch -a [03dc862] Avoid dangling partial image on size/checksum mismatchdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-47b7f9fa-35d6-4db3-8e2f-4c50a05eccf6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-47b7f9fa-35d6-4db3-8e2f-4c50a05eccf6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_folsom_nova_stable #725

2013-04-11 Thread openstack-testing-bot
Title: precise_folsom_nova_stable
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_folsom_nova_stable/725/Project:precise_folsom_nova_stableDate of build:Thu, 11 Apr 2013 18:36:41 -0400Build duration:4 min 21 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesBump version to 2012.2.5by apeveceditnova/version.pyConsole Output[...truncated 2486 lines...]6 out of 10 hunks FAILED -- rejects in file nova/tests/test_quota.pyPatch CVE-2013-1838.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-ff44bfe0-bdec-423c-a252-7ae1c170aa4f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-ff44bfe0-bdec-423c-a252-7ae1c170aa4f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/folsom /tmp/tmpboofom/novamk-build-deps -i -r -t apt-get -y /tmp/tmpboofom/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 5b43cef510b68cff1f6e2f80742d3204b0b51e45..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 2012.2.5+git201304111838~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [9ecd965] Bump version to 2012.2.5dch -a [49931ce] Final versioning for 2012.2.4dch -a [975a312] Fix Network object encoding issue when using qpiddch -a [056a7df] Use format_message on exceptions instead of str()dch -a [c4c417e] Set default fixed_ip quota to unlimited.dch -a [8f8ef21] Add a format_message method to the Exceptionsdch -a [c85683e] Adding netmask to dnsmasq argument --dhcp-rangedch -a [50dece6] Fix Wrong syntax for set:tag in dnsmasq startup optiondch -a [2dd8f3e] LibvirtHybridOVSBridgeDriver update for STPdch -a [69ba489] Fixes PowerVM spawn failed as missing attr supported_instancesdch -a [28aacf6] Fix bad Log statement in nova-managedch -a [524a5a3] Don't include traceback when wrapping exceptionsdch -a [67eb495] Decouple EC2 API from using instance iddch -a [f8c5492] libvirt: Optimize test_connection and capabilitiesdch -a [53626bf] populate dnsmasq lease db with valid leasesdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-ff44bfe0-bdec-423c-a252-7ae1c170aa4f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-ff44bfe0-bdec-423c-a252-7ae1c170aa4f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: quantal_folsom_nova_stable #716

2013-04-11 Thread openstack-testing-bot
Title: quantal_folsom_nova_stable
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_stable/716/Project:quantal_folsom_nova_stableDate of build:Thu, 11 Apr 2013 18:40:42 -0400Build duration:2 min 55 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesBump version to 2012.2.5by apeveceditnova/version.pyConsole Output[...truncated 2969 lines...]6 out of 10 hunks FAILED -- rejects in file nova/tests/test_quota.pyPatch CVE-2013-1838.patch does not apply (enforce with -f)ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-5c613237-dc3a-47cb-b121-591dafd4ee79', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-5c613237-dc3a-47cb-b121-591dafd4ee79', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/nova/folsom /tmp/tmpJgYHur/novamk-build-deps -i -r -t apt-get -y /tmp/tmpJgYHur/nova/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log 5b43cef510b68cff1f6e2f80742d3204b0b51e45..HEAD --no-merges --pretty=format:[%h] %sdch -b -D quantal --newversion 2012.2.5+git201304111841~quantal-0ubuntu1 Automated Ubuntu testing build:dch -a [9ecd965] Bump version to 2012.2.5dch -a [49931ce] Final versioning for 2012.2.4dch -a [975a312] Fix Network object encoding issue when using qpiddch -a [056a7df] Use format_message on exceptions instead of str()dch -a [c4c417e] Set default fixed_ip quota to unlimited.dch -a [8f8ef21] Add a format_message method to the Exceptionsdch -a [c85683e] Adding netmask to dnsmasq argument --dhcp-rangedch -a [50dece6] Fix Wrong syntax for set:tag in dnsmasq startup optiondch -a [2dd8f3e] LibvirtHybridOVSBridgeDriver update for STPdch -a [69ba489] Fixes PowerVM spawn failed as missing attr supported_instancesdch -a [28aacf6] Fix bad Log statement in nova-managedch -a [524a5a3] Don't include traceback when wrapping exceptionsdch -a [67eb495] Decouple EC2 API from using instance iddch -a [f8c5492] libvirt: Optimize test_connection and capabilitiesdch -a [53626bf] populate dnsmasq lease db with valid leasesdebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-5c613237-dc3a-47cb-b121-591dafd4ee79', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-5c613237-dc3a-47cb-b121-591dafd4ee79', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #35

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/35/Project:precise_havana_quantum_trunkDate of build:Thu, 11 Apr 2013 20:01:37 -0400Build duration:1 min 57 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 3 out of the last 5 builds failed.40ChangesEnsure unit tests work with all interface typesby gkottoneditquantum/tests/unit/linuxbridge/test_lb_quantum_agent.pyeditquantum/tests/unit/openvswitch/test_ovs_quantum_agent.pyeditquantum/tests/unit/openvswitch/test_ovs_tunnel.pyConsole Output[...truncated 2994 lines...]Host Architecture: amd64Install-Time: 29Job: quantum_2013.2+git201304112001~precise-0ubuntu1.dscMachine Architecture: amd64Package: quantumPackage-Time: 44Source-Version: 1:2013.2+git201304112001~precise-0ubuntu1Space: 15164Status: attemptedVersion: 1:2013.2+git201304112001~precise-0ubuntu1Finished at 20130411-2003Build needed 00:00:44, 15164k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112001~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112001~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpHYhVtw/quantummk-build-deps -i -r -t apt-get -y /tmp/tmpHYhVtw/quantum/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log de5c1e4f281f59d550b476919b27ac4e2aae14ac..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304112001~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [91bed75] Ensure unit tests work with all interface typesdch -a [8354133] Implement LB plugin delete_pool_health_monitor().dch -a [147038a] Parallelize quantum unit testing:debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.2+git201304112001~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A quantum_2013.2+git201304112001~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112001~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112001~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #36

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/36/Project:precise_havana_quantum_trunkDate of build:Thu, 11 Apr 2013 20:31:38 -0400Build duration:1 min 56 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: 4 out of the last 5 builds failed.20ChangesImported Translations from Transifexby Jenkinseditquantum/locale/ja/LC_MESSAGES/quantum.poeditquantum/locale/quantum.potConsole Output[...truncated 2997 lines...]Install-Time: 28Job: quantum_2013.2+git201304112031~precise-0ubuntu1.dscMachine Architecture: amd64Package: quantumPackage-Time: 43Source-Version: 1:2013.2+git201304112031~precise-0ubuntu1Space: 15164Status: attemptedVersion: 1:2013.2+git201304112031~precise-0ubuntu1Finished at 20130411-2033Build needed 00:00:43, 15164k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112031~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112031~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpEOEIbJ/quantummk-build-deps -i -r -t apt-get -y /tmp/tmpEOEIbJ/quantum/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log de5c1e4f281f59d550b476919b27ac4e2aae14ac..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304112031~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [bd702cb] Imported Translations from Transifexdch -a [91bed75] Ensure unit tests work with all interface typesdch -a [8354133] Implement LB plugin delete_pool_health_monitor().dch -a [147038a] Parallelize quantum unit testing:debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.2+git201304112031~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A quantum_2013.2+git201304112031~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112031~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304112031~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #65

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/65/Project:precise_havana_nova_trunkDate of build:Thu, 11 Apr 2013 21:31:37 -0400Build duration:56 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesRemove unnecessary db call in scheduler driver live-migration codeby hanlindeditnova/tests/scheduler/test_scheduler.pyeditnova/scheduler/driver.pyConsole OutputStarted by an SCM changeBuilding remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunkCheckout:precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@19e88093:pkg-builderUsing strategy: DefaultLast Built Revision: Revision fdcc1d20aa5a14272e9966507fa9213c2ed5ae3d (origin/master)Checkout:nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@7e5bfd3dWiping out workspace first.Cloning the remote Git repositoryCloning repository originFetching upstream changes from https://github.com/openstack/nova.gitCommencing build of Revision b2eb7f41dc7588e90562904d8762d9df2c3bd0ee (origin/master)Checking out Revision b2eb7f41dc7588e90562904d8762d9df2c3bd0ee (origin/master)No emails were triggered.[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson4893115606696655697.sh+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params", line 15if job_name == 'pipeline_manual_trigger'   ^SyntaxError: invalid syntaxBuild step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #66

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/66/Project:precise_havana_nova_trunkDate of build:Thu, 11 Apr 2013 22:01:45 -0400Build duration:56 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesImported Translations from Transifexby Jenkinseditnova/locale/ru/LC_MESSAGES/nova.poeditnova/locale/en_US/LC_MESSAGES/nova.poeditnova/locale/fr/LC_MESSAGES/nova.poeditnova/locale/tl/LC_MESSAGES/nova.poeditnova/locale/bs/LC_MESSAGES/nova.poeditnova/locale/en_GB/LC_MESSAGES/nova.poeditnova/locale/uk/LC_MESSAGES/nova.poeditnova/locale/zh_CN/LC_MESSAGES/nova.poeditnova/locale/ko/LC_MESSAGES/nova.poeditnova/locale/nova.poteditnova/locale/nb/LC_MESSAGES/nova.poeditnova/locale/zh_TW/LC_MESSAGES/nova.poeditnova/locale/en_AU/LC_MESSAGES/nova.poeditnova/locale/es/LC_MESSAGES/nova.poeditnova/locale/cs/LC_MESSAGES/nova.poeditnova/locale/tr/LC_MESSAGES/nova.poeditnova/locale/tr_TR/LC_MESSAGES/nova.poeditnova/locale/da/LC_MESSAGES/nova.poeditnova/locale/it/LC_MESSAGES/nova.poeditnova/locale/de/LC_MESSAGES/nova.poeditnova/locale/pt_BR/LC_MESSAGES/nova.poeditnova/locale/ja/LC_MESSAGES/nova.poConsole OutputStarted by an SCM changeBuilding remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunkCheckout:precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@19e88093:pkg-builderUsing strategy: DefaultLast Built Revision: Revision b2eb7f41dc7588e90562904d8762d9df2c3bd0ee (origin/master)Checkout:nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@7e5bfd3dWiping out workspace first.Cloning the remote Git repositoryCloning repository originFetching upstream changes from https://github.com/openstack/nova.gitCommencing build of Revision 4861a3b6a56217bc75ab8056411a4340486b50c4 (origin/master)Checking out Revision 4861a3b6a56217bc75ab8056411a4340486b50c4 (origin/master)No emails were triggered.[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson7502360225126807868.sh+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params", line 15if job_name == 'pipeline_manual_trigger'   ^SyntaxError: invalid syntaxBuild step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #37

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/37/Project:precise_havana_quantum_trunkDate of build:Fri, 12 Apr 2013 00:31:38 -0400Build duration:1 min 56 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesShorten the path of the nicira nvp plugin.by marundeletequantum/plugins/nicira/nicira_nvp_plugin/common/securitygroups.pyeditquantum/db/migration/alembic_migrations/versions/1d76643bcec4_nvp_netbinding.pyeditquantum/db/migration/alembic_migrations/versions/1341ed32cc1e_nvp_netbinding_update.pydeletequantum/plugins/nicira/nicira_nvp_plugin/api_client/request_eventlet.pyaddquantum/plugins/nicira/extensions/nvp_networkgw.pyaddquantum/plugins/nicira/common/securitygroups.pydeletequantum/plugins/nicira/nicira_nvp_plugin/extensions/nvp_qos.pyaddquantum/plugins/nicira/api_client/common.pydeletequantum/plugins/nicira/nicira_nvp_plugin/NvpApiClient.pydeletequantum/plugins/nicira/nicira_nvp_plugin/nvp_cluster.pyeditquantum/db/migration/alembic_migrations/versions/45680af419f9_nvp_qos.pyaddquantum/plugins/nicira/nvp_cluster.pydeletequantum/plugins/nicira/nicira_nvp_plugin/common/config.pyaddquantum/plugins/nicira/common/config.pyaddquantum/plugins/nicira/api_client/client_eventlet.pydeletequantum/plugins/nicira/nicira_nvp_plugin/nicira_db.pyeditquantum/db/migration/alembic_migrations/versions/3cb5d900c5de_security_groups.pyaddquantum/plugins/nicira/extensions/__init__.pyeditquantum/tests/unit/nicira/test_nicira_plugin.pydeletequantum/plugins/nicira/nicira_nvp_plugin/nicira_networkgw_db.pyaddquantum/plugins/nicira/nvplib.pydeletequantum/plugins/nicira/nicira_nvp_plugin/api_client/request.pyaddquantum/plugins/nicira/api_client/request_eventlet.pyaddquantum/plugins/nicira/nicira_db.pyaddquantum/plugins/nicira/api_client/__init__.pyeditquantum/db/migration/alembic_migrations/versions/folsom_initial.pyaddquantum/plugins/nicira/READMEdeletequantum/plugins/nicira/nicira_nvp_plugin/api_client/__init__.pyeditquantum/db/migration/alembic_migrations/versions/1149d7de0cfa_port_security.pyaddquantum/plugins/nicira/common/exceptions.pydeletequantum/plugins/nicira/nicira_nvp_plugin/QuantumPlugin.pyeditquantum/db/migration/alembic_migrations/versions/2c4af419145b_l3_support.pyeditquantum/tests/unit/nicira/test_nvplib.pyeditbin/quantum-check-nvp-configeditquantum/plugins/__init__.pydeletequantum/plugins/nicira/nicira_nvp_plugin/__init__.pydeletequantum/plugins/nicira/nicira_nvp_plugin/common/__init__.pyeditquantum/db/migration/alembic_migrations/versions/363468ac592c_nvp_network_gw.pydeletequantum/plugins/nicira/nicira_nvp_plugin/nicira_qos_db.pydeletequantum/plugins/nicira/nicira_nvp_plugin/api_client/common.pyeditquantum/tests/unit/nicira/fake_nvpapiclient.pydeletequantum/plugins/nicira/nicira_nvp_plugin/nicira_models.pyeditquantum/plugins/nicira/__init__.pydeletequantum/plugins/nicira/nicira_nvp_plugin/READMEdeletequantum/plugins/nicira/nicira_nvp_plugin/api_client/client_eventlet.pyaddquantum/plugins/nicira/nvp_plugin_version.pydeletequantum/plugins/nicira/nicira_nvp_plugin/nvp_plugin_version.pyeditquantum/tests/unit/nicira/test_nvp_api_request_eventlet.pyaddquantum/plugins/nicira/common/metadata_access.pyaddquantum/plugins/nicira/QuantumPlugin.pyeditquantum/db/migration/alembic_migrations/versions/511471cc46b_agent_ext_model_supp.pyaddquantum/plugins/nicira/extensions/nvp_qos.pyaddquantum/plugins/nicira/check_nvp_config.pydeletequantum/plugins/nicira/nicira_nvp_plugin/common/metadata_access.pyaddquantum/plugins/nicira/nicira_qos_db.pyaddquantum/plugins/nicira/nicira_models.pydeletequantum/plugins/nicira/nicira_nvp_plugin/api_client/client.pydeletequantum/plugins/nicira/nicira_nvp_plugin/check_nvp_config.pyeditquantum/tests/unit/nicira/test_nvp_api_common.pyeditquantum/db/migration/alembic_migrations/versions/4692d074d587_agent_scheduler.pydeletequantum/plugins/nicira/nicira_nvp_plugin/common/exceptions.pydeletequantum/plugins/nicira/nicira_nvp_plugin/nvplib.pyeditquantum/tests/unit/nicira/test_networkgw.pyeditquantum/db/migration/alembic_migrations/versions/38335592a0dc_nvp_portmap.pyeditquantum/tests/unit/nicira/test_defaults.pyaddquantum/plugins/nicira/nicira_networkgw_db.pyeditsetup.pyaddquantum/plugins/nicira/NvpApiClient.pyaddquantum/plugins/nicira/common/__init__.pyaddquantum/plugins/nicira/api_client/request.pyaddquantum/plugins/nicira/api_client/client.pydeletequantum/plugins/nicira/nicira_nvp_plugin/extensions/nvp_networkgw.pydeletequantum/plugins/nicira/nicira_nvp_plugin/extensions/__init__.pyConsole Output[...truncated 2996 lines...]Job: quantum_2013.2+git201304120031~precise-0ubuntu1.dscMachine Architecture: amd64Package: quantumPackage-Time: 42Source-Version: 1:2013.2+git201304120031~precise-0ubuntu1Space: 15160Status: attemptedVersion: 

[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #17

2013-04-11 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/17/Project:precise_havana_keystone_trunkDate of build:Fri, 12 Apr 2013 01:31:38 -0400Build duration:2 min 18 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesGenerate HTTPS certificates with ssl_setup.by jlennoxeditkeystone/common/openssl.pyeditkeystone/cli.pyeditetc/keystone.conf.sampleedittests/test_cert_setup.pyeditkeystone/common/config.pyeditdoc/source/man/keystone-manage.rsteditdoc/source/configuration.rstConsole Output[...truncated 2481 lines...]Machine Architecture: amd64Package: keystonePackage-Time: 62Source-Version: 1:2013.2+git201304120131~precise-0ubuntu1Space: 13624Status: attemptedVersion: 1:2013.2+git201304120131~precise-0ubuntu1Finished at 20130412-0133Build needed 00:01:02, 13624k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304120131~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304120131~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmpzFSnII/keystonemk-build-deps -i -r -t apt-get -y /tmp/tmpzFSnII/keystone/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log a40f7fe155f2246eaa03b616ea01437da7759587..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304120131~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [28ef9cd] Generate HTTPS certificates with ssl_setup.dch -a [cbac771] Fix for configuring non-default auth plugins properlydch -a [e4ec12e] Add TLS Support for LDAPdch -a [5c217fd] use the openstack test runnerdch -a [b033538] Fix 401 status responsedch -a [0b4ee31] catch errors in wsgi.Middleware.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC keystone_2013.2+git201304120131~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A keystone_2013.2+git201304120131~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304120131~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304120131~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp