Re: [Openstack] ceilometer-agent-central starting fail

2013-04-10 Thread Liu Wenmao
I solved this problem in two steps:

1. Modify /etc/init/ceilometer-agent-central.conf:
exec start-stop-daemon --start --chuid ceilometer --exec
/usr/local/bin/ceilometer-agent-central --
--config-file=/etc/ceilometer/ceilometer.conf
2. Add these lines to /etc/ceilometer/ceilometer.conf:
os-username=ceilometer
os-password=nsfocus
os-tenant-name=service
os-auth-url=http://controller:5000/v2.0



On Wed, Apr 10, 2013 at 1:36 PM, Liu Wenmao marvel...@gmail.com wrote:

 Hi all:

 I have just installed the ceilometer Grizzly GitHub version, but I fail to
 start the ceilometer-agent-central service. I think it is because I didn't
 set up the keystone user/password as for the other projects. I followed the
 instructions (
 http://docs.openstack.org/developer/ceilometer/install/manual.html#configuring-keystone-to-work-with-api)
 but they do not cover the ceilometer configuration.

 # service ceilometer-agent-central start
 ceilometer-agent-central start/running, process 5679

 # cat /etc/init/ceilometer-agent-central.conf
 description ceilometer-agent-compute
 author Chuck Short zul...@ubuntu.com

 start on runlevel [2345]
 stop on runlelvel [!2345]

 chdir /var/run

 pre-start script
 mkdir -p /var/run/ceilometer
 chown ceilometer:ceilometer /var/run/ceilometer

 mkdir -p /var/lock/ceilometer
 chown ceilometer:ceilometer /var/lock/ceilometer
 end script

 exec start-stop-daemon --start --chuid ceilometer --exec
 /usr/local/bin/ceilometer-agent-central


 /var/log/ceilometer/ceilometer-agent-central.log
 2013-04-10 13:01:39ERROR [ceilometer.openstack.common.loopingcall] in
 looping call
 Traceback (most recent call last):
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/openstack/common/loopingcall.py,
 line 67, in _inner
 self.f(*self.args, **self.kw)
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/central/manager.py,
 line 76, in interval_task
 auth_url=cfg.CONF.os_auth_url)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 134, in __init__
 self.authenticate()
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py,
 line 205, in authenticate
 token)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 174, in get_raw_token_from_identity_servicetoken=token)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 202, in _base_authN
 resp, body = self.request(url, 'POST', body=params, headers=headers)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py,
 line 366, in request
 raise exceptions.from_response(resp, resp.text)
 Unauthorized: Unable to communicate with identity service: {error:
 {message: Invalid user / password, code: 401, title: Not
 Authorized}}. (HTTP 401)
 2013-04-10 13:01:39ERROR [ceilometer.openstack.common.threadgroup]
 Unable to communicate with identity service: {error: {message: Invalid
 user / password, code: 401, title: Not Authorized}}. (HTTP 401)



[Openstack] [Ceilometer] Can not start agents, can't find a publisher manager

2013-04-10 Thread Zehnder Toni (zehndton)
I've a problem with the ceilometer agents. When I want to start one, it can't
find a publisher manager ('meter_publisher').
The whole error is:

CRITICAL ceilometer [-] Pipeline {'publishers': ['meter_publisher'], 
'interval': 60, 'transformers': None, 'name': 'meter_pipeline', 'counters': 
['*']}: Publishers set(['meter_publisher']) invalid

Is there a solution for this problem? It looks similar to 
https://bugs.launchpad.net/devstack/+bug/1131467

Thanks,

Toni Zehnder


Re: [Openstack] vm unable to reach 169.254.169.254

2013-04-10 Thread Mouad Benchchaoui
Hi Liu,

The output above is normal if you didn't supply any user data when booting
your instance
(http://docs.openstack.org/trunk/openstack-compute/admin/content/user-data.html) :)
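
For example, a minimal python-novaclient sketch of passing user data at boot
so the instance has something to fetch from the metadata server (the
credentials, image and flavor names below are placeholders, not values from
this thread):

# Boot an instance with user data so cloud-init/cirros has something to fetch
# from http://169.254.169.254/2009-04-04/user-data.
from novaclient.v1_1 import client

nova = client.Client('demo', 'secret', 'demo',        # user, password, tenant
                     'http://controller:5000/v2.0')   # Keystone auth URL
userdata = open('user-data.txt').read()               # any script or cloud-config

nova.servers.create(name='test-vm',
                    image=nova.images.find(name='cirros-0.3.0-x86_64'),
                    flavor=nova.flavors.find(name='m1.tiny'),
                    userdata=userdata)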

Cheers,


On Wed, Apr 10, 2013 at 7:25 AM, Liu Wenmao marvel...@gmail.com wrote:

 Thanks Mouad

 After I installed the latest Grizzly Quantum and removed metadata_port from
 l3_agent.ini, I can connect to 169.254.169.254:80, but the server returns a
 404 Not Found error:


 Starting dropbear sshd: generating rsa key... generating dsa key... OK
 === cloud-final: system completely up in 5.03 seconds ===
   instance-id: i005b
   public-ipv4:
   local-ipv4 : 100.0.0.4
 wget: server returned error: HTTP/1.1 404 Not Found
 cloud-userdata: failed to read user data url:
 http://169.254.169.254/2009-04-04/user-data
 WARN: /etc/rc3.d/S99cloud-userdata failed

 I use cirros-0.3.0-x86_64-disk.img. Is it a problem with the cirros image, or
 with quantum?



 On Tue, Apr 9, 2013 at 6:18 PM, Mouad Benchchaoui 
 m.benchcha...@cloudbau.de wrote:

 Hi,

 Are you using namespaces? Because I think this is related to
 https://bugs.launchpad.net/quantum/+bug/1160955. If so, a fix was just
 committed to the stable grizzly branch, so upgrade if you want to use
 a port other than the default one; alternatively, I think removing the
 option metadata_port from l3_agent.ini should also make it work for you.
 from l3_agent.ini should also make it work for you.


 HTH,

 --
 Mouad


 On Tue, Apr 9, 2013 at 11:48 AM, Liu Wenmao marvel...@gmail.com wrote:

 Hi all:

 I set up Quantum and Nova Grizzly, but VMs cannot get the public key from
 169.254.169.254:



  debug end   ##
 cloud-setup: failed to read iid from metadata. tried 30
 WARN: /etc/rc3.d/S45cloud-setup failed
 Starting dropbear sshd: generating rsa key... generating dsa key... OK
 === cloud-final: system completely up in 39.98 seconds ===
 wget: can't connect to remote host (169.254.169.254): Connection refused
 wget: can't connect to remote host (169.254.169.254): Connection refused
 wget: can't connect to remote host (169.254.169.254): Connection refused
   instance-id:
   public-ipv4:

 I have configured nova.conf
 enabled_apis=ec2,osapi_compute,metadata
 metadata_manager=nova.api.manager.MetadataManager
 metadata_listen=0.0.0.0
 metadata_listen_port=8775
 service_quantum_metadata_proxy=true
 metadata_host=20.0.0.1
 metadata_port=8775

 quantum l3_agent.ini
 metadata_ip = 20.0.0.1
 metadata_port = 8775

 metadata_agent.ini
 nova_metadata_ip = 20.0.0.1
 nova_metadata_port = 8775

 20.0.0.1 is my controller ip.

 P.S. I cannot see anything like 169.254.169.254 in the iptables rules on the
 controller or compute nodes.








Re: [Openstack] ceilometer-agent-central starting fail

2013-04-10 Thread Doug Hellmann
On Wed, Apr 10, 2013 at 6:10 AM, Liu Wenmao marvel...@gmail.com wrote:

 Actually this is not over.

 The main reason for the service failure is that central/manager.py
 and service.py use different variable names:

 central/manager.py
  70 def interval_task(self, task):
  71 self.keystone = ksclient.Client(
  72 username=cfg.CONF.*os_username*,
  73 password=cfg.CONF.os_password,
  74 tenant_id=cfg.CONF.os_tenant_id,
  75 tenant_name=cfg.CONF.os_tenant_name,
  76 auth_url=cfg.CONF.os_auth_url)

 service.py
  44 CLI_OPTIONS = [
  45 cfg.StrOpt('*os-username*',
  46default=os.environ.get('OS_USERNAME', 'ceilometer'),
  47help='Username to use for openstack service access'),
  48 cfg.StrOpt('os-password',
  49default=os.environ.get('OS_PASSWORD', 'admin'),
  50help='Password to use for openstack service access'),
  51 cfg.StrOpt('os-tenant-id',
  52default=os.environ.get('OS_TENANT_ID', ''),
  53help='Tenant ID to use for openstack service access'),
  54 cfg.StrOpt('os-tenant-name',
  55default=os.environ.get('OS_TENANT_NAME', 'admin'),
  56help='Tenant name to use for openstack service access'),
  57 cfg.StrOpt('os_auth_url',
  58default=os.environ.get('OS_AUTH_URL',
  59   'http://localhost:5000/v2.0'),

 So after I changed all '-' to '_' in the options in
 /etc/ceilometer/ceilometer.conf, the service starts OK.


The thing that fixed it was changing '-' to '_' in your configuration file.
The options library allows option names to contain '-' so that they look
nice as command-line switches, but the canonical option name uses '_'.
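
For illustration, a minimal standalone sketch of that behaviour with
oslo.config (not ceilometer's actual code; the option name is taken from the
snippet above):

# A dashed option name is registered once, but both the config-file key and
# the Python attribute use the underscore form ("os_username").
from oslo.config import cfg   # import path in the Grizzly era; newer releases use oslo_config

opts = [cfg.StrOpt('os-username',
                   default='ceilometer',
                   help='Username to use for openstack service access')]

CONF = cfg.ConfigOpts()
CONF.register_opts(opts)
CONF([])                      # parse an (empty) command line

print(CONF.os_username)       # accessed with '_'; 'os-username' would not be valid Python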

Doug





 On Wed, Apr 10, 2013 at 2:02 PM, Liu Wenmao marvel...@gmail.com wrote:

 I solved this problem in two steps:

 1. Modify /etc/init/ceilometer-agent-central.conf:
 exec start-stop-daemon --start --chuid ceilometer --exec
 /usr/local/bin/ceilometer-agent-central --
 --config-file=/etc/ceilometer/ceilometer.conf
 2. Add these lines to /etc/ceilometer/ceilometer.conf:
 os-username=ceilometer
 os-password=nsfocus
 os-tenant-name=service
 os-auth-url=http://controller:5000/v2.0



 On Wed, Apr 10, 2013 at 1:36 PM, Liu Wenmao marvel...@gmail.com wrote:

 Hi all:

 I have just installed the ceilometer Grizzly GitHub version, but I fail to
 start the ceilometer-agent-central service. I think it is because I didn't
 set up the keystone user/password as for the other projects. I followed the
 instructions (
 http://docs.openstack.org/developer/ceilometer/install/manual.html#configuring-keystone-to-work-with-api)
 but they do not cover the ceilometer configuration.

 # service ceilometer-agent-central start
 ceilometer-agent-central start/running, process 5679

 # cat /etc/init/ceilometer-agent-central.conf
 description ceilometer-agent-compute
 author Chuck Short zul...@ubuntu.com

 start on runlevel [2345]
 stop on runlelvel [!2345]

 chdir /var/run

 pre-start script
 mkdir -p /var/run/ceilometer
 chown ceilometer:ceilometer /var/run/ceilometer

 mkdir -p /var/lock/ceilometer
 chown ceilometer:ceilometer /var/lock/ceilometer
 end script

 exec start-stop-daemon --start --chuid ceilometer --exec
 /usr/local/bin/ceilometer-agent-central


 /var/log/ceilometer/ceilometer-agent-central.log
 2013-04-10 13:01:39ERROR [ceilometer.openstack.common.loopingcall]
 in looping call
 Traceback (most recent call last):
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/openstack/common/loopingcall.py,
 line 67, in _inner
 self.f(*self.args, **self.kw)
   File
 /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/central/manager.py,
 line 76, in interval_task
 auth_url=cfg.CONF.os_auth_url)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 134, in __init__
 self.authenticate()
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py,
 line 205, in authenticate
 token)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 174, in get_raw_token_from_identity_servicetoken=token)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py,
 line 202, in _base_authN
 resp, body = self.request(url, 'POST', body=params, headers=headers)
   File
 /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py,
 line 366, in request
 raise exceptions.from_response(resp, resp.text)
 Unauthorized: Unable to communicate with identity service: {error:
 {message: Invalid user / password, code: 401, title: Not
 Authorized}}. (HTTP 401)
 2013-04-10 13:01:39

[Openstack] Grizzly Dashboard problem...

2013-04-10 Thread Martinx - ジェームズ
Guys,

 I just installed Grizzly from UCA.

 When I try to access the Dashboard, I'm getting:

Internal Server Error

The server encountered an internal error or misconfiguration and was unable
to complete your request.

Please contact the server administrator, webmaster@localhost and inform
them of the time the error occurred, and anything you might have done that
may have caused the error.

More information about this error may be available in the server error log.
--

 On the apache error.log:

UncompressableFileError: 'horizon/js/horizon.js' isn't accessible via
COMPRESS_URL ('/static/') and can't be compressed, referer:
http://10.32.14.232/horizon

The file /etc/openstack-dashboard/local_settings.py contains:

COMPRESS_OFFLINE = False

What am I doing wrong?

I've been without the Dashboard for weeks now. I thought this problem would be
solved with the stable release, but it isn't stable yet...

 I appreciate any help.

Thanks!
Thiago


Re: [Openstack] Grizzly Dashboard problem...

2013-04-10 Thread Martinx - ジェームズ
Here is the full apache error log after logging into the Dashboard:

http://paste.openstack.org/show/35722/

What can I do?

Tks,
Thiago


On 10 April 2013 12:04, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Guys,

  I just installed Grizzly from UCA.

  When I try to access the Dashboard, I'm getting:

 Internal Server Error

 The server encountered an internal error or misconfiguration and was
 unable to complete your request.

 Please contact the server administrator, webmaster@localhost and inform
 them of the time the error occurred, and anything you might have done that
 may have caused the error.

 More information about this error may be available in the server error log.
 --

  On the apache error.log:

 UncompressableFileError: 'horizon/js/horizon.js' isn't accessible via
 COMPRESS_URL ('/static/') and can't be compressed, referer:
 http://10.32.14.232/horizon

 The file /etc/openstack-dashboard/local_settings.py contains:

 COMPRESS_OFFLINE = False

 What am I doing wrong?

 I've been without the Dashboard for weeks now. I thought this problem would
 be solved with the stable release, but it isn't stable yet...

  I appreciate any help.

 Thanks!
 Thiago



[Openstack] [Folsom]Very slow Openstack Dashboard

2013-04-10 Thread Arindam Choudhury
Hi,

I am installing OpenStack Folsom on Fedora 18.

The OpenStack dashboard is very slow and it's giving the following error message:

Error: Unable to retrieve quota information.

I have followed these instructions:

# yum install openstack-dashboard memcached -y
modify CACHE_BACKEND = 'memcached://127.0.0.1:11211' in /etc/openstack-dashboard/local_settings
# service httpd restart
# chkconfig httpd on
# chkconfig memcached on
# service memcached start
# setsebool -P httpd_can_network_connect=on


Re: [Openstack] [Folsom]Very slow Openstack Dashboard

2013-04-10 Thread Julie Pichon
Arindam Choudhury arin...@live.com wrote:
 Hi,
 
 I am installing OpenStack Folsom on Fedora 18.
 
 The OpenStack dashboard is very slow and it's giving the following error
 message:
 
 Error: Unable to retrieve quota information.

Hello! Could you look at the httpd logs and copy the stack trace that happens 
when you see that error?

You might also want to check that the nova-network service is up and running. 
Because it takes about a minute (by default) to time out, it could be what's 
causing the slowness (see e.g. https://bugs.launchpad.net/horizon/+bug/1079882 )

Regards,

Julie

 
 I have followed these instructions:
 
 # yum install openstack-dashboard memcached -y
 modify CACHE_BACKEND = 'memcached://127.0.0.1:11211' in /etc/openstack-dashboard/local_settings
 # service httpd restart
 # chkconfig httpd on
 # chkconfig memcached on
 # service memcached start
 # setsebool -P httpd_can_network_connect=on
 
 



Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-10 Thread Martinx - ジェームズ
Guys,

 This isn't ready. I just installed Ubuntu 12.04 + Grizzly via UCA, and after
logging into the Dashboard I get an Internal Server Error message.

 Dashboard is broken.

Tks,
Thiago


On 9 April 2013 10:20, James Page james.p...@canonical.com wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Hi All

 OpenStack Grizzly release packages are now available in the Ubuntu
 Cloud Archive for Ubuntu 12.04 - see
 https://wiki.ubuntu.com/ServerTeam/CloudArchive for details on how to
 enable and use these packages on Ubuntu 12.04.

 Please note that further Ubuntu related updates may land into the
 Cloud Archive for Grizzly between now and the final release of Ubuntu
 13.04 in a couple of weeks time; 13.04 is now feature frozen so these
 should be bug fixes only.

 Enjoy

 James

 - --
 James Page
 Technical Lead
 Ubuntu Server Team
 james.p...@canonical.com
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)
 Comment: Using GnuPG with undefined - http://www.enigmail.net/

 iQIcBAEBCAAGBQJRZBWSAAoJEL/srsug59jDasAQALXgHF2OfrAySBGdati0GImP
 gKJ7gHs5uNgOHi99m4XX/LUsRWMYrYhmswKTGpFvnIhgy4nxsvvQPc5+9k9Om/lz
 ArGDivKmf7idInGfRTdp7hGm/llgNa7WLaU+GwVACuj0utBkF5RcTTE0kES1kFX2
 CAvHMQMDqLfVBpDWunsarVyE9VBMJdVHJQZWpdzDhiTForhawcZXxB9fh2qHpKhS
 nX6AqP77JZ6XARw4fTLI30n6gQritwPsbK1J93QwXFtNqu5W5TUc+GAukQSVcoAy
 frkYSkJX+4MawkhI7PJ919O0y9q9O3UAn6sH+q4xk8Mpak/xJ0KUxHZX81MUw0Q5
 5BmdRsJwCkRPYiz1Qc0sqqT5ROlr/WnDiHUIEwjs8IYAf2/hjTUD+KjOz7ycPWqg
 V/asjzqtgTuLDCESWv5yG4vF/CWHTf00e6nqTgfoORHHVBnTnImFsq7CryLzUxes
 nSRvTAALoa/71+1qMpUoUS61bCcKhY2fBsCn2uqMM1nHiot2MUH1wVEajKiX332N
 X2IWSyHvNzr7/UP3BS5A5LKj3ck5NTdr46ft0HfLeknu5jcjOb7cltDH2wkFSunU
 9t7p2Z3yBPw5tK5c8Fmt5gAscw9YfYhjE4Dufd12nOCD3Go2Xw8gbzjCkSQYtiY7
 RoxivOeqbSwAJu0Q6Zm4
 =kvSY
 -END PGP SIGNATURE-




Re: [Openstack] Grizzly release packages available in the Ubuntu Cloud Archive

2013-04-10 Thread James Page
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 10/04/13 17:14, Martinx - ジェームズ wrote:
 Guys, This isn't ready. I just installed Ubuntu 12.04 + Grizzly via
 UCA, and after logging into the Dashboard I get an Internal Server Error
 message. Dashboard is broken.

Please can you turn Debug on in
/etc/openstack-dashboard/local_settings.py, restart apache and grab a
stack trace?
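
Something like this in local_settings.py should be enough (an assumed
snippet, not the packaged default):

# /etc/openstack-dashboard/local_settings.py
# Turn on Django debug output so the "Internal Server Error" page (and the
# apache error log) show a full traceback instead of a generic 500.
DEBUG = True
TEMPLATE_DEBUG = DEBUG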

I've been testing with the full set of OpenStack services with the CA
and I'm not hitting this issue.

- -- 
James Page
Technical Lead
Ubuntu Server Team
james.p...@canonical.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with undefined - http://www.enigmail.net/

iQIcBAEBCAAGBQJRZZPXAAoJEL/srsug59jDJXsP/1d4fVD2Pd9qZ91P8wkQSCOl
4VDAJLf8ZQX9LDD8k//mZ4WrsGzsZx4/FU9fGU3oQPs8t0o7+ao7c+GLLTvIf2D4
8SvbtfIssOiVM8jdMVk/iTGEORh6qUWZ5RJDeHfJvACWYc5Gm3PSJGFj/A97bWOu
Yd1xJBRT+7q6dbZld4/NqIwZeZWBsEDuSP+3f6WhqAVXBoclt/wfb6zhKmYotCfH
s139kastJKo6ypPS4c8ZxByhwlGyeVGugTS79fDeCmfrbgOtJmjpAk4n4P5SL2xw
2pZ3HOwHjsB0+u5uohfTxZsIdvp40u5zbSxfRZWG1UoebQdsLkg+zBNEwRmRdn8/
K3qR5G0eDvtZDQhWlIp51/r8E55Xzvhgueu4kPN7hAENx2f6jlXF/Aj1jhq+neMG
DoOsWWuVbD6yO9hAapH1+hv/6H29vjyOSlWiW+fJpei9fYRnanSZZz+s7aiidajq
pACxMcwRE9DP+Cp0UzK03ISX2jIKC35qqpSm8GklZBl1/3NCSKJg9iPZDHvwTlEj
HFwxZfBzmE0LWB+3cp3PzuOmYwsaAJxoolFQ7kfuut36j6BYO03j4YVqDU8XZQW1
MWS9iTHue8ReXMjZPhCrPq91CMaswo14Jpgtkkk2sRl8HZ2ydxvuAWIk6UIgXEyk
XfVu2O5C+tImbccrRC+C
=qmfk
-END PGP SIGNATURE-



Re: [Openstack] Grizzly Dashboard problem...

2013-04-10 Thread Ritesh Nanda
Most probably your nova-* services are having some problem. Check whether
nova is working properly.


On Wed, Apr 10, 2013 at 8:47 PM, Martinx - ジェームズ
thiagocmarti...@gmail.comwrote:

 Here is the full apache error log after logging into the Dashboard:

 http://paste.openstack.org/show/35722/

 What can I do?

 Tks,
 Thiago


 On 10 April 2013 12:04, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Guys,

  I just installed Grizzly from UCA.

  When I try to access the Dashboard, I'm getting:

 Internal Server Error

 The server encountered an internal error or misconfiguration and was
 unable to complete your request.

 Please contact the server administrator, webmaster@localhost and inform
 them of the time the error occurred, and anything you might have done that
 may have caused the error.

 More information about this error may be available in the server error
 log.
 --

  On the apache error.log:

 UncompressableFileError: 'horizon/js/horizon.js' isn't accessible via
 COMPRESS_URL ('/static/') and can't be compressed, referer:
 http://10.32.14.232/horizon

 The file /etc/openstack-dashboard/local_settings.py contains:

 COMPRESS_OFFLINE = False

 What am I doing wrong?

 I've been without the Dashboard for weeks now. I thought this problem would
 be solved with the stable release, but it isn't stable yet...

  I appreciate any help.

 Thanks!
 Thiago







-- 

With Regards,
Ritesh Nanda
http://www.ericsson.com/


[Openstack] OpenStack Networking, use of Quantum

2013-04-10 Thread Mark Collier

We have to phase out the trademark or attention-getting use of the code name
Quantum when referring to the OpenStack Networking project, as part of a legal
agreement with Quantum Corporation, the owner of the Quantum trademark. The
Board of Directors and Technical Committee members involved in
Networking-related development and documentation were notified so we could
start working to remove Quantum from public references.


We made a lot of progress updating public references during the Grizzly
release cycle and will continue that work through Havana as well. The highest
priority items to update are locations that are attention-getting and public;
our biggest area of work remaining is probably on the wiki, where we could
really use everyone's help. In other official communications, we refer to the
projects by their functional OpenStack names (Compute, Object Storage,
Networking, etc.).

At the summit we have a session scheduled to talk about project names generally 
and the path forward for OpenStack Networking specifically. For instance, in 
places where there is a need for something shorter, such as the CLI, we could 
come up with a new code name or use something more descriptive like 
os-network. This is a question it probably makes sense to look at across 
projects at the same time. If you have input on this, please come participate 
in the session Thursday April 18 at 4:10pm:
http://openstacksummitapril2013.sched.org/event/95df68f88b519a3e4981ed9da7cd1de5#.UWWOZBnR16A

Mark


Re: [Openstack] Grizzly Dashboard problem...

2013-04-10 Thread Martinx - ジェームズ
Okay, I'll double check it... Tks!


On 10 April 2013 13:59, Ritesh Nanda riteshnand...@gmail.com wrote:

 Most probably your nova-* services are having some problem. Check whether
 nova is working properly.


 On Wed, Apr 10, 2013 at 8:47 PM, Martinx - ジェームズ 
 thiagocmarti...@gmail.com wrote:

 Here is the full apache error log after logging into the Dashboard:

 http://paste.openstack.org/show/35722/

 What can I do?

 Tks,
 Thiago


 On 10 April 2013 12:04, Martinx - ジェームズ thiagocmarti...@gmail.comwrote:

 Guys,

  I just installed Grizzly from UCA.

  When I try to access the Dashboard, I'm getting:

 Internal Server Error

 The server encountered an internal error or misconfiguration and was
 unable to complete your request.

 Please contact the server administrator, webmaster@localhost and inform
 them of the time the error occurred, and anything you might have done that
 may have caused the error.

 More information about this error may be available in the server error
 log.
 --

  On the apache error.log:

 UncompressableFileError: 'horizon/js/horizon.js' isn't accessible via
 COMPRESS_URL ('/static/') and can't be compressed, referer:
 http://10.32.14.232/horizon

 The file /etc/openstack-dashboard/local_settings.py contains:

 COMPRESS_OFFLINE = False

 What am I doing wrong?

 I've been without the Dashboard for weeks now. I thought this problem would
 be solved with the stable release, but it isn't stable yet...

  I appreciate any help.

 Thanks!
 Thiago







 --

 With Regards,
 Ritesh Nanda
 http://www.ericsson.com/






[Openstack] Injecting a specific MAC address

2013-04-10 Thread Burney, Jeffrey P (N-Engineering Service Professionals)
Hi Stackers,

We have a little test lab where we are working with puppet and OpenStack.  
Here's what we are trying to do on Folsom (currently not using Quantum).

From a puppet master (also running dhcp, dns, kickstart), issue a command to
the OpenStack controller to boot a small PXE boot image.  We would like to
inject a MAC address into the newly instantiated VM so that when it goes to
our dhcp server, it will get the correct IP and be built by our kickstart
server.

We've looked at node_openstack but it does not have a way to inject a MAC.

I'd imagine this has been done before; I just can't find anything online.

Any help would be greatly appreciated.

Jeff



Re: [Openstack] Injecting a specific MAC address

2013-04-10 Thread Robert Collins
On 11 April 2013 07:19, Burney, Jeffrey P (N-Engineering Service
Professionals) jeffrey.p.bur...@lmco.com wrote:
 Hi Stackers,



 We have a little test lab where we are working with puppet and OpenStack.
 Here’s what we are trying to do on Folsom (currently not using Quantum).



 From a puppet master (also running dhcp, dns, kickstart), issue a command to
 the OpenStack controller to boot a small PXE boot image.  We would like to
 inject a MAC address into the newly instantiated VM so that when it goes to
 our dhcp server, it will get the correct IP and be built by our kickstart
 server.



 We’ve looked at node_openstack but it does not have a way to inject a MAC.



 I’d imagine this has been done before; I just can’t find anything online.



 Any help would be greatly appreciated.

The MAC address used is generated dynamically by nova and then handed
out to either nova-network or Quantum. You may be able to manually
generate the MAC by using Quantum and making a call to Quantum to
create a port with that MAC, then passing the port id to nova boot.
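
For example, a rough sketch with python-quantumclient and python-novaclient
(the credentials, network ID, MAC, image and flavor names below are
placeholders, and setting mac_address on a port typically requires admin
credentials):

# Create a Quantum port with a fixed MAC, then boot the instance on that port.
from quantumclient.v2_0 import client as quantum_client
from novaclient.v1_1 import client as nova_client

quantum = quantum_client.Client(username='admin', password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')
port = quantum.create_port({'port': {'network_id': 'NET_UUID',
                                     'mac_address': 'fa:16:3e:00:00:01'}})

nova = nova_client.Client('admin', 'secret', 'admin',
                          'http://controller:5000/v2.0')
nova.servers.create(name='pxe-client',
                    image=nova.images.find(name='pxe-boot-image'),
                    flavor=nova.flavors.find(name='m1.small'),
                    nics=[{'port-id': port['port']['id']}])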

HTH,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services



[Openstack] [Savanna] 0.1 Release!

2013-04-10 Thread Sergey Lukjanov
Hi everybody,

we have finished Phase 1 of our roadmap and published the first release of the project!

Currently Savanna has a REST API for Hadoop cluster provisioning using
pre-installed images, and we have started working on a pluggable mechanism for
custom cluster provisioning tools.

Also I'd like to note that custom pages for the OpenStack Dashboard have been
published too.

You can find more info on Savanna site:

http://savanna.mirantis.com
http://savanna.mirantis.com/quickstart.html
http://savanna.mirantis.com/horizon/howto.html

Sergey Lukjanov
Mirantis Inc.


[Openstack] can't get db_sync to work

2013-04-10 Thread Greg Hill
Trying to get openstack going with postgresql 9.2 as the database. 
Started with glance.  I got all the grizzly RPMs from this repo:


http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/

The database is created and the connection seems to work fine, but I get 
this error:


[root@glance site-packages]# glance-manage db_sync
2013-04-10 20:36:47.482 2786 CRITICAL glance [-] 'PGSchemaChanger' 
object has no attribute '_validate_identifier'


I googled that and found a reference that it was an upstream SQLAlchemy 
bug that was fixed in 0.7.1.


Luckily, the repo above provides version 0.7.8 of SQLAlchemy.  I noticed 
that both version 0.5.5 and 0.7.8 of SQLAlchemy are installed:


[root@glance glance]# yum list installed python-sqlalchemy*
Installed Packages
python-sqlalchemy.noarch 0.5.5-3.el6_2 @system-base
python-sqlalchemy0.7.x86_64 0.7.8-1.el6 @epel

But forcing removal of 0.5.5 and restarting everything does not cause 
the issue to go away.  Is there something else I can check? Is there 
some way to just dump the SQL commands to manually create the database?  
Is there some way to force glance to use the 0.7.8 version of SQLAlchemy?


Not that I believe it's relevant, but here's the connection string from 
the config file (scrubbed).  Both the api and registry configs have this 
set, and it's the only config value I changed from the default.


sql_connection = postgresql://$user:$pass@$ip/glance

Greg



Re: [Openstack] can't get db_sync to work

2013-04-10 Thread Greg Hill

I made a bad assumption; it is using 0.7.8 and still gets the error:

[root@glance site-packages]# glance-manage -v db_sync
2013-04-10 22:08:09.970 2966 INFO glance.db.sqlalchemy.migration [-] 
Upgrading database to version latest
2013-04-10 22:08:10.019 2966 CRITICAL glance [-] 'PGSchemaChanger' 
object has no attribute '_validate_identifier'

2013-04-10 22:08:10.019 2966 TRACE glance Traceback (most recent call last):
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/bin/glance-manage, line 134, in module

2013-04-10 22:08:10.019 2966 TRACE glance main()
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/bin/glance-manage, line 128, in main

2013-04-10 22:08:10.019 2966 TRACE glance CONF.command.func()
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/bin/glance-manage, line 80, in do_db_sync

2013-04-10 22:08:10.019 2966 TRACE glance CONF.command.current_version)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/migration.py, 
line 127, in db_sync

2013-04-10 22:08:10.019 2966 TRACE glance upgrade(version=version)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/migration.py, 
line 66, in upgrade
2013-04-10 22:08:10.019 2966 TRACE glance return 
versioning_api.upgrade(sql_connection, repo_path, version)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/api.py, line 185, 
in upgrade
2013-04-10 22:08:10.019 2966 TRACE glance return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
2013-04-10 22:08:10.019 2966 TRACE glance   File string, line 2, in 
_migrate
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py, 
line 160, in with_engine

2013-04-10 22:08:10.019 2966 TRACE glance return f(*a, **kw)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/api.py, line 364, 
in _migrate
2013-04-10 22:08:10.019 2966 TRACE glance schema.runchange(ver, change, 
changeset.step)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/schema.py, line 
90, in runchange

2013-04-10 22:08:10.019 2966 TRACE glance change.run(self.engine, step)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/script/py.py, line 
145, in run

2013-04-10 22:08:10.019 2966 TRACE glance script_func(engine)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py, 
line 85, in upgrade
2013-04-10 22:08:10.019 2966 TRACE glance 
index.rename('ix_image_properties_image_id_name')
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/changeset/schema.py, line 
620, in rename
2013-04-10 22:08:10.019 2966 TRACE glance 
engine._run_visitor(visitorcallable, self, connection, **kwargs)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py, 
line 2303, in _run_visitor
2013-04-10 22:08:10.019 2966 TRACE glance 
conn._run_visitor(visitorcallable, element, **kwargs)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py, 
line 1973, in _run_visitor

2013-04-10 22:08:10.019 2966 TRACE glance **kwargs).traverse_single(element)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/changeset/ansisql.py, line 
55, in traverse_single
2013-04-10 22:08:10.019 2966 TRACE glance ret = 
super(AlterTableVisitor, self).traverse_single(elem)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/sql/visitors.py, 
line 106, in traverse_single

2013-04-10 22:08:10.019 2966 TRACE glance return meth(obj, **kw)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/changeset/ansisql.py, line 
172, in visit_index
2013-04-10 22:08:10.019 2966 TRACE glance 
(self.preparer.quote(self._validate_identifier(index.name,
2013-04-10 22:08:10.019 2966 TRACE glance AttributeError: 
'PGSchemaChanger' object has no attribute '_validate_identifier'

2013-04-10 22:08:10.019 2966 TRACE glance

I'll keep digging, but any help is appreciated.

Greg

On 04/10/2013 01:56 PM, Greg Hill wrote:
Trying to get openstack going with postgresql 9.2 as the database. 
Started with glance.  I got all the grizzly RPMs from this repo:


http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/

The database is created and the connection seems to work fine, but I 
get this error:


[root@glance site-packages]# glance-manage db_sync
2013-04-10 20:36:47.482 2786 CRITICAL glance [-] 'PGSchemaChanger' 

Re: [Openstack] [Savanna] 0.1 Release!

2013-04-10 Thread Robert Collins
On 11 April 2013 08:30, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hi everybody,

 we have finished Phase 1 of our roadmap and published the first release of the project!

 Currently Savanna has a REST API for Hadoop cluster provisioning using
 pre-installed images, and we have started working on a pluggable mechanism
 for custom cluster provisioning tools.

 Also I'd like to note that custom pages for the OpenStack Dashboard have been
 published too.

 You can find more info on Savanna site:

Savanna seems to fit into the same space as Heat (going off your
diagram on http://savanna.mirantis.com/) - can you explain how it is
different?

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Cloud Services



Re: [Openstack] can't get db_sync to work

2013-04-10 Thread Greg Hill
I was able to force this to work by updating SQLAlchemy's migrate 
utility to the latest version, FYI.


https://code.google.com/p/sqlalchemy-migrate/downloads/detail?name=sqlalchemy-migrate-0.7.2.tar.gz

Hopefully the RH guys can get that RPM updated on the EPEL repo at some 
point to prevent other future adventurers from repeating the same errors 
I did.
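
As a quick sanity check (assuming setuptools' pkg_resources is available),
this confirms which versions Python actually resolves before running
glance-manage:

# Print the versions of SQLAlchemy and sqlalchemy-migrate that the interpreter
# will import, to rule out a stale copy shadowing the upgraded one.
import pkg_resources

for name in ('SQLAlchemy', 'sqlalchemy-migrate'):
    print(name, pkg_resources.get_distribution(name).version)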


Greg

On 04/10/2013 03:15 PM, Greg Hill wrote:

I made a bad assumption; it is using 0.7.8 and still gets the error:

[root@glance site-packages]# glance-manage -v db_sync
2013-04-10 22:08:09.970 2966 INFO glance.db.sqlalchemy.migration [-] 
Upgrading database to version latest
2013-04-10 22:08:10.019 2966 CRITICAL glance [-] 'PGSchemaChanger' 
object has no attribute '_validate_identifier'
2013-04-10 22:08:10.019 2966 TRACE glance Traceback (most recent call 
last):
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/bin/glance-manage, line 134, in module

2013-04-10 22:08:10.019 2966 TRACE glance main()
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/bin/glance-manage, line 128, in main

2013-04-10 22:08:10.019 2966 TRACE glance CONF.command.func()
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/bin/glance-manage, line 80, in do_db_sync

2013-04-10 22:08:10.019 2966 TRACE glance CONF.command.current_version)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/migration.py, 
line 127, in db_sync

2013-04-10 22:08:10.019 2966 TRACE glance upgrade(version=version)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/migration.py, 
line 66, in upgrade
2013-04-10 22:08:10.019 2966 TRACE glance return 
versioning_api.upgrade(sql_connection, repo_path, version)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/api.py, line 
185, in upgrade
2013-04-10 22:08:10.019 2966 TRACE glance return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
2013-04-10 22:08:10.019 2966 TRACE glance   File string, line 2, 
in _migrate
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py, line 
160, in with_engine

2013-04-10 22:08:10.019 2966 TRACE glance return f(*a, **kw)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/api.py, line 
364, in _migrate
2013-04-10 22:08:10.019 2966 TRACE glance schema.runchange(ver, 
change, changeset.step)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/schema.py, line 
90, in runchange

2013-04-10 22:08:10.019 2966 TRACE glance change.run(self.engine, step)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/versioning/script/py.py, 
line 145, in run

2013-04-10 22:08:10.019 2966 TRACE glance script_func(engine)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/glance/db/sqlalchemy/migrate_repo/versions/006_key_to_name.py, 
line 85, in upgrade
2013-04-10 22:08:10.019 2966 TRACE glance 
index.rename('ix_image_properties_image_id_name')
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/changeset/schema.py, line 
620, in rename
2013-04-10 22:08:10.019 2966 TRACE glance 
engine._run_visitor(visitorcallable, self, connection, **kwargs)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py, 
line 2303, in _run_visitor
2013-04-10 22:08:10.019 2966 TRACE glance 
conn._run_visitor(visitorcallable, element, **kwargs)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py, 
line 1973, in _run_visitor
2013-04-10 22:08:10.019 2966 TRACE glance 
**kwargs).traverse_single(element)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/changeset/ansisql.py, line 
55, in traverse_single
2013-04-10 22:08:10.019 2966 TRACE glance ret = 
super(AlterTableVisitor, self).traverse_single(elem)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/sql/visitors.py, 
line 106, in traverse_single

2013-04-10 22:08:10.019 2966 TRACE glance return meth(obj, **kw)
2013-04-10 22:08:10.019 2966 TRACE glance   File 
/usr/lib/python2.6/site-packages/migrate/changeset/ansisql.py, line 
172, in visit_index
2013-04-10 22:08:10.019 2966 TRACE glance 
(self.preparer.quote(self._validate_identifier(index.name,
2013-04-10 22:08:10.019 2966 TRACE glance AttributeError: 
'PGSchemaChanger' object has no attribute '_validate_identifier'

2013-04-10 22:08:10.019 2966 TRACE glance

I'll keep digging, but any help is appreciated.

Greg

On 04/10/2013 01:56 PM, Greg Hill wrote:
Trying to 

[Openstack] [SUMMIT] Looking for volunteers in Portland

2013-04-10 Thread Stefano Maffulli

Hello folks,

we need 5 volunteers at the coming Summit in Portland to help with
registration duties. The volunteers are requested for one hour on Sunday,
between 11am and 3pm, for training, and then on Monday and Tuesday mornings
from 7am to 11am to help with registration.


Volunteers will get a free pass to the Summit: if you know anybody in
Portland interested in helping the OpenStack community and getting to the
Summit, this is a good chance.


Please spread the word: candidates please write to me stating your 
availability.


thanks,
stef

--
Ask and answer questions on https://ask.openstack.org



[Openstack] [Heat] heat-cfntools v1.2.3 released - temp file race condition fix.

2013-04-10 Thread Clint Byrum

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

The heat development community would like to announce the release of 
heat-cfntools version 1.2.3. This release contains security fixes.


heat-cfntools contains the tools that can be installed on Heat 
provisioned cloud instances to implement portions of CloudFormation 
compatibility.


This release can be installed from the following locations:
http://tarballs.openstack.org/heat-cfntools/heat-cfntools-1.2.3.tar.gz
https://pypi.python.org/pypi/heat-cfntools/1.2.3

During normal development, improper handling of temporary files in
heat-cfntools was found and fixed. Heat-cfntools is a set of tools to
enable Heat templates to initialize and respond to configuration changes
via the orchestration layer. A local user could exploit predictable temp
file creation to make root overwrite a file of their choosing, potentially
by also using local DNS cache poisoning.
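
For anyone unfamiliar with this class of bug, here is a generic illustration
(not the actual heat-cfntools patch) of the unsafe and safer patterns:

import os
import tempfile

# Unsafe: a fixed, predictable path that a local user can pre-create or
# symlink, so a privileged writer ends up clobbering an attacker-chosen file.
#   open('/tmp/last_metadata', 'w').write(metadata)

# Safer: mkstemp() creates the file atomically (O_CREAT|O_EXCL) with an
# unpredictable name owned by the caller, before any data is written.
fd, path = tempfile.mkstemp(prefix='last_metadata.')
with os.fdopen(fd, 'w') as f:
    f.write('{"example": "metadata"}')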

It is recommended that any users update these tools immediately. In
particular if you have downloaded older HEAT-JEOS images, you should
download new ones which have been built with the fixed heat-cfntools
embedded.

The following issues are fixed in this release:

#1166323 (Clint Byrum) Predictable /tmp filenames used in 
SourcesHandler
#1164756 (Clint Byrum) /tmp/last_metadata is vulnerable to tmpfile 
races by arbitrary users

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJRZh9EAAoJEFOMB2b0vLOOWi8H/2jVn7hUgIP1FMxCXBV2Zyzi
AGv6zBAG3XWufZ9HRX7As1m8XfQu1LLvBdxW0O/Wln+5aZjaAlBnTtwNoYKAp7UO
dqpbm5iESQyk/8jJWrLb0z8Ojs8eoCMI43WeTIF2Qu15Z3G3V4+5jTXq4ujDuyRP
1LT5Vf4fqMiwB65s+SH0HmZFm+HEVModBqBCBN7DFnLJwjmBxssy/iUmYGBTZ4ql
E4h4ezA9hsTJ1CIYWq/fJbCfMnTh1DvRxN5y6G0pinPo48fi6lkp6lMI1Z44Sz/O
BQqb+KEI4K3N0xjIKGuf56n5SEVEdhvmBC+PqfsZBLT4B0PTKwCG0NJkcg06juE=
=Qc9s
-END PGP SIGNATURE-



[Openstack-ubuntu-testing-notifications] Build Fixed: precise_folsom_quantum_stable #303

2013-04-10 Thread openstack-testing-bot
Title: precise_folsom_quantum_stable
Build: #303 - SUCCESS (fixed)
Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_quantum_stable/303/
Date of build: Wed, 10 Apr 2013 05:02:02 -0400
Build duration: 12 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health report: build stability 60% (2 out of the last 5 builds failed)
Changes: plugin/nec: Make sure resources on OFC is globally unique. (motoki)
Console output (truncated): the quantum 2012.2.4+git201304100502~precise-0ubuntu1
source package was built, signed and uploaded to
ppa:openstack-ubuntu-testing/folsom-stable-testing, and the local apt
repository was updated (older 201303150908 packages removed).
Email was triggered for: Fixed.


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #16

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
Build: #16 - FAILURE (still failing)
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/16/
Date of build: Wed, 10 Apr 2013 12:31:32 -0400
Build duration: 1 min 8 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health report: build stability 0% (all recent builds failed)
Changes: Fix backup manager formatting error. (avishay)
Console output (truncated): bzr builddeb failed because the packaging patch
fix_cinder_dependencies.patch no longer applies (hunk #1 FAILED on
tools/pip-requires), so the schroot build command returned non-zero exit
status 3 and the 'Execute shell' build step marked the build as a failure.
Email was triggered for: Failure.


[Openstack-ubuntu-testing-notifications] Build Failure: precise_havana_nova_trunk #54

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
Build: #54 - FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/54/
Date of build: Wed, 10 Apr 2013 16:01:34 -0400
Build duration: 1 min 15 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health report: build stability 80% (1 out of the last 5 builds failed)
Changes: Fix typo: libvir => libvirt (jogo)
Console output (truncated): after checking out revision
e7ab0b062e45602cbb65bdfd08f1dbdbc8ab309e, the gen-pipeline-params build step
('Execute shell') failed and marked the build as a failure.
Email was triggered for: Failure.


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #55

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
Build: #55 - FAILURE (still failing)
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/55/
Date of build: Wed, 10 Apr 2013 16:31:34 -0400
Build duration: 27 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health report: build stability 60% (2 out of the last 5 builds failed)
Changes: baremetal: Integrate provisioning and non-provisioning interfaces
(notsu); Fix error message in pre_live_migration. (mikal); Imported
Translations from Transifex (Jenkins)
Console output (truncated): after checking out revision
e3521d37b089674275a94ef7aef45715482377d0, the gen-pipeline-params build step
('Execute shell') failed and marked the build as a failure.
Email was triggered for: Failure.
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #56

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/56/
Project: precise_havana_nova_trunk
Date of build: Wed, 10 Apr 2013 17:01:36 -0400
Build duration: 1 min 41 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 3 out of the last 5 builds failed. (score: 40)

Changes
set timeout for paramiko ssh connection (by phongdly)
  edit nova/virt/powervm/common.py
  edit nova/virt/powervm/constants.py

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk
Checkout: precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision e3521d37b089674275a94ef7aef45715482377d0 (origin/master)
Checkout: nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
Commencing build of Revision 6609224c2261edea03086b77b8f73e8301524775 (origin/master)
Checking out Revision 6609224c2261edea03086b77b8f73e8301524775 (origin/master)
No emails were triggered.
[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson3103311269932485790.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #57

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/57/
Project: precise_havana_nova_trunk
Date of build: Wed, 10 Apr 2013 18:01:35 -0400
Build duration: 1 min 7 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: 4 out of the last 5 builds failed. (score: 20)

Changes
Replace metadata joins with another query (by danms)
  edit nova/db/api.py
  edit nova/tests/test_db_api.py
  edit nova/db/sqlalchemy/api.py

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk
Checkout: precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision 6609224c2261edea03086b77b8f73e8301524775 (origin/master)
Checkout: nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
Commencing build of Revision 32456dc736b834eab1d534eecca5edeef5086d6d (origin/master)
Checking out Revision 32456dc736b834eab1d534eecca5edeef5086d6d (origin/master)
No emails were triggered.
[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson3855161159299416350.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #58

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/58/
Project: precise_havana_nova_trunk
Date of build: Wed, 10 Apr 2013 19:01:35 -0400
Build duration: 33 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Change DB API instance functions for selective metadata fetching (by danms)
  edit nova/db/api.py
  edit nova/conductor/rpcapi.py
  edit nova/db/sqlalchemy/api.py
  edit nova/tests/conductor/test_conductor.py
  edit nova/conductor/api.py
  edit nova/conductor/manager.py
Optimize some of the periodic task database queries in n-cpu (by danms)
  edit nova/compute/manager.py
  edit nova/db/sqlalchemy/api.py
  edit nova/tests/compute/test_compute.py

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk
Checkout: precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision 32456dc736b834eab1d534eecca5edeef5086d6d (origin/master)
Checkout: nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
Commencing build of Revision 368c5205f5d920f9cf1b9fae4a8d4a936c885e3f (origin/master)
Checking out Revision 368c5205f5d920f9cf1b9fae4a8d4a936c885e3f (origin/master)
No emails were triggered.
[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson8461058579333819750.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #59

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/59/
Project: precise_havana_nova_trunk
Date of build: Wed, 10 Apr 2013 19:31:34 -0400
Build duration: 42 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Dont hide stacktraces for unexpected errors in rescue (by jogo)
  edit nova/api/openstack/compute/contrib/rescue.py
  edit nova/api/openstack/extensions.py

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk
Checkout: precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision 368c5205f5d920f9cf1b9fae4a8d4a936c885e3f (origin/master)
Checkout: nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
Commencing build of Revision ac206c5a5eb067541d619ff3a1ccc5aeebdc19f6 (origin/master)
Checking out Revision ac206c5a5eb067541d619ff3a1ccc5aeebdc19f6 (origin/master)
No emails were triggered.
[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson1978233657494102285.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #60

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/60/
Project: precise_havana_nova_trunk
Date of build: Wed, 10 Apr 2013 20:31:35 -0400
Build duration: 55 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Optimize some of compute/managers periodic tasks DB queries (by danms)
  edit nova/conductor/api.py
  edit nova/tests/compute/test_compute.py
  edit nova/compute/manager.py

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk
Checkout: precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision ac206c5a5eb067541d619ff3a1ccc5aeebdc19f6 (origin/master)
Checkout: nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
Commencing build of Revision ad0e2afbb1dd209b1efb3db6a0b770e6005effad (origin/master)
Checking out Revision ad0e2afbb1dd209b1efb3db6a0b770e6005effad (origin/master)
No emails were triggered.
[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson249468721603563576.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #17

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/17/
Project: precise_havana_cinder_trunk
Date of build: Thu, 11 Apr 2013 01:01:32 -0400
Build duration: 1 min 27 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Clean up attach/detach tests. (by avishay)
  edit cinder/tests/test_volume.py
Add service list functionality cinder-manage (by stephen.mulcahy)
  edit bin/cinder-manage
fix default config option types (by darren.birkett)
  edit cinder/volume/drivers/coraid.py
  edit cinder/openstack/common/rpc/matchmaker.py

Console Output
[...truncated 1380 lines...]
DEBUG:root:['bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']
Building using working tree
Building package in merge mode
Looking for a way to retrieve the upstream tarball
Using the upstream tarball that is present in /tmp/tmpzokJ8b
bzr: ERROR: An error (1) occurred running quilt: Applying patch fix_cinder_dependencies.patch
patching file tools/pip-requires
Hunk #1 FAILED at 18.
1 out of 1 hunk FAILED -- rejects in file tools/pip-requires
Patch fix_cinder_dependencies.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-940f1599-778f-4ff5-b630-546e61f87d5f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-940f1599-778f-4ff5-b630-546e61f87d5f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmpzokJ8b/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmpzokJ8b/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log -n5 --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304110101~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [56a49ad] fix default config option types
dch -a [23bd028] Fix backup manager formatting error.
dch -a [1c77c54] Add service list functionality cinder-manage
dch -a [6d7a681] Clean up attach/detach tests.
dch -a [8870460] Reformat openstack-common.conf
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-940f1599-778f-4ff5-b630-546e61f87d5f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory

Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in 
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-940f1599-778f-4ff5-b630-546e61f87d5f', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp
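
The cinder failure above is a packaging problem rather than a test failure: quilt cannot apply fix_cinder_dependencies.patch to tools/pip-requires, so 'bzr builddeb -S' aborts before the source package is built. Below is a minimal Python sketch of how such a patch could be dry-run checked locally before pushing a packaging change; it assumes the conventional quilt layout under debian/patches/, and the script name and default paths are illustrative only, not part of the openstack-ubuntu-testing scripts.

    #!/usr/bin/env python3
    # check_patch.py -- hypothetical helper: dry-run a quilt-style patch against a
    # source tree to see whether it still applies (the failure reported above for
    # fix_cinder_dependencies.patch).
    import subprocess
    import sys

    def patch_applies(source_tree, patch_file):
        # GNU patch with --dry-run reports rejects without touching the tree;
        # --batch suppresses interactive prompts.
        result = subprocess.run(
            ["patch", "-p1", "--dry-run", "--batch", "-i", patch_file],
            cwd=source_tree,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            universal_newlines=True,
        )
        print(result.stdout, end="")
        return result.returncode == 0

    if __name__ == "__main__":
        tree = sys.argv[1] if len(sys.argv) > 1 else "."
        patch = sys.argv[2] if len(sys.argv) > 2 else "debian/patches/fix_cinder_dependencies.patch"
        sys.exit(0 if patch_applies(tree, patch) else 1)

A non-zero exit here mirrors the "Hunk #1 FAILED at 18" rejection in the console output; the packaging patch then needs to be refreshed against the current tools/pip-requires before the build can succeed.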


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #61

2013-04-10 Thread openstack-testing-bot
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/61/
Project: precise_havana_nova_trunk
Date of build: Thu, 11 Apr 2013 01:31:45 -0400
Build duration: 47 sec
Build cause: Started by an SCM change
Built on: pkg-builder

Health Report
Build stability: All recent builds failed. (score: 0)

Changes
Fix issues with check_instance_shared_storage. (by rbryant)
  edit nova/compute/rpcapi.py
  edit nova/compute/manager.py
  edit nova/tests/compute/test_rpcapi.py
Imported Translations from Transifex (by Jenkins)
  edit nova/locale/{en_GB,it,cs,ja,es,tr,bs,ru,de,en_AU,uk,en_US,zh_CN,tl}/LC_MESSAGES/nova.po
  edit nova/locale/nova.pot
  edit nova/locale/{pt_BR,tr_TR,da,nb,fr,ko,zh_TW}/LC_MESSAGES/nova.po
Dont join metadata twice in instance_get_all() (by bdelliott)
  edit nova/db/sqlalchemy/api.py

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk
Checkout: precise_havana_nova_trunk / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision ad0e2afbb1dd209b1efb3db6a0b770e6005effad (origin/master)
Checkout: nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
Commencing build of Revision 6b2af9c084754a1e678f741bfc6b97e13f1cf8a5 (origin/master)
Checking out Revision 6b2af9c084754a1e678f741bfc6b97e13f1cf8a5 (origin/master)
No emails were triggered.
[precise_havana_nova_trunk] $ /bin/sh -xe /tmp/hudson5903971520339973922.sh
+ /var/lib/jenkins/tools/openstack-ubuntu-testing/bin/gen-pipeline-params
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp