Re: [Openstack] CY13-Q1 Community Analysis — OpenStack vs OpenNebula vs Eucalyptus vs CloudStack
Thanks a lot for the hints. This is very helpful. John

On 2013-4-3, at 6:29 PM, Daniel P. Berrange berra...@redhat.com wrote:

On Wed, Apr 03, 2013 at 12:15:21PM +0200, Thierry Carrez wrote:
> Qingye Jiang (John) wrote: I saw Jay's suggestion on removing review.openstack.org from the git domain analysis. Can you shed some light on how this system works? Is this system shadowing more real code contributors?

Merge commits are created in git history when branches are merged. They appear as having two parent commits. In OpenStack, our Gerrit review system automatically creates them when merging into master, so jenk...@review.openstack.org appears as the author of all of them.

NB you don't need to exclude based on author name. You can simply ask git for the history without merges, using 'git log --no-merges'.

Regards, Daniel
-- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
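Daniel's point is easy to see on a toy repository. The sketch below (made-up commits in a throwaway temp directory) builds one merge and shows that 'git log --no-merges' drops exactly the two-parent merge commit from the count:

```shell
set -e
d=$(mktemp -d); cd "$d"
git init -q
git config user.email jenkins@example.org
git config user.name jenkins
echo a > f; git add f; git commit -qm 'first'
main=$(git symbolic-ref --short HEAD)        # master or main, depending on git version
git checkout -qb feature
echo b >> f; git commit -qam 'feature work'
git checkout -q "$main"
echo c > g; git add g; git commit -qm 'mainline work'
git merge -q --no-edit feature               # creates a two-parent merge commit, like Gerrit does
git log --oneline | wc -l                    # prints 4 (merge commit included)
git log --oneline --no-merges | wc -l        # prints 3 (merge commit excluded)
```

The same filtering works on any OpenStack checkout, with no need to special-case the jenkins author.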
[Openstack] How to purge old meter data of Ceilometer
With the default 1 minute interval, Ceilometer collects quite large amounts of meter data. Does Ceilometer provide a TTL configuration option for the meter data, or some other functionality or API for purging old meter data? Thanks for any help, Harri
Re: [Openstack] How to purge old meter data of Ceilometer
On Thu, Apr 04 2013, Harri Pyy wrote:
> With the default 1 minute interval, Ceilometer collects quite large amounts of meter data. Does Ceilometer provide a TTL configuration option for the meter data, or some other functionality or API for purging old meter data?

Not yet unfortunately.

-- Julien Danjou ;; Free Software hacker ; freelance consultant ;; http://julien.danjou.info
[Openstack] Could not connect to VM
Hello, I'm having some issues with pinging or connecting over SSH to a VM after creation. My setup is:
- 1 controller node + networking node
- 2 compute nodes
All are running Ubuntu 12.04. When trying to ping a VM after creation I get the following output:

ping 50.50.1.3
PING 50.50.1.3 (50.50.1.3) 56(84) bytes of data.
From 109.226.32.198 icmp_seq=1 Destination Host Unreachable

I can see from the horizon console log that the VM is not getting an IP address from the metadata server:

wget: can't connect to remote host (169.254.169.254): Network is unreachable

Any help would be very appreciated. I'm attaching the ifconfig and ovs-vsctl show output of the controller+network node and one of the compute nodes. Thanks, Avi

controller node+network:

br-ex     Link encap:Ethernet HWaddr ac:16:2d:76:03:b8
          inet6 addr: fe80::c494:14ff:fee2:ff34/64 Scope:Link
          UP BROADCAST RUNNING MTU:1500 Metric:1
          RX packets:351438 errors:0 dropped:83332 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:21670232 (21.6 MB) TX bytes:888 (888.0 B)

br-int    Link encap:Ethernet HWaddr 42:98:14:5a:c7:45
          inet6 addr: fe80::852:35ff:fef8:c0d/64 Scope:Link
          UP BROADCAST RUNNING MTU:1500 Metric:1
          RX packets:266 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12828 (12.8 KB) TX bytes:468 (468.0 B)

br-tun    Link encap:Ethernet HWaddr 3e:6f:76:b8:33:4d
          inet6 addr: fe80::e4be:9ff:fecd:3d69/64 Scope:Link
          UP BROADCAST RUNNING MTU:1500 Metric:1
          RX packets:115 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5706 (5.7 KB) TX bytes:468 (468.0 B)

eth0      Link encap:Ethernet HWaddr ac:16:2d:76:03:b8
          inet6 addr: fe80::ae16:2dff:fe76:3b8/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
          RX packets:355057 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1481 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23583676 (23.5 MB) TX bytes:109191 (109.1 KB)
          Interrupt:32

eth1      Link encap:Ethernet HWaddr ac:16:2d:76:03:b9
          inet addr:172.16.20.1 Bcast:172.16.20.255 Mask:255.255.255.0
          inet6 addr: fe80::ae16:2dff:fe76:3b9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:1217937 errors:0 dropped:0 overruns:0 frame:0
          TX packets:965032 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:348242626 (348.2 MB) TX bytes:741457958 (741.4 MB)
          Interrupt:36

eth3      Link encap:Ethernet HWaddr ac:16:2d:76:03:bb
          inet addr:109.226.32.196 Bcast:109.226.32.223 Mask:255.255.255.224
          inet6 addr: fe80::ae16:2dff:fe76:3bb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:641690 errors:0 dropped:0 overruns:0 frame:0
          TX packets:153774 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:338543594 (338.5 MB) TX bytes:31423041 (31.4 MB)
          Interrupt:36

lo        Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:39225048 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39225048 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8572204782 (8.5 GB) TX bytes:8572204782 (8.5 GB)

tapb6bae46b-75 Link encap:Ethernet HWaddr fa:16:3e:04:79:73
          inet addr:50.50.1.2 Bcast:50.50.1.255 Mask:255.255.255.0
          inet6 addr: fe80::9004:22ff:fee3:6bc1/64 Scope:Link
          UP BROADCAST RUNNING MTU:1500 Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1056 (1.0 KB) TX bytes:1098 (1.0 KB)

ovs-vsctl show
1f989ce1-998a-45a7-8eef-e0bda6d8ecc8
    Bridge br-int
        Port qr-721fe4cd-9d
            Interface qr-721fe4cd-9d
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port tapb6bae46b-75
            tag: 2
            Interface tapb6bae46b-75
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port gre-4
            Interface gre-4
                type: gre
[Openstack] Swift proxy and swift-informant
Hi to all. I'm trying to configure the swift proxy to work with swift-informant. I've installed swift-informant and added/modified the following sections in /etc/swift/proxy-server.conf:

[pipeline:main]
pipeline = informant healthcheck cache swift3 authtoken keystone proxy-server

[filter:informant]
use = egg:informant#informant
statsd_host = 10.0.1.154
metric_name_prepend = inndig.

When I start proxy-server it returns the following error:

Traceback (most recent call last):
  File "/usr/bin/swift-proxy-server", line 22, in <module>
    run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
  File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 138, in run_wsgi
    loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 271, in loadobj
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
    return loader.get_context(object_type, name, global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 450, in get_context
    global_additions=global_additions)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 562, in _pipeline_app_context
    for name in pipeline[:-1]]
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 454, in get_context
    section)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 476, in _context_from_use
    object_type, name=use, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 406, in get_context
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 328, in _loadegg
    return loader.get_context(object_type, name, global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 620, in get_context
    object_type, name=name)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 646, in find_egg_entry_point
    possible.append((entry.load(), protocol, entry.name))
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2017, in load
    entry = __import__(self.module_name, globals(), globals(), ['__name__'])
  File "/usr/local/lib/python2.7/dist-packages/informant-0.0.10-py2.7.egg/informant/middleware.py", line 16, in <module>
    from swift.common.swob import Request
ImportError: No module named swob

Searching on Google, I understand that swob is a module of swift >= 1.7.5, but on my Ubuntu 12.10 swift is 1.7.4. How can I resolve this situation? Is there a way to upgrade swift? Thank you
[Openstack] New schema for LDAP + Keystone Grizzly?
Hello to all! Before the release of the grizzly-3 milestone, the schema suggested in the OpenStack documentation (http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html) worked fine. This is the suggested schema:

dn: cn=openstack,cn=org
dc: openstack
objectClass: dcObject
objectClass: organizationalUnit
ou: openstack

dn: ou=Groups,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: groups

dn: ou=Users,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: users

dn: ou=Roles,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: roles

But after the release of grizzly-3 I think that's not enough anymore, mainly because of the "domain" concept. I'm kind of lost trying to make LDAP work with keystone now... has anyone succeeded in this? I created a new dn, something like:

dn: ou=Domains,cn=openstack,cn=org
objectClass: top
objectClass: organizationalUnit
ou: Domains

But when I run "keystone-manage db_sync" the "default" domain isn't created in the LDAP... When I manually create the domain in there, I have a problem with authentication... I think I must be doing something wrong, does anyone have a light? Thanks in advance, Marcelo M. Miziara marcelo.mizi...@serpro.gov.br

This message from SERVIÇO FEDERAL DE PROCESSAMENTO DE DADOS (SERPRO) -- a government company established under Brazilian law (5.615/70) -- is directed exclusively to its addressee and may contain confidential data, protected under professional secrecy rules. Its unauthorized use is illegal and may subject the transgressor to the law's penalties. If you're not the addressee, please send it back, elucidating the failure.
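For reference, keystone's LDAP identity driver is pointed at a tree like the one above through the [ldap] section of keystone.conf. A sketch, with option names as they appeared in Grizzly-era docs and DNs matching the schema in this post; the URL, bind user and password are placeholders to adapt:

```
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://localhost
user = cn=Manager,cn=openstack,cn=org
password = changeme
suffix = cn=openstack,cn=org
user_tree_dn = ou=Users,cn=openstack,cn=org
user_objectclass = inetOrgPerson
tenant_tree_dn = ou=Groups,cn=openstack,cn=org
tenant_objectclass = groupOfNames
role_tree_dn = ou=Roles,cn=openstack,cn=org
role_objectclass = organizationalRole
```

Whether Grizzly's new domain support is fully wired through this backend is exactly the open question of this thread, so treat the above as the pre-domain baseline, not a fix.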
Re: [Openstack] Swift proxy and swift-informant
swob is not a separate package; it is part of the swift core. You may either get an older version of informant, which does not use swob, or upgrade swift. Tong Li Emerging Technologies Standards Building 501/B205 liton...@us.ibm.com

From: Giovanni Colapinto giovanni.colapi...@gmail.com To: openstack@lists.launchpad.net Date: 04/04/2013 04:26 AM Subject: [Openstack] Swift proxy and swift-informant Sent by: openstack-bounces+litong01=us.ibm@lists.launchpad.net

> Hi to all. I try to configure swift proxy to work with swift-informant. ... ImportError: No module named swob ... Is there a way to upgrade swift? Thank you
Re: [Openstack] Swift proxy and swift-informant
Thank you for the reply. How can I upgrade swift?

On Thu, Apr 4, 2013 at 3:12 PM, Tong Li liton...@us.ibm.com wrote:
> swob is not a separate package, it is part of the swift core. You may either get older version of informant which does not use swob or upgrade swift. Tong Li Emerging Technologies Standards Building 501/B205 liton...@us.ibm.com
[Openstack] OpenStack on two physical servers - devstack
Hello all. For getting a 'full' OpenStack environment up and running for testing and semi-serious use, with two physical servers (to enable redundancy), is devstack still the way to go? Thanks, Andy
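For what it's worth, devstack does support spanning two boxes via its multi-node mode, driven entirely by localrc. A sketch of the two files, where interface names, addresses and passwords are placeholders and the option set should be checked against the devstack docs for your release:

```
# Controller node localrc (sketch)
HOST_IP=192.168.1.10
FLAT_INTERFACE=eth1
MULTI_HOST=1
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret

# Compute node localrc (sketch): run only the compute services and
# point everything else at the controller
HOST_IP=192.168.1.11
FLAT_INTERFACE=eth1
MULTI_HOST=1
SERVICE_HOST=192.168.1.10
MYSQL_HOST=192.168.1.10
RABBIT_HOST=192.168.1.10
ENABLED_SERVICES=n-cpu,n-net,n-api
```

Note devstack is aimed at development and testing; it gives you two cooperating nodes, but not real redundancy of the control plane, so for "semi-serious use" a packaged install may serve better.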
Re: [Openstack] Swift proxy and swift-informant
On Ubuntu, you can try the following:

apt-get install python-software-properties
add-apt-repository ppa:swift-core/release
apt-get update
apt-get install swift python-swiftclient

Also look into the doc at this link: http://docs.openstack.org/developer/swift/howto_installmultinode.html If you want to use the latest swift code, you can get the code from GitHub and build it yourself, but I would not recommend doing that for a production system. Hope that helps. Tong Li Emerging Technologies Standards Building 501/B205 liton...@us.ibm.com

From: Giovanni Colapinto giovanni.colapi...@gmail.com To: Tong Li/Raleigh/IBM@IBMUS Cc: openstack@lists.launchpad.net, openstack-bounces+litong01=us.ibm@lists.launchpad.net Date: 04/04/2013 09:20 AM Subject: Re: [Openstack] Swift proxy and swift-informant

> Thank you for the reply. How I can upgrade swift?
[Openstack] OpenStack 2013.1 (Grizzly) is released !
Hello everyone, The Grizzly development cycle, started 6 months ago, ends today with the immediate release of OpenStack 2013.1. It is the result of the work of 550 different people who contributed code, documentation, or infrastructure configurations. This amazing journey saw the implementation of 232 blueprints and the fixing of 1900 bugs within the 7 integrated projects alone. You can find source tarballs for each integrated project, together with lists of features and bugfixes, at:

OpenStack Compute: https://launchpad.net/nova/grizzly/2013.1
OpenStack Object Storage: https://launchpad.net/swift/grizzly/1.8.0
OpenStack Image Service: https://launchpad.net/glance/grizzly/2013.1
OpenStack Networking: https://launchpad.net/quantum/grizzly/2013.1
OpenStack Block Storage: https://launchpad.net/cinder/grizzly/2013.1
OpenStack Identity: https://launchpad.net/keystone/grizzly/2013.1
OpenStack Dashboard: https://launchpad.net/horizon/grizzly/2013.1

The Grizzly Release Notes contain an overview of the key features, as well as upgrade notes and current lists of known issues. You can access them at: https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly

In 11 days our community will gather in Portland, OR for the OpenStack Summit: 4 days of conference to discuss the state of OpenStack, and a Design Summit to plan the next 6-month development cycle, codenamed Havana. See https://www.openstack.org/summit/portland-2013/ for more details. Congratulations everyone on this awesome release!

-- Thierry Carrez (ttx) Release Manager, OpenStack
Re: [Openstack] [openstack-announce] OpenStack 2013.1 (Grizzly) is released !
Amazing accomplishment everyone! I just posted a blog post with a few highlights and included a link to your message, Thierry. http://www.openstack.org/blog/2013/04/openstack-grizzly/

On Thursday, April 4, 2013 9:47am, Thierry Carrez thie...@openstack.org said:
> Hello everyone, The Grizzly development cycle, started 6 months ago, ends today with the immediate release of OpenStack 2013.1. ... Congratulations everyone on this awesome release! -- Thierry Carrez (ttx) Release Manager, OpenStack
[Openstack] Ceilometer 2013.1 released
Hello, Ceilometer 2013.1 has been released, ending the Grizzly development cycle, during which we were an incubated project; we become an integrated project for the Havana cycle. The tarball can be found, with the list of features and bugfixes, here: https://launchpad.net/ceilometer/grizzly/2013.1 Congratulations to everyone for this! -- Julien Danjou -- Free Software hacker - freelance consultant -- http://julien.danjou.info
Re: [Openstack] How to purge old meter data of Ceilometer
I created a blueprint for this: https://blueprints.launchpad.net/ceilometer/+spec/purge-data

On Thu, Apr 4, 2013 at 4:06 AM, Julien Danjou jul...@danjou.info wrote:
> On Thu, Apr 04 2013, Harri Pyy wrote:
> > With the default 1 minute interval, Ceilometer collects quite large amounts of meter data. Does Ceilometer provide a TTL configuration option for the meter data, or some other functionality or API for purging old meter data?
>
> Not yet unfortunately. -- Julien Danjou ;; Free Software hacker ; freelance consultant ;; http://julien.danjou.info
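Until something like that blueprint lands, old samples have to be purged by hand against the storage backend. The sketch below only illustrates the idea of a timestamp-cutoff delete: the table and column names are hypothetical, and it runs against a throwaway in-memory SQLite database rather than a real Ceilometer store; check your backend's actual schema (MongoDB or SQL) before attempting anything similar:

```shell
python3 - <<'EOF'
import sqlite3

# Toy stand-in for a SQL-style meter store (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE meter (counter_name TEXT, timestamp TEXT)")
db.executemany("INSERT INTO meter VALUES (?, ?)",
               [("cpu", "2013-01-01 00:00:00"),
                ("cpu", "2013-04-01 00:00:00")])

# The purge itself: drop everything older than a cutoff date.
db.execute("DELETE FROM meter WHERE timestamp < '2013-03-05 00:00:00'")
print(db.execute("SELECT COUNT(*) FROM meter").fetchone()[0])  # prints 1
EOF
```

Run periodically from cron with a cutoff of now minus the retention window, this gives a crude TTL until Ceilometer grows a real one.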
Re: [Openstack] [Nova] Creating instances with custom UUIDs
On Wed, Apr 3, 2013 at 7:05 PM, Chris Behrens cbehr...@codestud.com wrote: I'm having a hard time understanding the original problem. nova boot should return in milliseconds. There's no blocking on provisioning. The only thing that could block is DB access, as AFAIK the RPC to the scheduler is still pass by reference. - Chris On Apr 3, 2013, at 8:32 PM, Rafael Rosa rafaelros...@gmail.com wrote: API wise I was thinking about something like nova boot --custom-instance-uuid ABC... or something like that. To avoid problems with any current implementation I would set it to disabled by default and add a config option to enable it. As for collisions, my take is that if you're passing a custom UUID you know what you're doing and is generating them in a way that won't be duplicated. Just by using standard UUID generators the possibility of collisions are really really small. Thanks for the feeback :) Rafael Rosa Fu 2013/4/3 Michael Still mi...@stillhq.com On Thu, Apr 4, 2013 at 9:16 AM, Rafael Rosa rafaelros...@gmail.com wrote: Hi, In our OpenStack installation we have an issue when creating new instances, we need to execute some long running processes before calling nova boot and the call blocks for the end user for a while. We would like to return immediately to the caller with a final instance UUID and do the work on the background, but it's only generated when during actual instance creation, which is a no go in our situation. The instance_create database call already accepts an instance UUID as an argument, so that bit looks like it should work out well for you. So, I guess this is mostly a case of working out how you want the API to work. Personally, I would have no problem with something like this, so long as we could somehow reserve the instance UUID so that another caller doesn't try and create an instance with the same UUID while you're doing your slow thing. 
Cheers, Michael
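Michael's point about pre-generating the UUID so the caller gets it immediately can be sketched in a few lines. Everything here is illustrative: the --custom-instance-uuid flag is only a proposal in this thread, not an existing nova option.

```python
import uuid

def reserve_instance_uuid():
    """Generate an instance UUID client-side so it can be handed back
    to the caller immediately, before any slow pre-boot work runs.

    Version-4 (random) UUIDs make accidental collisions astronomically
    unlikely, which is why passing a pre-generated UUID to Nova's
    instance_create is plausible -- though, as noted in the thread, the
    server would still need to reject an already-used UUID.
    """
    return str(uuid.uuid4())

instance_uuid = reserve_instance_uuid()
# Return instance_uuid to the caller right away, then later run the
# slow provisioning and boot, e.g. with the *proposed* flag:
#   nova boot --custom-instance-uuid <instance_uuid> ...
```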
Re: [Openstack] [Quantum] Anybody implemented DMZ?
Hi David,

The quantum network node will route traffic between the non-DMZ and DMZ networks if both of those subnets are uplinked to the same quantum router. I believe that if you create a separate router for your DMZ hosts, traffic in/out of that network should route out to your physical infrastructure, which will go through your router to do filtering.

Thanks, Aaron

On Wed, Apr 3, 2013 at 8:26 AM, David Kang dk...@isi.edu wrote:

Hi, We are trying to set up Quantum networking for non-DMZ and DMZ networks. The cloud has both non-DMZ networks and a DMZ network. We need to route traffic from the DMZ network to a specific router before it reaches anywhere else in the non-DMZ networks. However, the Quantum network node routes the traffic between the DMZ network and non-DMZ networks within itself by default. Has anybody configured Quantum for this case? Any help will be appreciated. We are using the Quantum linuxbridge-agent. Thanks, David

-- Dr. Dong-In David Kang Computer Scientist USC/ISI
Re: [Openstack] [Quantum] Anybody implemented DMZ?
Hi Aaron,

Thank you for your reply. We deploy one (quantum) subnet as a DMZ network and the other (quantum) subnet as a non-DMZ network. Both are routed through the network node where the quantum services (dhcp, l3, linuxbridge) are running, and they can currently talk to each other through it. However, we do not want the network node to route the traffic between them directly. Instead, we want the traffic routed to different (external) routers so that we can apply filtering/firewalls/etc. to the traffic from the DMZ network. Do you think this is possible using two l3-agents, or in some other way? Currently, I set up the routing for those two subnets manually.

Thanks, David

- Original Message - Hi David, The quantum network node would route traffic between the non-DMZ and DMZ networks if both of those subnets are uplinked to the same quantum router. [...] Thanks, Aaron

-- Dr. Dong-In David Kang Computer Scientist USC/ISI
[Openstack] Gerrit Review + SSH
Hi, As the OpenStack dev cycle involves the Gerrit review tool, which requires ssh access to the gerrit server, I was wondering whether any of you face problems where your company/org does not allow ssh to external hosts. In general, what is the best practice, in terms of environment, for submitting code for review? I appreciate any responses in advance. Thanks, Ronak
Re: [Openstack] [Quantum] Anybody implemented DMZ?
In my reply I suggested that you create two quantum routers, which I believe should solve this for you:

quantum net-create DMZ-net --external=True
quantum subnet-create --name DMZ-Subnet1 DMZ-net dmz_cidr  # Public ip pool
quantum net-create non-DMZ --external=True
quantum subnet-create --name nonDMZ-Subnet1 non-DMZ non_dmz_cidr  # Public ip pool
quantum router-create DMZ-router
quantum router-create non-DMZ-router
quantum router-interface-add DMZ-router DMZ-Subnet1
quantum router-interface-add non-DMZ-router nonDMZ-Subnet1
quantum router-gateway-set DMZ-router DMZ-net
quantum router-gateway-set non-DMZ-router non-DMZ

On Thu, Apr 4, 2013 at 10:51 AM, David Kang dk...@isi.edu wrote:

Hi Aaron, Thank you for your reply. We deploy one (quantum) subnet as a DMZ network and the other as a non-DMZ network. [...] Do you think it is possible using two l3-agents or any other way? Currently, I manually set up routing for those two subnets. Thanks, David

-- Dr. Dong-In David Kang Computer Scientist USC/ISI
Re: [Openstack] Gerrit Review + SSH
On 2013-04-04 10:51:20 -0700 (-0700), Ronak Shah wrote: As OS dev cycle involves Gerrit review tool which requires ssh into the gerrit server, I was wondering if any of you guys face problems where your company/org does not allow ssh to external hosts. [...]

It usually involves the uphill battle of convincing whoever manages network security/firewalls/proxies for your employer that the Internet is more than just a bunch of Web pages. Companies which exclusively limit their employees to only browsing the Web are basically cutting themselves off from innovations which rely on a myriad of other protocols. For non-technology companies that might be fine, but for a technology company that's often a sign that it's going out of business pretty soon.

-- Jeremy Stanley
Re: [Openstack] Gerrit Review + SSH
On 04/04/2013 02:23 PM, Jeremy Stanley wrote: [...] For non-technology companies that might be fine, but for a technology company that's often a sign that it's going out of business pretty soon.

+1000
Re: [Openstack] Gerrit Review + SSH
+1 Regards, Pranav

On Fri, Apr 5, 2013 at 12:07 AM, Jay Pipes jaypi...@gmail.com wrote: [...] +1000
Re: [Openstack] multi-host mode in quantum
Hello, As Grizzly was released today, can anybody confirm that multi-host network mode is supported by Quantum in this new release? Thanks, Xin

On 12/13/2012 6:04 AM, Heiko Krämer wrote: Hey guys, it's a good point. I hope this option will be included in Grizzly. Since the switch to Quantum we now get network I/O bottlenecks without using all the NICs of our nodes, so I'm looking forward to Grizzly. Greetings, Heiko

Am 12.12.2012 17:11, schrieb Gary Kotton: On 12/12/2012 05:58 PM, Xin Zhao wrote: Hello, If I understand it correctly, multi-host network mode is not supported (yet) in quantum in Folsom. I wonder what's the recommended way of running multiple network nodes (for load balancing and bandwidth concerns) in quantum? Any documentation links will be appreciated.

At the moment this is in discussion upstream. It is currently not supported, but we are hoping to have support for this in Grizzly. Thanks, Xin
Re: [Openstack] multi-host mode in quantum
Hi Xin, This is in the release notes at https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly#OpenStack_Network_Service_.28Quantum.29 - Multiple Network support for multiple network nodes running L3-agents and DHCP-agents - provides better scale + high availability for quantum deployments. Anne

On Thu, Apr 4, 2013 at 2:04 PM, Xin Zhao xz...@bnl.gov wrote: Hello, As Grizzly is released today, can anybody confirm that the multi-host network mode is supported by Quantum in this new release? Thanks, Xin [...]
Re: [Openstack] multi-host mode in quantum
Unfortunately, I don't think multiple network nodes is the same as the multi-host network mode that Xin is asking about. The following did not make it into grizzly and is now targeted for havana: https://blueprints.launchpad.net/quantum/+spec/quantum-multihost -- Henry

On Thu, Apr 04, at 3:11 pm Anne Gentle (a...@openstack.org) wrote: Hi Xin, This is in the release notes at https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly#OpenStack_Network_Service_.28Quantum.29 - Multiple Network support for multiple network nodes running L3-agents and DHCP-agents - provides better scale + high availability for quantum deployments. Anne [...]
Re: [Openstack] multi-host mode in quantum
Henry, thank you for clarifying! I was just posting what was in the release notes. Perhaps someone on the Quantum core team can add to the release notes for clarity. Anne

On Thu, Apr 4, 2013 at 2:23 PM, Henry Gessau ges...@cisco.com wrote: Unfortunately, I don't think multiple network nodes is the same multi-host network mode that Xin is asking about. The following did not make it into grizzly and is now targeted for havana: https://blueprints.launchpad.net/quantum/+spec/quantum-multihost -- Henry [...]
Re: [Openstack] multi-host mode in quantum
Hi All,

On Thu, Apr 4, 2013 at 3:23 PM, Henry Gessau ges...@cisco.com wrote: Unfortunately, I don't think multiple network nodes is the same multi-host network mode that Xin is asking about. The following did not make it into grizzly and is now targeted for havana: https://blueprints.launchpad.net/quantum/+spec/quantum-multihost

I'm hoping that means there still needs to be a central dhcp server, but the multiple L3 agents mean actual traffic from VMs can be managed directly on the compute node they are running on and doesn't need to be sent back to a different network node before being nat'ed or tagged? Multi-host was the single most important feature for me in the quantum blueprints; if nat'ed networks need to be piped through a gateway box other than the physical host the instance is on, quantum remains a no-go for me. Having a central (but redundant) dhcp with distributed NAT may actually be an improvement over having to run dnsmasq literally everywhere. -Jon
[Openstack] AUTO: I am out of the office. (returning 2013/04/06)
I am out of the office until 2013/04/06. I will respond to your message when I return. Note: This is an automated response to your message [Openstack] OpenStack 2013.1 (Grizzly) is released ! sent on 04/04/2013 22:47:45. This is the only notification you will receive while this person is away.
[Openstack] Grizzly release notes and the never-ending image-cache-manager issue
Michael (et al): The Grizzly release notes (https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly) say: The image-cache-manager has been turned on by default. This may have potential issues for users who are using a shared filesystem for their instances_path. Set remove_unused_base_images=false in your nova.conf file on your compute nodes to revert this behaviour.

My understanding was that in Grizzly this isn't an issue, since shared storage is automatically detected (e.g. https://bugs.launchpad.net/nova/+bug/1075018). Is it safe to zap this from the release notes? Lorin

-- Lorin Hochstein Lead Architect - Cloud Services Nimbis Services, Inc. www.nimbisservices.com
Re: [Openstack] Grizzly release notes and the never-ending image-cache-manager issue
On Fri, Apr 5, 2013 at 7:20 AM, Lorin Hochstein lo...@nimbisservices.com wrote: [...] My understanding was that in Grizzly, this wasn't an issue since shared storage was automatically detected (e.g. https://bugs.launchpad.net/nova/+bug/1075018)

This is my understanding as well -- that we now detect shared storage and do the right thing. I don't have any data on how much real-world testing that code has seen, though.

Is it safe to zap this from the release notes?

Safe is a relative thing. I'd be more comfortable if I knew that someone had deployed the code and had a good experience, but when I ask on the operators list I get puzzled stares... Michael
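For anyone who wants the conservative behaviour the release notes describe while this shakes out, the opt-out is a single nova.conf setting on each compute node (fragment shown for context; the option name comes straight from the release notes):

```
# /etc/nova/nova.conf on each compute node -- disable automatic
# removal of unused base images from the image cache, e.g. when
# instances_path lives on shared storage.
[DEFAULT]
remove_unused_base_images=false
```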
Re: [Openstack] Gerrit Review + SSH
On Thu, Apr 04, 2013 at 10:51:20AM -0700, Ronak Shah wrote: Hi, As OS dev cycle involves Gerrit review tool which requires ssh into the gerrit server, I was wondering if any of you guys face problems where your company/org does not allow ssh to external hosts. In general, what is the best practice in terms of environment for generating code review?

The traditional workaround when companies have insane firewalls blocking SSH is to run an SSH server on port 443, since firewalls typically allow through any traffic on the HTTPS port, even if it isn't using the HTTPS protocol :-) This workaround only fails if your company is also doing a man-in-the-middle attack on HTTPS traffic[1]

GitHub actually has an SSH server on port 443 for exactly this reason: https://help.github.com/articles/using-ssh-over-the-https-port

I don't know how hard it would be for the OpenStack Infrastructure team to officially make Gerrit available via port 443, in addition to the normal SSH port.

Regards, Daniel

[1] Yes, some companies really do MITM all HTTPS connections their employees make :-(

-- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
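For what it's worth, the client side of the port-443 trick is just an ssh config stanza. The GitHub host below is the one documented in the linked article; pointing the same trick at Gerrit would only work if an equivalent alternate endpoint actually existed there:

```
# ~/.ssh/config -- reach an SSH server published on the HTTPS port
Host github.com
    HostName ssh.github.com
    Port 443
    User git
```

With that in place, git operations against github.com transparently go out over port 443.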
Re: [Openstack] Gerrit Review + SSH
On 2013-04-04 22:11:10 +0100 (+0100), Daniel P. Berrange wrote: [...] I don't know how hard it would be for OpenStack Infrastructure team to officially make Gerrit available via port 443, in addition to the normal SSH port.

We'd need to use different hostnames mapped to different IP addresses since 443/tcp is already in use on review.openstack.org for, well, HTTPS (the availability of fancy proxies which can differentiate SSH from SSL/TLS notwithstanding--do those exist?). The bigger question is whether it's worth the effort to maintain a workaround like that... are there companies who want their employees contributing to OpenStack development but won't grant those same developers access to our code review system over the Internet? If so, maybe some brave soul will take pity on them and set up a TCP bounce proxy somewhere on port 443 to forward to port 29418 on our Gerrit server for Git+SSH access on an alternate address and port. I don't think that would need any sort of buy-off from our Infrastructure Team (we can discuss if someone's actually interested in setting it up), but probably wouldn't be official all the same.

-- Jeremy Stanley
Re: [Openstack] Swift proxy and swift-informant
Yes, I read that documentation on upgrade, but unfortunately in that repository quantal distribution doesn't exists. Is it safe to use precise packages even if I got quantal? Thank you On Thu, Apr 4, 2013 at 4:29 PM, Tong Li liton...@us.ibm.com wrote: on ubuntu, you can try the following. apt-get install python-software-properties add-apt-repository ppa:swift-core/release apt-get update apt-get install swift python-swiftclient Also look into the doc at this link. http://docs.openstack.org/developer/swift/howto_installmultinode.html If you want to use the latest swift code, you can get the code from github and build it yourself but I would not recommend to do it for the production system. Hope that helps. Tong Li Emerging Technologies Standards Building 501/B205 liton...@us.ibm.com [image: Inactive hide details for Giovanni Colapinto ---04/04/2013 09:20:50 AM---Thank you for the reply. How I can upgrade swift? On T]Giovanni Colapinto ---04/04/2013 09:20:50 AM---Thank you for the reply. How I can upgrade swift? On Thu, Apr 4, 2013 at 3:12 PM, Tong Li litong01@ From: Giovanni Colapinto giovanni.colapi...@gmail.com To: Tong Li/Raleigh/IBM@IBMUS, Cc: openstack@lists.launchpad.net, openstack-bounces+litong01= us.ibm@lists.launchpad.net Date: 04/04/2013 09:20 AM Subject: Re: [Openstack] Swift proxy and swift-informant -- Thank you for the reply. How I can upgrade swift? On Thu, Apr 4, 2013 at 3:12 PM, Tong Li *liton...@us.ibm.com*liton...@us.ibm.com wrote: swob is not a separate package, it is part of the swift core. You may either get older version of informant which does not use swob or upgrade swift. Tong Li Emerging Technologies Standards Building 501/B205* **liton...@us.ibm.com* liton...@us.ibm.com [image: Inactive hide details for Giovanni Colapinto ---04/04/2013 04:26:57 AM---Hi to all. I try to configure swift proxy to work with]Giovanni Colapinto ---04/04/2013 04:26:57 AM---Hi to all. I try to configure swift proxy to work with swift-informant. 
From: Giovanni Colapinto <giovanni.colapi...@gmail.com>
To: openstack@lists.launchpad.net
Date: 04/04/2013 04:26 AM
Subject: [Openstack] Swift proxy and swift-informant

Hi all,

I am trying to configure the Swift proxy to work with swift-informant. I installed swift-informant and added/modified the following sections in /etc/swift/proxy-server.conf:

[pipeline:main]
pipeline = informant healthcheck cache swift3 authtoken keystone proxy-server

[filter:informant]
use = egg:informant#informant
statsd_host = 10.0.1.154
metric_name_prepend = inndig.

When I start the proxy server it fails with the following traceback:

Traceback (most recent call last):
  File "/usr/bin/swift-proxy-server", line 22, in <module>
    run_wsgi(conf_file, 'proxy-server', default_port=8080, **options)
  File "/usr/lib/python2.7/dist-packages/swift/common/wsgi.py", line 138, in run_wsgi
    loadapp('config:%s' % conf_file, global_conf={'log_name': log_name})
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 271, in loadobj
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
    return loader.get_context(object_type, name, global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 450, in get_context
    global_additions=global_additions)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 562, in _pipeline_app_context
    for name in pipeline[:-1]]
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 454, in get_context
    section)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 476, in _context_from_use
    object_type, name=use, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 406, in get_context
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 328, in _loadegg
    return loader.get_context(object_type, name, global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 620, in get_context
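The traceback is cut off in the archive, but it dies inside paste.deploy's egg loader (`_loadegg` / `get_context`), which usually means the `egg:informant#informant` reference cannot be resolved, i.e. the informant distribution or its paste entry point is not visible to the Python environment the proxy runs under. A small diagnostic sketch, as an annotation: the helper name is mine, and it uses Python 3's importlib.metadata, whereas the 2013-era stack would have used pkg_resources:

```python
from importlib import metadata


def has_paste_entry_point(dist_name, entry_name, group="paste.filter_factory"):
    """Return True if installed distribution `dist_name` exposes
    `entry_name` in the given entry-point group, which is what
    paste.deploy's egg: loader looks up for a [filter:...] section."""
    try:
        eps = metadata.distribution(dist_name).entry_points
    except metadata.PackageNotFoundError:
        return False  # the distribution is not installed at all
    return any(ep.group == group and ep.name == entry_name for ep in eps)


# Before `use = egg:informant#informant` can work, something like
# has_paste_entry_point("informant", "informant") must hold.
```

If the check fails for informant, the usual fix is to reinstall swift-informant into the same site-packages (or virtualenv) that swift-proxy-server actually imports from.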
[Openstack-ubuntu-testing-notifications] Build Failure: precise_havana_nova_trunk #36
Title: precise_havana_nova_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/36/
  Project: precise_havana_nova_trunk
  Date of build: Thu, 04 Apr 2013 12:02:32 -0400
  Build duration: 4 min 16 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder

Health Report
  Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
  No changes.

Console Output
[...truncated 4 lines...]
Last Built Revision: Revision f96c9ab31700bc37792de0b3bec97edd7d99aa29 (origin/master)
Checkout: nova / /var/lib/jenkins/slave/workspace/precise_havana_nova_trunk/nova
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/nova.git
ERROR: Problem fetching from origin / origin - could be unavailable. Continuing anyway.
hudson.plugins.git.GitException: Error performing command: git fetch -t https://github.com/openstack/nova.git +refs/heads/*:refs/remotes/origin/*
Command "git fetch -t https://github.com/openstack/nova.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
error: The requested URL returned error: 403 Forbidden while accessing https://github.com/openstack/nova.git/info/refs
fatal: HTTP request failed
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:776)
    at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:741)
    at hudson.plugins.git.GitAPI.fetch(GitAPI.java:190)
    ... 16 more
ERROR: Could not fetch from any repository
FATAL: Could not fetch from any repository
hudson.plugins.git.GitException: Could not fetch from any repository
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1061)
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
    at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2236)
    ...
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to: openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe: https://launchpad.net/~openstack-ubuntu-testing-notifications
More help: https://help.launchpad.net/ListHelp
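The 403 Forbidden from github.com above, like the early-EOF clone failures later in this digest, is a transient server-side error, and the job fails because it gives up after a single attempt. A hedged sketch of the obvious mitigation, retrying the clone with backoff (the function name and parameters are mine, not anything the Jenkins git plugin provides):

```python
import subprocess
import time


def clone_with_retries(url, dest, attempts=3, base_delay=2.0):
    """Run `git clone url dest`, retrying on failure with a linear
    backoff; returns True on success, False once all attempts fail."""
    for i in range(attempts):
        result = subprocess.run(["git", "clone", url, dest],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True
        # Transient server errors surface here, e.g.
        # "error: The requested URL returned error: 403 Forbidden" or
        # "error: RPC failed; result=56, HTTP code = 200".
        if i + 1 < attempts:
            time.sleep(base_delay * (i + 1))
    return False
```

This relies on git removing the destination directory it created when a clone fails, so each retry starts clean.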
[Openstack-ubuntu-testing-notifications] Build Still Failing: quantal_folsom_keystone_stable #111
Title: quantal_folsom_keystone_stable

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/quantal_folsom_keystone_stable/111/
  Project: quantal_folsom_keystone_stable
  Date of build: Thu, 04 Apr 2013 14:32:33 -0400
  Build duration: 2 min 50 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder

Health Report
  Build stability: All recent builds failed. Score: 0

Changes
  key all backends off of hash of pki token. (by ayoung)
    edit: keystone/token/backends/memcache.py
    edit: keystone/token/backends/sql.py
    edit: keystone/token/core.py
    edit: keystone/token/backends/kvs.py

Console Output
[...truncated 1765 lines...]
Applying patch sql_connection.patch
patching file etc/keystone.conf.sample
Applying patch CVE-2013-1865.patch
patching file keystone/service.py
Hunk #1 FAILED at 490.
1 out of 1 hunk FAILED -- rejects in file keystone/service.py
patching file tests/test_service.py
Hunk #1 FAILED at 150.
1 out of 1 hunk FAILED -- rejects in file tests/test_service.py
Patch CVE-2013-1865.patch can be reverse-applied
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-56407614-0295-4281-826b-4602166a7d31', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/keystone/folsom /tmp/tmpR0YVrQ/keystone
mk-build-deps -i -r -t apt-get -y /tmp/tmpR0YVrQ/keystone/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 255b1d43500f5d98ec73a0056525b492b14fec05..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D quantal --newversion 2012.2.4+git201304041433~quantal-0ubuntu1 Automated Ubuntu testing build:
dch -a [1889299] key all backends off of hash of pki token.
dch -a [9e0a97d] Retry http_request and json_request failure.
dch -a [40660f0] auth_token hash pki key PKI tokens on hash in memcached when accessed by auth_token middelware
dch -a [b3ce6a7] Use the right subprocess based on os monkeypatch
dch -a [bb1ded0] add check for config-dir parameter (bug1101129)
debcommit
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in <module>
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'quantal-amd64-56407614-0295-4281-826b-4602166a7d31', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
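The key line above is "Patch CVE-2013-1865.patch can be reverse-applied": the security fix has already landed in the upstream branch being packaged, so the packaging patch no longer applies forward but applies cleanly in reverse, and the cure is to drop it from the patch series. The check the tooling effectively performs can be sketched as follows (a simplification using GNU patch; the helper name is mine):

```python
import subprocess


def is_reverse_applied(patch_path, src_dir, strip=1):
    """Dry-run the patch in reverse under src_dir; exit status 0 means
    the tree already contains the patched content (the situation the
    build log reports as "can be reverse-applied").  patch_path must be
    an absolute path, because -d makes patch chdir into src_dir first."""
    result = subprocess.run(
        ["patch", "--dry-run", "-R", "-f", "-p%d" % strip,
         "-d", src_dir, "-i", patch_path],
        capture_output=True,
    )
    return result.returncode == 0
```

`--dry-run` leaves the tree untouched and `-f` suppresses the interactive "reversed patch detected" prompt, so the function is safe to call from a build script.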
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_keystone_trunk #236
Title: precise_grizzly_keystone_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/236/
  Project: precise_grizzly_keystone_trunk
  Date of build: Thu, 04 Apr 2013 14:49:30 -0400
  Build duration: 2 min 50 sec
  Build cause: Started by user Adam Gandelman
  Built on: pkg-builder

Health Report
  Build stability: 1 out of the last 5 builds failed. Score: 80

Changes
  No changes.

Console Output
[...truncated 454 lines...]
Receiving objects: 94% (22712/24062), 10.65 MiB | 5 KiB/s
error: RPC failed; result=56, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
    at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2236)
    ...
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_keystone_trunk #237
Title: precise_grizzly_keystone_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/237/
  Project: precise_grizzly_keystone_trunk
  Date of build: Thu, 04 Apr 2013 15:01:29 -0400
  Build duration: 1 min 50 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder

Health Report
  Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
  No changes.

Console Output
Started by an SCM change
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk
Last Built Revision: Revision d5dcb3babf57c6393c15f0ffc911fc7a833d58bd (origin/milestone-proposed)
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/keystone.git
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Email was triggered for: Failure
[Openstack-ubuntu-testing-notifications] Build Failure: precise_havana_keystone_trunk #8
Title: precise_havana_keystone_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/8/
  Project: precise_havana_keystone_trunk
  Date of build: Thu, 04 Apr 2013 15:01:30 -0400
  Build duration: 7 min 13 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder

Health Report
  Build stability: 2 out of the last 5 builds failed. Score: 60

Changes
  No changes.

Console Output
[...truncated 408 lines...]
Receiving objects: 58% (14175/24062), 4.97 MiB
error: RPC failed; result=56, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    ...
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_keystone_trunk #239
Title: precise_grizzly_keystone_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/239/
  Project: precise_grizzly_keystone_trunk
  Date of build: Thu, 04 Apr 2013 16:14:59 -0400
  Build duration: 31 sec
  Build cause: Started by user Adam Gandelman
  Built on: pkg-builder

Health Report
  Build stability: 3 out of the last 5 builds failed. Score: 40

Changes
  No changes.

Console Output
Started by user Adam Gandelman
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk
Last Built Revision: Revision d5dcb3babf57c6393c15f0ffc911fc7a833d58bd (origin/milestone-proposed)
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/keystone.git
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Email was triggered for: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_keystone_trunk #240
Title: precise_grizzly_keystone_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/240/
  Project: precise_grizzly_keystone_trunk
  Date of build: Thu, 04 Apr 2013 16:20:19 -0400
  Build duration: 4 min 24 sec
  Build cause: Started by user Adam Gandelman
  Built on: pkg-builder

Health Report
  Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
  No changes.

Console Output
[...truncated 494 lines...]
Receiving objects: 69% (16603/24062), 5.57 MiB | 9 KiB/s
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    ...
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_keystone_trunk #241
Title: precise_grizzly_keystone_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/241/
  Project: precise_grizzly_keystone_trunk
  Date of build: Thu, 04 Apr 2013 16:24:53 -0400
  Build duration: 3 min 31 sec
  Build cause: Started by user Adam Gandelman
  Built on: pkg-builder

Health Report
  Build stability: All recent builds failed. Score: 0

Changes
  No changes.

Console Output
[...truncated 364 lines...]
Receiving objects: 53% (12883/24062), 3.00 MiB | 1 KiB/s
error: RPC failed; result=56, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    ...
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_glance_trunk #9
Title: precise_havana_glance_trunk

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/9/
  Project: precise_havana_glance_trunk
  Date of build: Thu, 04 Apr 2013 16:33:41 -0400
  Build duration: 8.2 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder

Health Report
  Build stability: 4 out of the last 5 builds failed. Score: 20

Changes
  No changes.

Console Output
[...truncated 218 lines...]
remote: Compressing objects: 100% (5078/5078), done.
Receiving objects: 1% (284/20594), 84.00 KiB | 24 KiB/s
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    ...
[Openstack-ubuntu-testing-notifications] Build Failure: precise_folsom_nova_stable #721
Title: precise_folsom_nova_stable

General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_nova_stable/721/
  Project: precise_folsom_nova_stable
  Date of build: Thu, 04 Apr 2013 16:31:30 -0400
  Build duration: 2 min 19 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder

Health Report
  Build stability: All recent builds failed. Score: 0

Changes
  No changes.

Console Output
[...truncated 346 lines...]
Receiving objects: 8% (15020/168014), 3.79 MiB | 65 KiB/s
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    ...
[Openstack-ubuntu-testing-notifications] Build Failure: quantal_folsom_nova_stable #712
Title: quantal_folsom_nova_stable
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_stable/712/
Project: quantal_folsom_nova_stable
Date of build: Thu, 04 Apr 2013 16:34:03 -0400
Build duration: 1 min 23 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 5 builds failed. Score: 80
Changes: No Changes
Console Output
[...truncated 10 lines...]
hudson.plugins.git.GitException: Could not clone https://github.com/openstack/nova.git
    at hudson.plugins.git.GitAPI.clone(GitAPI.java:245)
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1029)
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
    at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2236)
    at hudson.remoting.UserRequest.perform(UserRequest.java:118)
    at hudson.remoting.UserRequest.perform(UserRequest.java:48)
    at hudson.remoting.Request$2.run(Request.java:326)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at hudson.remoting.Engine$1$1.run(Engine.java:60)
    at java.lang.Thread.run(Thread.java:722)
Caused by: hudson.plugins.git.GitException: Error performing command: git clone --progress -o origin https://github.com/openstack/nova.git /var/lib/jenkins/slave/workspace/quantal_folsom_nova_stable/nova
Command "git clone --progress -o origin https://github.com/openstack/nova.git /var/lib/jenkins/slave/workspace/quantal_folsom_nova_stable/nova" returned status code 143:
Cloning into '/var/lib/jenkins/slave/workspace/quantal_folsom_nova_stable/nova'...
Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:776)
    at hudson.plugins.git.GitAPI.access$000(GitAPI.java:38)
    at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:241)
    at hudson.plugins.git.GitAPI$1.invoke(GitAPI.java:221)
    at hudson.FilePath.act(FilePath.java:842)
    at hudson.FilePath.act(FilePath.java:824)
    at hudson.plugins.git.GitAPI.clone(GitAPI.java:221)
    ... 13 more
Caused by: hudson.plugins.git.GitException: Command "git clone --progress -o origin https://github.com/openstack/nova.git /var/lib/jenkins/slave/workspace/quantal_folsom_nova_stable/nova" returned status code 143:
Cloning into '/var/lib/jenkins/slave/workspace/quantal_folsom_nova_stable/nova'...
Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    [remainder of the hudson.remoting stack repeats verbatim, ending at java.lang.Thread.run(Thread.java:722)]
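The clone failures above died with exit status 143 after the transfer stalled, meaning the git process was killed mid-clone rather than failing on its own. A transient stall like that is often absorbed by simply retrying the command. The sketch below is illustrative only and assumes bash; the `retry` helper is hypothetical, not part of the Jenkins job.

```shell
# Hypothetical retry helper (bash): run a command up to N times and
# succeed as soon as one attempt does.
retry() {
  local attempts=$1; shift
  local i rc=1
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0          # success: stop retrying
    rc=$?                     # remember the failing exit code
    echo "attempt $i/$attempts failed (exit $rc)" >&2
  done
  return "$rc"                # all attempts failed
}

# Example (commented out; would hit the network):
#   retry 3 git clone --progress -o origin https://github.com/openstack/nova.git nova
```

A wrapper like this only helps when the stall is transient; a persistently saturated link, as the decaying KiB/s figures above suggest, still needs the transfer made smaller or the mirror moved closer.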
[Openstack-ubuntu-testing-notifications] Build Failure: precise_grizzly_keystone_trunk #244
Title: precise_grizzly_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/244/
Project: precise_grizzly_keystone_trunk
Date of build: Thu, 04 Apr 2013 16:35:28 -0400
Build duration: 6 min 15 sec
Build cause: Started by user Adam Gandelman
Built on: pkg-builder
Health Report: Build stability: All recent builds failed. Score: 0
Changes: No Changes
Console Output
[...truncated 376 lines...]
[git clone progress elided: "Receiving objects" stalled at 52% of 24062 objects as the transfer rate fell from ~158 KiB/s to ~2 KiB/s]
error: RPC failed; result=56, HTTP code = 200
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
    at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:771)
    ... 19 more
Trying next repository
ERROR: Could not clone repository
FATAL: Could not clone
hudson.plugins.git.GitException: Could not clone
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:1041)
    at hudson.plugins.git.GitSCM$2.invoke(GitSCM.java:970)
    at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2236)
    at hudson.remoting.UserRequest.perform(UserRequest.java:118)
    at hudson.remoting.UserRequest.perform(UserRequest.java:48)
    at hudson.remoting.Request$2.run(Request.java:326)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at hudson.remoting.Engine$1$1.run(Engine.java:60)
    at java.lang.Thread.run(Thread.java:722)
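This keystone clone failed differently: the HTTP transfer itself broke down ("RPC failed; result=56", then "early EOF" and "index-pack failed"). Two commonly suggested client-side mitigations are sketched below; neither is taken from this job's configuration, and the buffer value is chosen arbitrarily for illustration.

```shell
# Sketch only: mitigations for "error: RPC failed; result=56 ... early EOF".

# 1) Shallow-clone so only the branch tip is transferred, a fraction of
#    the ~24k objects a full-history fetch needs (needs network access,
#    so it is left commented out here):
#    git clone --depth 1 https://github.com/openstack/keystone.git

# 2) Raise git's HTTP post buffer for large transfers over https
#    (value is in bytes; 500 MB here purely as an example):
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config http.postBuffer 524288000
git -C "$repo" config http.postBuffer   # prints 524288000
```

Neither knob fixes a flaky link, of course; they just shrink the window in which a dropped connection can kill the fetch.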
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_keystone_trunk #245
Title: precise_grizzly_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/245/
Project: precise_grizzly_keystone_trunk
Date of build: Thu, 04 Apr 2013 16:48:54 -0400
Build duration: 18 sec
Build cause: Started by user Adam Gandelman
Built on: pkg-builder
Health Report: Build stability: All recent builds failed. Score: 0
Changes: No Changes
Console Output
Started by user Adam Gandelman
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk
Checkout: precise_grizzly_keystone_trunk / /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision d5dcb3babf57c6393c15f0ffc911fc7a833d58bd (origin/milestone-proposed)
Checkout: keystone / /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk/keystone - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/keystone.git
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_keystone_trunk #247
Title: precise_grizzly_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/247/
Project: precise_grizzly_keystone_trunk
Date of build: Thu, 04 Apr 2013 16:54:40 -0400
Build duration: 9.9 sec
Build cause: Started by user Adam Gandelman
Built on: pkg-builder
Health Report: Build stability: All recent builds failed. Score: 0
Changes: No Changes
Console Output
Started by user Adam Gandelman
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk
Checkout: precise_grizzly_keystone_trunk / /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision d5dcb3babf57c6393c15f0ffc911fc7a833d58bd (origin/milestone-proposed)
Checkout: keystone / /var/lib/jenkins/slave/workspace/precise_grizzly_keystone_trunk/keystone - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/keystone.git
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_nova_trunk #39
Title: precise_havana_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_nova_trunk/39/
Project: precise_havana_nova_trunk
Date of build: Thu, 04 Apr 2013 17:02:30 -0400
Build duration: 2 min 52 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 3 out of the last 5 builds failed. Score: 40
Changes:
- "Add CRUD methods for tags to the EC2 API." by stephen.gran: edits nova/tests/test_db_api.py, nova/db/sqlalchemy/api.py, nova/compute/api.py, nova/db/api.py, nova/tests/api/ec2/test_cloud.py, nova/api/ec2/cloud.py, nova/tests/fake_policy.py, nova/api/ec2/ec2utils.py
- "Fallback to conductor if types are not stashed." by bdelliott: edits nova/compute/resource_tracker.py, nova/tests/compute/test_resource_tracker.py
- "Remove deprecated Grizzly code." by jogo: edits nova/scheduler/filters/__init__.py, nova/availability_zones.py, nova/tests/scheduler/test_host_filters.py, nova/virt/hyperv/volumeops.py, nova/scheduler/filters/type_filter.py, nova/scheduler/filters/trusted_filter.py, nova/tests/scheduler/test_weights.py, doc/source/devref/filter_scheduler.rst, nova/virt/hyperv/vmops.py, nova/virt/hyperv/vif.py, nova/scheduler/weights/__init__.py; deletes nova/virt/libvirt/volume_nfs.py, nova/scheduler/weights/least_cost.py, nova/network/api_deprecated.py, nova/tests/network/test_deprecated_api.py, nova/tests/scheduler/test_least_cost.py
- "Move console scripts to entrypoints." by mikal: edits bin/nova-consoleauth, bin/nova-rootwrap, nova/tests/test_nova_manage.py, bin/nova-cert, bin/nova-dhcpbridge, tools/run_pep8.sh, bin/nova-spicehtml5proxy, bin/nova-api-os-compute, bin/nova-xvpvncproxy, setup.py, nova/tests/baremetal/test_nova_baremetal_deploy_helper.py, bin/nova-clear-rabbit-queues, bin/nova-api-metadata, bin/nova-api-ec2, bin/nova-baremetal-manage, nova/test.py, bin/nova-all, bin/nova-console, bin/nova-scheduler, bin/nova-rpc-zmq-receiver, bin/nova-cells, bin/nova-network, bin/nova-conductor, bin/nova-baremetal-deploy-helper, bin/nova-compute, bin/nova-manage, bin/nova-novncproxy, nova/tests/baremetal/test_nova_baremetal_manage.py, bin/nova-api, bin/nova-objectstore; adds nova/cmd/rpc_zmq_receiver.py, nova/cmd/api_os_compute.py, nova/cmd/consoleauth.py, nova/cmd/baremetal_deploy_helper.py, nova/cmd/cert.py, nova/cmd/dhcpbridge.py, nova/cmd/scheduler.py, nova/cmd/compute.py, nova/cmd/rootwrap.py, nova/cmd/api.py, nova/cmd/network.py, nova/cmd/spicehtml5proxy.py, nova/cmd/xvpvncproxy.py, nova/cmd/clear_rabbit_queues.py, nova/cmd/conductor.py, nova/cmd/api_ec2.py, nova/cmd/console.py, nova/cmd/all.py, nova/cmd/cells.py, nova/cmd/objectstore.py, nova/cmd/baremetal_manage.py, nova/cmd/__init__.py, nova/cmd/api_metadata.py, nova/cmd/novncproxy.py, nova/cmd/manage.py
- "Optimize resource tracker queries for instances" by bdelliott: edits nova/tests/test_db_api.py, nova/db/sqlalchemy/api.py
Console Output
[...truncated 4365 lines...]
Source-Version: 1:2013.2+git201304041703~precise-0ubuntu1
Space: 0
Status: failed
Version: 1:2013.2+git201304041703~precise-0ubuntu1
Finished at 20130404-1704
Build needed 00:00:00, 0k disc space
E: Package build dependencies not satisfied; skipping
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2+git201304041703~precise-0ubuntu1.dsc']' returned non-zero exit status 3
ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'nova_2013.2+git201304041703~precise-0ubuntu1.dsc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/grizzly /tmp/tmpZBmRad/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmpZBmRad/nova/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log b32573ca258cf9a42a72b7d5700bf9da5e432adc..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304041703~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [e844946] Optimize resource tracker queries for instances
dch -a [799a925] Move console scripts to entrypoints.
dch -a [820f43f] Remove deprecated Grizzly code.
dch -a [e9c88b7] Fallback to conductor if types are not stashed.
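The complete command log above is effectively the job's packaging recipe: bzr branch the packaging, mk-build-deps, python setup.py sdist, git log --no-merges to list the new commits, one dch -a entry per commit, then bzr builddeb, sbuild, and (on success) dput and reprepro. The version string passed to dch follows one fixed pattern; the helper below merely reconstructs that pattern for illustration and is not part of the job's tooling.

```shell
# make_version: hypothetical helper reproducing the version pattern seen
# in the log: <epoch>:<upstream>+git<YYYYMMDDHHMM>~<series>-0ubuntu1
make_version() {
  local upstream=$1 series=$2 stamp=$3   # stamp is the build timestamp
  echo "1:${upstream}+git${stamp}~${series}-0ubuntu1"
}

make_version 2013.2 precise 201304041703
# prints 1:2013.2+git201304041703~precise-0ubuntu1
```

Embedding the timestamp keeps every automated build's version strictly newer than the last, so the PPA and reprepro archive always prefer the latest snapshot.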
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_folsom_nova_stable #722
Title: precise_folsom_nova_stable
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_nova_stable/722/
Project: precise_folsom_nova_stable
Date of build: Thu, 04 Apr 2013 17:01:30 -0400
Build duration: 7 min 36 sec
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: All recent builds failed. Score: 0
Changes:
- "Add a format_message method to the Exceptions" by ndipanov: edits nova/exception.py, nova/tests/test_exception.py
- "Use format_message on exceptions instead of str()" by ndipanov: edits nova/api/openstack/compute/servers.py, nova/compute/api.py, nova/api/openstack/compute/contrib/admin_actions.py, nova/api/openstack/compute/contrib/consoles.py, nova/api/openstack/compute/contrib/flavorextraspecs.py, nova/api/openstack/compute/contrib/flavor_access.py, nova/api/openstack/compute/contrib/security_groups.py, nova/api/openstack/compute/contrib/flavormanage.py, nova/api/openstack/compute/contrib/floating_ip_dns.py, nova/api/openstack/compute/server_metadata.py
Console Output
[...truncated 2480 lines...]
Hunk #8 FAILED at 1014.
Hunk #9 succeeded at 1076 with fuzz 1 (offset 21 lines).
Hunk #10 succeeded at 1184 with fuzz 1 (offset 24 lines).
6 out of 10 hunks FAILED -- rejects in file nova/tests/test_quota.py
Patch CVE-2013-1838.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-af2c221e-2cbb-4527-afa5-36c7f2a6fea5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-af2c221e-2cbb-4527-afa5-36c7f2a6fea5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/nova/folsom /tmp/tmphhHtyn/nova
mk-build-deps -i -r -t apt-get -y /tmp/tmphhHtyn/nova/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 5b43cef510b68cff1f6e2f80742d3204b0b51e45..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 2012.2.4+git201304041707~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [056a7df] Use format_message on exceptions instead of str()
dch -a [c4c417e] Set default fixed_ip quota to unlimited.
dch -a [8f8ef21] Add a format_message method to the Exceptions
dch -a [c85683e] Adding netmask to dnsmasq argument --dhcp-range
dch -a [50dece6] Fix Wrong syntax for set:tag in dnsmasq startup option
dch -a [2dd8f3e] LibvirtHybridOVSBridgeDriver update for STP
dch -a [69ba489] Fixes PowerVM spawn failed as missing attr supported_instances
dch -a [28aacf6] Fix bad Log statement in nova-manage
dch -a [524a5a3] Don't include traceback when wrapping exceptions
dch -a [67eb495] Decouple EC2 API from using instance id
dch -a [f8c5492] libvirt: Optimize test_connection and capabilities
dch -a [53626bf] populate dnsmasq lease db with valid leases
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-af2c221e-2cbb-4527-afa5-36c7f2a6fea5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-af2c221e-2cbb-4527-afa5-36c7f2a6fea5', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_glance_trunk #10
Title: precise_havana_glance_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/10/
Project: precise_havana_glance_trunk
Date of build: Thu, 04 Apr 2013 17:05:22 -0400
Build duration: 11 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 4 out of the last 5 builds failed. Score: 20
Changes:
- "Remove internal store references from migration 017" by jbresnah: edits glance/db/sqlalchemy/migrate_repo/versions/017_quote_encrypted_swift_credentials.py
- "Functional tests display the logs of the services they started" by jbresnah: edits glance/tests/functional/__init__.py
- "Make is_public an argument rather than a filter." by mark.washenberger: edits glance/db/sqlalchemy/api.py, glance/tests/unit/v1/test_api.py, glance/tests/functional/db/base.py, glance/db/simple/api.py, glance/tests/unit/test_clients.py, glance/registry/api/v1/images.py
- "Invalid reference to self in functional test test_scrubber.py" by jbresnah: edits glance/tests/functional/test_scrubber.py
Console Output
[...truncated 6446 lines...]
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key)"
gpg: Signature made Thu Apr 4 17:07:30 2013 EDT using RSA key ID 9935ACDC
gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key)"
Checking signature on .changes
Good signature on /tmp/tmpzaS840/glance_2013.2+git201304041706~precise-0ubuntu1_source.changes.
Checking signature on .dsc
Good signature on /tmp/tmpzaS840/glance_2013.2+git201304041706~precise-0ubuntu1.dsc.
Uploading to ppa (via ftp to ppa.launchpad.net):
  Uploading glance_2013.2+git201304041706~precise-0ubuntu1.dsc: done.
  Uploading glance_2013.2+git201304041706~precise.orig.tar.gz: done.
  Uploading glance_2013.2+git201304041706~precise-0ubuntu1.debian.tar.gz: done.
  Uploading glance_2013.2+git201304041706~precise-0ubuntu1_source.changes: done.
Successfully uploaded packages.
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'glance_2013.2+git201304041706~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/g/glance/glance-api_2013.2+git201304022337~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance-common_2013.2+git201304022337~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance-registry_2013.2+git201304022337~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/glance_2013.2+git201304022337~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/python-glance-doc_2013.2+git201304022337~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/g/glance/python-glance_2013.2+git201304022337~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 07eda706f43c907b1d212a48c6b9514d83bbcdc1
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/glance/grizzly /tmp/tmpzaS840/glance
mk-build-deps -i -r -t apt-get -y /tmp/tmpzaS840/glance/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log bd9405402a232eafe701fbd15b4740b7503ed27f..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304041706~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [07eda70] Invalid reference to self in functional test test_scrubber.py
dch -a [d8e3f87] Make is_public an argument rather than a filter.
dch -a [6ea699b] Functional tests display the logs of the services they started
dch -a [f17f483] Add 'set_image_location' policy option
dch -a [f1804c4] Remove internal store references from migration 017
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC glance_2013.2+git201304041706~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A glance_2013.2+git201304041706~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana glance_2013.2+git201304041706~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana glance_2013.2+git201304041706~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
[Openstack-ubuntu-testing-notifications] Build Fixed: precise_havana_quantum_trunk #23
Title: precise_havana_quantum_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/23/
Project: precise_havana_quantum_trunk
Date of build: Thu, 04 Apr 2013 17:03:28 -0400
Build duration: 18 min
Build cause: Started by an SCM change
Built on: pkg-builder
Health Report: Build stability: 1 out of the last 5 builds failed. Score: 80
Changes:
- "Add expected IDs to router.interface.* notifications" by eglynn: edits quantum/db/l3_db.py, quantum/tests/unit/test_l3_plugin.py
Console Output
[...truncated 12037 lines...]
INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-havana', 'quantum_2013.2+git201304041705~precise-0ubuntu1_amd64.changes']
Exporting indices...
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/Release.gpg.new'
Successfully created '/var/lib/jenkins/www/apt/dists/precise-havana/InRelease.new'
Deleting files no longer referenced...
deleting and forgetting pool/main/q/quantum/python-quantum_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-common_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-dhcp-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-l3-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-lbaas-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-metadata-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-bigswitch_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-brocade_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-cisco_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-hyperv_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-linuxbridge-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-linuxbridge_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-metaplugin_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-midonet_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-nec-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-nec_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-nicira_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-openvswitch-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-openvswitch_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-plumgrid_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-ryu-agent_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-plugin-ryu_2013.2+git201304040306~precise-0ubuntu1_all.deb
deleting and forgetting pool/main/q/quantum/quantum-server_2013.2+git201304040306~precise-0ubuntu1_all.deb
INFO:root:Storing current commit for next build: 1daf927c9df046c17a8c3f07350f60cf488a4215
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/quantum/grizzly /tmp/tmpked35M/quantum
mk-build-deps -i -r -t apt-get -y /tmp/tmpked35M/quantum/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log f6f253a6cf981f41a6e725abada22ebc51b1aa4e..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304041705~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [1daf927] Add expected IDs to router.interface.* notifications
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC quantum_2013.2+git201304041705~precise-0ubuntu1_source.changes
sbuild -d precise-havana -n -A quantum_2013.2+git201304041705~precise-0ubuntu1.dsc
dput ppa:openstack-ubuntu-testing/havana quantum_2013.2+git201304041705~precise-0ubuntu1_source.changes
reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-havana quantum_2013.2+git201304041705~precise-0ubuntu1_amd64.changes
Email was triggered for: Fixed
Trigger Success was overridden by another trigger and will not send an email.
Sending email for trigger: Fixed
[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_grizzly_keystone_trunk #249
Title: precise_grizzly_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/249/
Project: precise_grizzly_keystone_trunk
Date of build: Thu, 04 Apr 2013 17:17:46 -0400
Build duration: 6 min 28 sec
Build cause: Started by user Adam Gandelman
Built on: pkg-builder
Health Report: Build stability: All recent builds failed. Score: 0
Changes: No Changes
Console Output
[...truncated 38490 lines...]
Fail-Stage: build
Host Architecture: amd64
Install-Time: 27
Job: keystone_2013.1+git201304041718~precise-0ubuntu1.dsc
Machine Architecture: amd64
Package: keystone
Package-Time: 300
Source-Version: 1:2013.1+git201304041718~precise-0ubuntu1
Space: 14916
Status: attempted
Version: 1:2013.1+git201304041718~precise-0ubuntu1
Finished at 20130404-1724
Build needed 00:05:00, 14916k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1+git201304041718~precise-0ubuntu1.dsc']' returned non-zero exit status 2
ERROR:root:Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1+git201304041718~precise-0ubuntu1.dsc']' returned non-zero exit status 2
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/keystone/grizzly /tmp/tmp4XCw7b/keystone
mk-build-deps -i -r -t apt-get -y /tmp/tmp4XCw7b/keystone/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log 93b8f2ddf7f00747d7ccace8401b2b68a11bf98f..HEAD --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.1+git201304041718~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [ec9115b] Bump stable/grizzly next version to 2013.1.1
dch -a [f4b8ae2] use the roles in the token when recreating
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
debsign -k9935ACDC keystone_2013.1+git201304041718~precise-0ubuntu1_source.changes
sbuild -d precise-grizzly -n -A keystone_2013.1+git201304041718~precise-0ubuntu1.dsc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1+git201304041718~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-grizzly', '-n', '-A', 'keystone_2013.1+git201304041718~precise-0ubuntu1.dsc']' returned non-zero exit status 2
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
[Openstack-ubuntu-testing-notifications] Build Still Failing: raring_grizzly_keystone_trunk #240
Title: raring_grizzly_keystone_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_keystone_trunk/240/
Project: raring_grizzly_keystone_trunk
Date of build: Thu, 04 Apr 2013 17:31:58 -0400
Build duration: 17 sec
Build cause: Started by user Adam Gandelman
Built on: pkg-builder
Health Report: Build stability: 2 out of the last 5 builds failed. Score: 60
Changes: No Changes
Console Output
Started by user Adam Gandelman
Building remotely on pkg-builder in workspace /var/lib/jenkins/slave/workspace/raring_grizzly_keystone_trunk
Checkout: raring_grizzly_keystone_trunk / /var/lib/jenkins/slave/workspace/raring_grizzly_keystone_trunk - hudson.remoting.Channel@2c824005:pkg-builder
Using strategy: Default
Last Built Revision: Revision d5dcb3babf57c6393c15f0ffc911fc7a833d58bd (origin/milestone-proposed)
Checkout: keystone / /var/lib/jenkins/slave/workspace/raring_grizzly_keystone_trunk/keystone - hudson.remoting.LocalChannel@18da3e94
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from https://github.com/openstack/keystone.git
ERROR: Couldn't find any revision to build. Verify the repository and branch configuration for this job.
Email was triggered for: Failure
Sending email for trigger: Failure